Search Results

Search found 7881 results on 316 pages for 'snmp dev'.


  • Should tripwire be entering /proc?

    - by dsadinoff
    When initializing the db with tripwire --init it spat out a bunch of errors pertaining to /proc: ### Warning: File system error. ### Filename: /proc/16982/fd/4 ### No such file or directory ### Continuing... ### Warning: File system error. ### Filename: /proc/16982/fdinfo/4 ### No such file or directory ### Continuing... ### Warning: File system error. ### Filename: /proc/16982/task/16982/fd/4 ### No such file or directory ### Continuing... ### Warning: File system error. ### Filename: /proc/16982/task/16982/fdinfo/4 ### No such file or directory ### Continuing... ### Warning: Duplicate object encountered. ### /proc/sys/net/ipv6/neigh This feels like noise. The twpol.txt file has the following clause: # # Critical devices # ( rulename = "Devices & Kernel information", severity = $(SIG_HI), ) { /dev -> $(Device) ; /proc -> $(Device) ; } Which, if I understand it right, is going to cause tripwire to care deeply about the entire contents of /proc. Shouldn't it just care about the static parts of /proc like the drivers and such, and not the per-pid stuff? Why does it ship like this?
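    By default the stock policy does watch all of /proc recursively, as the clause above shows. A minimal sketch of how the rule could be narrowed so tripwire inventories /proc itself without descending into the volatile per-PID entries -- the recurse attribute is standard tripwire policy syntax, but which static subtrees are worth re-adding is an assumption:

        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev          -> $(Device) ;
          # Watch the /proc directory itself, but don't recurse into
          # the per-PID entries that churn constantly:
          /proc         -> $(Device) (recurse = false) ;
          # Hypothetical picks: re-add the static subtrees you care about.
          /proc/sys     -> $(Device) ;
          /proc/devices -> $(Device) ;
        }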

  • GIT : I keep having to merge my new branch

    - by mnml
    Hi, I have created a new branch and I'm working on it with other devs, but for some reason when I want to push my new commits I always have to git merge origin/mynewbranch first. Otherwise I'm getting some errors: ! [rejected] mynewbranch -> mynewbranch (non-fast-forward) error: failed to push some refs to '[email protected]/repo.git' To prevent you from losing history, non-fast-forward updates were rejected Merge the remote changes before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details. You asked me to pull without telling me which branch you want to merge with, and 'branch.mynewbranch.merge' in your configuration file does not tell me, either. Please specify which branch you want to use on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details. If you often merge with the same branch, you may want to use something like the following in your configuration file: [branch "mynewbranch"] remote = <nickname> merge = <remote-ref> [remote "<nickname>"] url = <url> fetch = <refspec> See git-config(1) for details. Why is it not automatic? Thanks
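    The error text itself points at the fix: the branch has no upstream configured, so git pull/push don't know what to merge against. A minimal sketch, assuming the remote is named origin:

        # One-time setup: tell git which remote branch mynewbranch tracks.
        git config branch.mynewbranch.remote origin
        git config branch.mynewbranch.merge refs/heads/mynewbranch

        # After that the usual cycle needs no explicit merge:
        git pull    # fetches and merges origin/mynewbranch
        git push

    On newer Git versions a single git push -u origin mynewbranch sets both values in one step.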

  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys, or learning to rely on a bunch of plugins. I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want it bad enough to do whatever. I regularly view or edit tab (or comma, but that would be a bonus) delimited data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: The BIND zone files are ~40+,6,2,5,15+. So, even though I could view them on a single screen, if I set ts=40, I cannot. I have been searching for a "dynamic tab size" solution for years, but no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my work station and open it in Open Office. There has to be a better way.
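    Stock vim has no per-column "dynamic" tabstop, but one trick that stays plugin-free is filtering the buffer through column(1), which pads every field to its widest entry. A sketch (note the first form rewrites the buffer, so press u to undo rather than saving):

        " Align the whole buffer for viewing, then undo with u:
        :%!column -t

        " Non-destructive: pipe the buffer out to column without editing it:
        :w !column -t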

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11 Our previous IT guy set up iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found in /etc/rc.local he was doing this: # We have multiple ISP connections on our network. # The following is about 50+ rules to route incoming and outgoing # information. For example, certain internal hosts are specified here # to use ISP A connection while everyone else on the network uses # ISP B connection when accessing the internet. ip rule add from 99.99.99.99 table Whatever_0 ip rule add from 99.99.99.98 table Whatever_0 ip rule add from 99.99.99.97 table Whatever_0 ip rule add from 99.99.99.96 table Whatever_0 ip rule add from 99.99.99.95 table Whatever_0 ip rule add from 192.168.1.103 table ISB_A ip rule add from 192.168.1.105 table ISB_A ip route add 192.168.0.0/24 dev eth0 table ISB_B # etc... and then near the end of the file, AFTER all the ip rules he just declared, he has this: /root/fw/firewall-rules.fw He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions: Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Any advantage or necessity to this? Or is this just a poorly organized way to implement firewall rules? Why is he declaring ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that was the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense? Why is he executing all this stuff at startup in rc.local instead of just using iptables-save to keep the firewall settings at /etc/sysconfig/iptables that will get implemented at runtime?
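    One likely reason for the split, stated as an assumption about this setup: ip rule / ip route manipulate the kernel's policy-routing tables, which are a different subsystem from the netfilter rules that fwbuilder and iptables-save deal with. That would also answer the last question, since iptables-save captures none of it:

        # Captures only netfilter (filter/nat/mangle) state:
        iptables-save > /etc/sysconfig/iptables

        # Records nothing about these, which is why they live in rc.local:
        ip rule show                 # policy-routing rules
        ip route show table ISB_A    # per-ISP routing table from the example

    And since most fwbuilder-generated scripts flush iptables chains rather than routing tables, running the script after the ip rules is probably harmless, even if declaring them first looks odd.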

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder in Lubuntu over a network that's on an external NTFS drive. Due to the system that I have (rotating backup disks) this is probably the second time that the drive would have been mounted. It's manually mounted with a simple (for example) mount /dev/sdb1 /media/BACKUP On an internal NTFS disk I have successfully set up a network share and can access it. However on the external disk I can't from any other Windows computer. When setting up the share Nautilus said that it needs to change the others' permissions to allow for other users to write. However afterwards it's still blank. Changing it to Read and Write just changes back to blank. Chowning the entire /media folder recursively and trying didn't work. Running PCManFM as root and changing didn't work. Adding "public=yes" to smb.conf and restarting didn't work. I'm out of ideas on what to do. What's weird is that it worked just fine on an internal NTFS disk, so why not the external one? Any solutions need to be manageable inside of a gui (preferably Nautilus) as the person managing the machine isn't as tech savvy. Thanks
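    One hedged explanation: ntfs-3g ignores chmod/chown unless a permission mapping is configured, so ownership and mode are fixed at mount time, which would be why Nautilus's changes come back blank. A sketch granting everyone read/write at mount time (the uid/gid values are assumptions; use the sharing user's):

        mount -t ntfs-3g -o uid=1000,gid=1000,umask=000 /dev/sdb1 /media/BACKUP

        # or the /etc/fstab equivalent:
        /dev/sdb1  /media/BACKUP  ntfs-3g  uid=1000,gid=1000,umask=000  0  0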

  • Zsh super slow inside my Git repo

    - by Jason Swett
    My Zsh is super slow inside a certain Git repo of mine. When I Google "zsh git slow", I get a bunch of results about Git autocompletion being slow, but autocompletion isn't necessarily my problem; it's everything. I tried removing all plugins and that, strangely, didn't do anything at all when I opened a new shell. Zsh would still do Git stuff inside my Git repo. I found this snippet on this page: function git_prompt_info() { ref=$(git symbolic-ref HEAD 2> /dev/null) || return echo "$ZSH_THEME_GIT_PROMPT_PREFIX${ref#refs/heads/}$ZSH_THEME_GIT_PROMPT_SUFFIX" } That made everything fast again, but it also gave me a prompt that looks like this: ➜ snip git:(master Note the missing right parenthesis. That's kind of lame. Plus the whole thing just seems like a hack I shouldn't have to do. There's also this promising-looking SU question, but the links on the accepted answer are dead. How can I get my Zsh not to be slow inside a Git repo?
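    For what it's worth, in oh-my-zsh style themes the closing parenthesis normally comes from $ZSH_THEME_GIT_PROMPT_CLEAN or $ZSH_THEME_GIT_PROMPT_DIRTY, which the stock function appends after running the dirty-status check -- the slow part in a big repo. A sketch that keeps the snippet's speed but restores the parenthesis, assuming such a theme:

        function git_prompt_info() {
          ref=$(git symbolic-ref HEAD 2> /dev/null) || return
          # Skip the expensive dirty check, but still emit the "clean"
          # marker so the prompt's parenthesis closes:
          echo "$ZSH_THEME_GIT_PROMPT_PREFIX${ref#refs/heads/}$ZSH_THEME_GIT_PROMPT_CLEAN$ZSH_THEME_GIT_PROMPT_SUFFIX"
        }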

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32GB RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it: dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key] => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify I tried several things, including changing the current product key to the MSDN one. Eventually I used a KMS generic key which can be found in several technet forum posts. dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key] ... and this appeared to work. I then changed the product key again (using the control panel) to the MSDN key, thinking that was the end of the matter. Only later when I tried to start up VMs did I realise I only had 4GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only later when I saw this... http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b ...did I make the connection and reapply the KMS Generic key - which gave me all the RAM back. But now I have a system that isn't properly licensed, presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4GB RAM is usable. Is there a way round this without a) rebuilding the server from scratch with the MSDN key from the start or b) buying a retail Enterprise license?
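    For reference, a sketch of the sequence described above, with slmgr doing the final re-key instead of the control panel -- whether a Standard-keyed MSDN key can ever activate the Enterprise edition is exactly the open question here, so treat this as the mechanics only:

        rem See what the current edition is and what it can become:
        dism /online /Get-CurrentEdition
        dism /online /Get-TargetEditions

        rem Edition change using the generic key, as in the question:
        dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key]

        rem Then install and activate the real key:
        slmgr.vbs /ipk [MSDN key]
        slmgr.vbs /ato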

  • Making "default saved" work with GRUB2...?

    - by baltusaj
    I just installed Moblin Operating System. It's using GRUB2. On my Ubuntu 8.04, GRUB 0.97 was being used, in which I was using the default saved option comfortably. I found that with GRUB2 I should not edit /boot/grub/menu.lst directly, but I did :) because my Moblin does not contain any /etc/default/grub where they say I should do the modification I want. So what I did is the following, which did not work: default=saved timeout=1 #splashimage=(hd0,0)/boot/grub/splash.xpm.gz #hiddenmenu #silent title Moblin (2.6.31.5-10.1.moblin2-netbook) root (hd0,0) kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current savedefault=1 title Pathetic Windows rootnoverify (hd0,1) chainloader +1 savedefault=0 By doing so I should automatically switch between Moblin and Windows at each boot, but it's not working. Almost all the troubleshooters on the internet are saying that I should enable the DEFAULT=save option in /etc/default/grub but I am unable to find this file. Any idea what else I should do? Thanks a lot Update: I used the equals sign because by default my menu.lst had an entry as default=0. However, default 0 is also working fine. Moreover, the menu.lst I have is actually a symbolic link to ./grub.conf. I have also noticed that the grub-install and grub-set-default commands are not working.
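    Worth noting: the menu.lst shown (title/root/kernel stanzas) is GRUB legacy (0.97) syntax, not GRUB2, whatever the install claims. In legacy GRUB the directives take no equals sign and savedefault is a bare command, so a sketch under that assumption looks like:

        default saved
        timeout 1

        title Moblin (2.6.31.5-10.1.moblin2-netbook)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.31.5-10.1.moblin2-netbook ro root=/dev/sda1 vga=current
        savedefault

        title Pathetic Windows
        rootnoverify (hd0,1)
        chainloader +1
        savedefault

    Plain savedefault records whichever entry was last booted, and default saved boots it next time. If grub-set-default complains, it may be because /boot/grub/default doesn't exist yet; running grub-set-default 0 once should create it.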

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP-address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc, so I can keep my sanity. I guess that the actual IP-address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC-addresses? Do you just have to hardcode a large table of MAC-addresses to IP-addresses, and then provision new guests always with the correct MAC-address on the virtual ethernet adapter? What we currently do is have an image of an "app server" that we boot up a new instance of, and then finalize it (with a script) that (among other things) modifies the /etc/networking/interface file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
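    The usual pattern is indeed the middle road: pin a known MAC on each guest's virtual NIC and let a DHCP server hand out a static lease for that MAC, so the table lives in one config file rather than inside each guest. A sketch, with dnsmasq as an assumed DHCP server and Xen's reserved 00:16:3e MAC prefix:

        # xm-style guest config: pin the MAC on the vif
        vif = [ 'mac=00:16:3e:00:00:02,bridge=xenbr0' ]

        # Citrix XenServer equivalent when creating the VIF:
        xe vif-create vm-uuid=<vm-uuid> network-uuid=<net-uuid> device=0 mac=00:16:3e:00:00:02

        # dnsmasq.conf: map that MAC to the wanted address
        dhcp-host=00:16:3e:00:00:02,10.20.0.2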

  • How to access a port via OpenVpn only

    - by Andy M
    I've set up an openvpn server alongside an apache website that can only be accessed on port 8100 on the same machine. My /etc/openvpn/server.conf file looks like this: port 1194 proto tcp dev tun ca ./easy-rsa2/keys/ca.crt cert ./easy-rsa2/keys/server.crt key ./easy-rsa2/keys/server.key # This file should be kept secret dh ./easy-rsa2/keys/dh1024.pem # Diffie-Hellman parameter server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt # make sure clients can still connect to the internet push "redirect-gateway def1 bypass-dhcp" keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 Now I tried to let only clients connected to the vpn network access the website on apache via port 8100. So I defined a few iptables rules: #!/bin/sh # My system IP/set ip address of server SERVER_IP="192.168.0.2" # Flushing all rules iptables -F iptables -X # Setting default filter policy iptables -P INPUT DROP iptables -P OUTPUT DROP iptables -P FORWARD DROP # Allow incoming access to port 8100 from OpenVPN 10.8.0.1 iptables -A INPUT -i tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -o tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # outgoing http iptables -A OUTPUT -o tun0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A INPUT -i tun0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT Now when I connect to the server from my client computer and try to access the website on 192.168.0.2:8100, my browser can't open it. Will I have to forward traffic from tun0 to eth0? Or is there anything else I'm missing?
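    Two things stand out, hedged as a guess from the config shown: the filter rules reference port 80 while apache listens on 8100, and with the INPUT policy set to DROP nothing accepts the OpenVPN handshake itself, which arrives on the real NIC rather than on tun0. A sketch of the missing pieces, assuming eth0 is the public interface:

        # Let the OpenVPN control channel in (TCP 1194 on eth0):
        iptables -A INPUT  -i eth0 -p tcp --dport 1194 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 1194 -m state --state ESTABLISHED -j ACCEPT

        # Let VPN clients reach apache on the port it actually uses:
        iptables -A INPUT  -i tun0 -p tcp --dport 8100 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o tun0 -p tcp --sport 8100 -m state --state ESTABLISHED -j ACCEPT

    Browsing to the server's VPN address (10.8.0.1:8100) then keeps the traffic on tun0, so no forwarding from tun0 to eth0 is needed for this case.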

  • Windows recovery partition with GRUB2

    - by Actorclavilis
    So I recently got a new Toshiba laptop and installed Ubuntu 12.04 on it. Since it is a "Windows 7 Enabled" machine or some other proprietary nonsense like that, a few hardware features are designed only to work with W7. Eventually I found a way to enable these hardware functions by booting into the W7 recovery disc; however, they sporadically stop working. I'm moderately surprised that I was able to get anything to work at all, so I don't especially want to spend more time fixing the problems in a different fashion. Now I don't actually own the recovery disc; it's my father's. Since it's a pain to have to go asking for the disc every time the features stop working, I made an image of the disc and was hoping to make a 'recovery' partition like some computers have. However, unetbootin and GRUB2 both want a kernel and initrd to point to on startup, and something like set root=(hd0,1) loopback lo /w7r.iso set root=(lo) chainloader +1 in the spirit of the makeactive / chainloader +1 commands that I used to use to dual-boot Linux and Windows simply gives me a file-not-found error. My question, therefore, is: Is it possible, having written a Windows iso to a partition (such as with dd if=w7r.iso of=/dev/sda4), to convince GRUB2 to boot from it? Thanks in advance.
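    Two details may explain the file-not-found, offered as guesses: GRUB2 counts partitions from 1 (so /dev/sda4 is (hd0,4), not (hd0,1)), and after dd'ing the ISO raw to the partition there is no /w7r.iso file anywhere -- the partition itself is the ISO9660 filesystem. A sketch of both readings:

        # If the ISO were a *file* on some partition, loopback wants:
        loopback loop (hd0,1)/w7r.iso
        set root=(loop)

        # With the ISO dd'd raw to /dev/sda4, address the partition directly:
        set root=(hd0,4)
        ls /        # should list the disc's contents if GRUB can read it

    Even then, chainloader +1 is unlikely to work, since an ISO has no MBR-style boot sector; Windows media generally has to be restored to a real partition (or USB stick) to boot.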

  • Heartbeat/DRBD failover didn't work as expected. How do I make the failover more robust?

    - by Quinn Murphy
    I had a scenario where a DRBD-heartbeat setup had a failed node but did not fail over. What happened was the primary node had locked up, but didn't go down directly (it was inaccessible via ssh or with the nfs mount, but it could be pinged). The desired behavior would have been to detect this and fail over to the secondary node, but it appears that since the primary didn't go full down (there is a dedicated network connection from server to server), heartbeat's detection mechanism didn't pick up on that and therefore didn't fail over. Has anyone seen this? Is there something that I need to configure to have more robust cluster failover? DRBD seems to otherwise work fine (had to resync when I rebooted the old primary), but without good failover, its use is limited. heartbeat 3.0.4 drbd84 RHEL 6.1 We are not using Pacemaker nfs03 is the primary server in this setup, and nfs01 is the secondary. ha.cf # Heartbeat Logging logfacility daemon udpport 694 ucast eth0 192.168.10.47 ucast eth0 192.168.10.42 # Cluster members node nfs01.openair.com node nfs03.openair.com # Heartbeat communication timing. # Sets the triggers and pulse time for swapping over. keepalive 1 warntime 10 deadtime 30 initdead 120 #fail back automatically auto_failback on and here is the haresources file: nfs03.openair.com IPaddr::192.168.10.50/255.255.255.0/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs nfslock
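    This matches how heartbeat behaves without a resource manager: it only notices a node that stops sending heartbeats, and a wedged-but-alive kernel keeps sending them. Two common mitigations, sketched as options rather than a known fix for this incident: a watchdog, so a node whose heartbeat daemon stops being scheduled reboots itself, and, more completely, Pacemaker with resource-level monitor operations so a dead NFS export triggers failover. The watchdog piece in ha.cf:

        # /etc/ha.d/ha.cf -- self-reboot if heartbeat stops running
        # (requires the softdog kernel module, loaded at boot):
        watchdog /dev/watchdog

        # load the module:
        modprobe softdog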

  • Puppet variables best practice, generalise or specialise?

    - by Andrei Serdeliuc
    I'm trying to figure out which things should be in git within the puppet manifest and which should be in env vars like FACTER_my_var and use that in the manifest instead. Scenario: you are deploying 3 php apps and you've already built all the layers up to the app in other manifests (base system, php extensions, users, etc), and all that's left is installing the correct app (from an apt repo) and creating a vhost. I'm tempted to have something along the lines of: apache::vhost { $::project_hostname: priority => '10', port => '80', docroot => $::project_document_root, logroot => "/var/log/apache2/${$::project_name}", serveradmin => '[email protected]', require => Package[httpd], ssl => false, override => 'all', setenv => ["APP_KERNEL dev"] } This would run on each server, and the FACTER_project_* vars would be set on a per server basis. An obvious restriction of this would be that you can't run more than one app with this specific example. Or would you rather have project_x.pp, project_y.pp which have hardcoded paths and names?
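    A middle road between global facts and hardcoded project_x.pp files is to keep the per-app data in git as parameters to a define, one resource per app, which also lifts the one-app-per-server restriction. A sketch with hypothetical names:

        define company::project ($vhost_hostname, $docroot) {
          apache::vhost { $vhost_hostname:
            priority    => '10',
            port        => '80',
            docroot     => $docroot,
            logroot     => "/var/log/apache2/${name}",
            serveradmin => '[email protected]',
            require     => Package[httpd],
            ssl         => false,
            override    => 'all',
            setenv      => ["APP_KERNEL dev"],
          }
        }

        company::project { 'project_x':
          vhost_hostname => 'x.example.com',
          docroot        => '/var/www/project_x',
        }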

  • After connecting wlan0 to bridge interface (and then removing it), can't connect to AP

    - by gmonk
    I'm on a laptop running Debian Jessie with kernel 3.13-1-amd64; lspci shows that my wireless NIC + driver is

        04:00.0 Network controller: Intel Corporation Wireless 3160 (rev 83)
        Subsystem: Intel Corporation Dual Band Wireless-AC 3160
        Kernel driver in use: iwlwifi

    This has been working without any problems, until I tried creating a bridge for lxc containers to use. I did the same thing as this person here: How-to set up a network bridge on a laptop for LXC use? -- and ended up having the same problem as this poster did, so I decided to "undo" my actions. This hasn't been successful. Actions taken so far. To configure the bridge:

        #> ip link add type veth
        #> iw dev wlan0 set 4addr on
        #> ifconfig veth0 up
        #> brctl addbr br0
        #> brctl addif br0 wlan0
        #> brctl addif br0 veth0
        #> ifconfig br0 192.168.0.4/24
        #> ifconfig wlan0 0.0.0.0

    To "deconfigure":

        #> brctl delif br0 wlan0
        #> brctl delif br0 veth0
        #> iw dev wlan0 set 4addr off
        #> ifconfig veth0 down
        #> ifconfig wlan0 down
        #> ifconfig br0 down
        #> brctl delbr br0

    Now, dmesg and /var/log/syslog show repeated attempts at connecting to the AP that was working before, which fail after authentication:

        May 27 09:16:01 myhostname kernel: [11350.757172] wlan0: authenticate with 00:18:f8:54:a3:d6
        May 27 09:16:01 myhostname kernel: [11350.759036] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:01 myhostname kernel: [11350.762615] wlan0: authenticated
        May 27 09:16:01 myhostname kernel: [11350.762753] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        May 27 09:16:01 myhostname kernel: [11350.762755] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        May 27 09:16:01 myhostname kernel: [11350.765080] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating
        May 27 09:16:01 myhostname kernel: [11350.767474] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0)
        May 27 09:16:01 myhostname kernel: [11350.767476] wlan0: 00:18:f8:54:a3:d6 denied association (code=12)
        May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12
        May 27 09:16:01 myhostname kernel: [11350.788475] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3)
        May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected
        May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        May 27 09:16:02 myhostname dhclient: DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 14
        May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:04 myhostname kernel: [11354.559579] wlan0: authenticate with 00:18:f8:54:a3:d6
        May 27 09:16:04 myhostname kernel: [11354.561458] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> associating
        May 27 09:16:04 myhostname kernel: [11354.563445] wlan0: authenticated
        May 27 09:16:04 myhostname kernel: [11354.563631] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        May 27 09:16:04 myhostname kernel: [11354.563633] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        May 27 09:16:04 myhostname kernel: [11354.565727] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6
        May 27 09:16:04 myhostname kernel: [11354.568091] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9)
        May 27 09:16:04 myhostname kernel: [11354.569030] wlan0: associated
        May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated
        May 27 09:16:05 myhostname kernel: [11354.978204] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15)
        May 27 09:16:05 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15
        May 27 09:16:05 myhostname kernel: [11354.992729] cfg80211: Calling CRDA to update world regulatory domain
        May 27 09:16:05 myhostname kernel: [11354.995004] cfg80211: World regulatory domain updated:
        May 27 09:16:05 myhostname kernel: [11354.995005] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        May 27 09:16:05 myhostname kernel: [11354.995006] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm)
        May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm)
        May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm)
        May 27 09:16:05 myhostname kernel: [11354.995008] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm)
        May 27 09:16:05 myhostname kernel: [11354.995009] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm)
        May 27 09:16:05 myhostname kernel: [11354.995010] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm)
        May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> disconnected
        May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:09 myhostname kernel: [11358.763968] wlan0: authenticate with 00:18:f8:54:a3:d6
        May 27 09:16:09 myhostname kernel: [11358.765796] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:09 myhostname kernel: [11358.769957] wlan0: authenticated
        May 27 09:16:09 myhostname kernel: [11358.770102] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        May 27 09:16:09 myhostname kernel: [11358.770104] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        May 27 09:16:09 myhostname kernel: [11358.770846] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:09 myhostname kernel: [11358.773358] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0)
        May 27 09:16:09 myhostname kernel: [11358.773361] wlan0: 00:18:f8:54:a3:d6 denied association (code=12)
        May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating
        May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12
        May 27 09:16:09 myhostname kernel: [11358.802187] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3)
        May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected
        May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:12 myhostname kernel: [11362.573442] wlan0: authenticate with 00:18:f8:54:a3:d6
        May 27 09:16:12 myhostname kernel: [11362.575270] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating
        May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz)
        May 27 09:16:12 myhostname kernel: [11362.580334] wlan0: authenticated
        May 27 09:16:12 myhostname kernel: [11362.580503] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
        May 27 09:16:12 myhostname kernel: [11362.580516] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
        May 27 09:16:12 myhostname kernel: [11362.583508] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3)
        May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating
        May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6
        May 27 09:16:12 myhostname kernel: [11362.585908] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9)
        May 27 09:16:12 myhostname kernel: [11362.586781] wlan0: associated
        May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated
        May 27 09:16:13 myhostname kernel: [11362.947693] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15)
        May 27 09:16:13 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15
        May 27 09:16:13 myhostname kernel: [11362.973461] cfg80211: Calling CRDA to update world regulatory domain
        May 27 09:16:13 myhostname kernel: [11362.975673] cfg80211: World regulatory domain updated:
        May 27 09:16:13 myhostname kernel: [11362.975675] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        May 27 09:16:13 myhostname kernel: [11362.975676] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm)
        May 27 09:16:13 myhostname kernel: [11362.975677] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm)
        May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm)
        May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm)
        May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm)
        May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm)
        May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> disconnected
        May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning
        May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0/wireless): association took too long.
        May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: config -> failed (reason 'no-secrets') [50 120 7]
        May 27 09:16:14 myhostname NetworkManager[13992]: <info> Marking connection 'Auto myaccesspoint' invalid.
        May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0) failed for connection 'Auto myaccesspoint'
        May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0]
        May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): deactivating device (reason 'none') [0]
        May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> disconnected

    The things that jump out at me are "deauthenticating ... by local choice (reason=3)" and the lines that contain "(reason=15)". I've tried various fixes:

        iwconfig wlan0 power off
        killing wpa_supplicant
        connecting with iwconfig + dhclient instead of gnome's network-manager
        explicitly configuring wlan0 in /etc/network/interfaces
        creating a /etc/wpa_supplicant.conf file

    ...but nothing seems to work. I'm not sure what I did wrong, or what step I've skipped in trying to get wlan0 back as a non-bridged device -- I removed it from the bridge and then deleted the bridge itself. Any ideas?
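    Since status=12 in those AssocResp lines means the AP itself is refusing the association, the leftover state is probably on the driver side of the 4addr/bridge experiment, plus NetworkManager has invalidated the saved connection (the 'no-secrets' line). A hedged reset sequence -- the module names assume the 3160 is driven by iwlmvm/iwlwifi on this kernel, which lsmod should confirm:

        service network-manager stop
        iw dev wlan0 set 4addr off     # make sure 4addr is really off
        ip link set wlan0 down
        modprobe -r iwlmvm iwlwifi     # unload the wireless driver...
        modprobe iwlwifi               # ...and load it fresh
        service network-manager start

    Then delete the saved 'Auto myaccesspoint' connection in NetworkManager and reconnect, so it stores the passphrase again. If the AP still answers status 12, power-cycling the AP is worth a try, since it may be holding its own stale view of the client.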

  • Joining new DC to AD - DNS name does not exist

    - by Andrew Connell
    I had a DC fail on me recently and I'm trying to add a new one to my domain, although I'm sensing I might have other issues in my domain. I'm a dev at heart and know just enough about AD to be dangerous, so I'm looking for some assistance. My working DC is RIVERCITY-DC12. I'm trying to promote RIVERCITY-DC14 as a DC to the RIVERCITY domain, but when I run DCPROMO, at the NETWORK CREDENTIALS step where I point to the name of the domain (rivercity.local), I get "An AD DC for the domain rivercity.local cannot be contacted" and in the details see "The error was DNS name does not exist" Looking at RIVERCITY-DC12, I can see DNS is working, I've been able to query it from other machines in my domain, and no errors are reported in the DNS category within the Event Viewer. When I checked the FSMO roles, it shows RIVERCITY-DC12 is the machine for all listed roles. Not sure what I should do next or how to troubleshoot/investigate after searching around for a solution... ideas? Environment: Domain: rivercity (rivercity.local) Forest functional level: Windows 2000 (I'm more than happy to raise this) Windows Server 2008 All servers are Windows Server 2008 R2 SP1 (fully patched)
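    DCPROMO finds a DC through DNS SRV lookups, so the first thing to check is that RIVERCITY-DC14 is using DC12's DNS server (not an ISP resolver) and that the SRV records exist. A sketch; the DC12 address shown is a placeholder:

        rem On RIVERCITY-DC14: point DNS at the working DC first.
        netsh interface ip set dns "Local Area Connection" static 192.168.1.10

        rem This SRV lookup should return RIVERCITY-DC12:
        nslookup -type=SRV _ldap._tcp.dc._msdcs.rivercity.local

        rem On RIVERCITY-DC12: verify DNS health and re-register records.
        dcdiag /test:dns
        ipconfig /registerdns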

  • Mounted HDD not having enough permissions from Apache/PHP

    - by Dan
    Piwigo gallery, on apache and php, CentOS 6. The root system is a RAID 128GB. /var/www/html is on the root file system. Mounted the 320GB hdd to /var/www/html/320 using defaults, it's an ext4 fs. Put a symlink to it in /var/www/html/galleries which is read by the gallery script so I can upload images to there, then click sync. It gives me the error: [./galleries/] PWG-ERROR-NO-FS (File/directory read error) PWG-ERROR-NO-FS: The file or directory cannot be accessed (either it does not exist or the access is denied) chmod 777 set on /dev/sdb1, /var/www/html, and /var/www/html/320 as well as the symlink galleries too. All recursive. chown apache:apache to everything too. PHP just can't read/write to it. I tried with and without the symlink, I've tried everything I can think of. Nothing. Any ideas how I can give apache/php permission to read/write to this drive? With 777 permissions all around it should already be able to.
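    On CentOS 6 the missing piece is usually SELinux rather than the rwx bits: the freshly mounted ext4 tree won't carry the httpd_sys_content_t label the rest of /var/www/html has, so apache is denied regardless of 777. A sketch (httpd_sys_rw_content_t because the gallery needs to write):

        # Confirm the diagnosis first:
        getenforce
        ls -Zd /var/www/html/320

        # Label the mounted tree so apache/php can read and write it:
        semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/320(/.*)?"
        restorecon -Rv /var/www/html/320

    A quick setenforce 0 test (then back to 1) will tell you whether SELinux is really the blocker before relabeling.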

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory; of course I need to keep it, so I added --protect 'lost+found'. The problem is that tmpreaper outputs: error: chdir() to directory 'lost+found' (inode 11) failed: Permission denied (PID 30604) Back from recursing down `lost+found'. Entry matching `--protect' pattern skipped. `lost+found' I have tried with other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges, as I own all files (except lost+found). Am I forced to run tmpreaper as root? Or are my shell skills not as good as I thought? I guess the problem is: tmpreaper will chdir(2) into each of the directories you've specified for cleanup, and check for files matching the <shell_pattern> there. It then builds a list of them, and uses that to protect them from removal. Any thoughts and/or advice? Edit: The command I'm trying to run is something like $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null error: chdir() to directory `lost+found' (inode 11) failed: Permission denied Edit 2: I read the source code for tmpreaper-1.6.13 and found this if (safe_chdir (dirname)) exit(1); and if (chdir (dirname)) { message (LOG_ERROR, "chdir() to directory `%s' (inode %lu) failed: %s\n", dirname, (u_long) sb1.st_ino, strerror (errno)); return 1; } So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see three options left: Run tmpreaper as root Move the download directory Find an alternative tool (tmpwatch?) I will give it some more research before I make a choice.
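    Given that source reading, a find-based stand-in avoids root entirely. One gotcha baked into the sketch: GNU find's -delete implies -depth, which silently disables -prune, so the exclusion has to be done with -not -path instead:

        # Remove regular files older than 30 days, leaving lost+found alone:
        find /mydir -mindepth 1 -not -path '/mydir/lost+found*' \
             -type f -mtime +30 -delete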

  • Small office network setups

    - by user39822
    I work at a small office and we're overhauling our network setup there. We're a web dev company and at the moment we have 50+ production sites running on the same machine that runs our internal email, which is just plain stupid. We're moving all our client hosting off site and are now looking for something to run our internal office requirements. Below is a brain dump: Equal numbers of Macs & PCs, about 25 machines in total. We need a central "server" to host files that should be accessible to everyone as a "network drive". If possible we'd like to use low cost hardware for this (Mac or Win based). Disk space should be upward of 1TB. Ideally we should also be able to run a small web server on this machine (LAMP stack) to run some planning and billing applications we wrote ourselves. We need some sort of MS Exchange alternative for things like a shared calendar and especially being able to set Out of Office replies. We have one printer that is connected to the network. The setup should preferably be something that can be managed easily via a graphical interface and NOT require command line skills. Users want to keep using Apple Mail or MS Outlook. After a quick google I came across the Zimbra collaboration suite; can anyone recommend this or any other solution for our office?

  • using gmail as email relay for sendmail

    - by Nikita
    I used to be able to send emails using a gmail account & sendmail configured using one of the guides on the Internet, for example: http://appgirl.net/blog/configuring-sendmail-to-relay-through-gmail-smtp/ This is a small server and I've recently moved it to a different house. And sendmail has stopped working. The only thing different in the network setup is a new router. What is happening: In the log files, I see the following error: ...stat=Deferred: smtp.gmail.com: No route to host When I run from the command line: strace sendmail -f A -t B -u "Subject" -m "Message" -tls=yes ssl=yes -s smtp.gmail.com:587 -xu A -xp XYZ It hangs on this call: recvfrom(3, "m0\201\203\0\1\0\0\0\0\0\0\4ares\3lan\0\0\34\0\1", 8192, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("192.168.1.254")}, [16]) = 26 close(3) = 0 time(NULL) = 1339997943 open("/etc/localtime", O_RDONLY) = 3 fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0 fstat64(3, {st_mode=S_IFREG|0644, st_size=3477, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ff000 read(3, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 3477 _llseek(3, -24, [3453], SEEK_CUR) = 0 read(3, "\nEST5EDT,M3.2.0,M11.1.0\n", 4096) = 24 close(3) = 0 munmap(0xb76ff000, 4096) = 0 socket(PF_FILE, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3 connect(3, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0 send(3, "<18>Jun 18 01:39:03 sendmail[268"..., 96, MSG_NOSIGNAL) = 96 nanosleep({60, 0}, So it looks like at some point it tries to resolve the DNS name, but I don't have anything running on 53, so it dies out and then just hangs. The other interesting thing is that msmtp works just fine on the same server. Update: ares in the strace output is actually the name of my server, but the .254 IP address is the address of the router. Could anyone tell me why this is happening or what further steps I can take to investigate the issue? Thanks!
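    The strace shows the resolver asking the new router (192.168.1.254) and the delivery later failing with "No route to host", so checking name resolution and reachability through that router is the obvious first step. A sketch of quick tests:

        # Does DNS through the router answer?
        nslookup smtp.gmail.com 192.168.1.254

        # Is the submission port reachable at all?
        telnet smtp.gmail.com 587

        # If resolution is the problem, test with a public resolver
        # (temporary diagnostic only):
        echo "nameserver 8.8.8.8" >> /etc/resolv.conf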

  • CentOS 5 VPN Server won't work

    - by Miro Markarian
    I have a CentOS 5 server configured to be both a L2TP server and a PPTP server + a radius server for hosting the AAA. My problem is that the L2TP works great and I can connect to it, but I can't connect to PPTP, and every time it ends up with error #619 when it gets to the verifying username and password section. Here is the log I got from /var/log/messages Dec 17 07:40:02 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection started Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Starting call (launching pppd, opening GRE) Dec 17 07:40:03 serverdl pppd[8571]: Plugin radius.so loaded. Dec 17 07:40:03 serverdl pppd[8571]: RADIUS plugin initialized. Dec 17 07:40:03 serverdl pppd[8571]: Plugin radattr.so loaded. Dec 17 07:40:03 serverdl pppd[8571]: RADATTR plugin initialized. Dec 17 07:40:03 serverdl pppd[8571]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Dec 17 07:40:03 serverdl pppd[8571]: pptpd-logwtmp: $Version$ Dec 17 07:40:03 serverdl pppd[8571]: pppd 2.4.4 started by root, uid 0 Dec 17 07:40:03 serverdl pppd[8571]: Using interface ppp0 Dec 17 07:40:03 serverdl pppd[8571]: Connect: ppp0 <--> /dev/pts/2 Dec 17 07:40:03 serverdl pptpd[8570]: GRE: read(fd=7,buffer=80515e0,len=8260) from network failed: status = -1 error = Protocol not available Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: GRE read or PTY write failed (gre,pty)=(7,6) Dec 17 07:40:03 serverdl pppd[8571]: Modem hangup Dec 17 07:40:03 serverdl pppd[8571]: Connection terminated. Dec 17 07:40:03 serverdl pppd[8571]: Exit. Dec 17 07:40:03 serverdl pptpd[8570]: CTRL: Client 5.52.247.62 control connection finished Just yesterday, when I hadn't set up L2TP yet, PPTP was working great, but then I uninstalled it and removed all its config from /etc/*, installed L2TP first, and then installed PPTP after it. And then it stopped working. I believe it must be a radiusclient issue, because both the PPTP and L2TP services use radius to authenticate. And another thing I think must be the issue: when assigning IPs to the PPP interfaces, I have done the following config. Is that right? For L2TP: localip 10.10.10.1 remoteip 10.10.10.2-254 For PPTP: localip 10.10.9.1 remoteip 10.10.9.2-254
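    That "GRE: read ... failed: ... Protocol not available" line points at the data channel rather than radius: PPTP needs GRE (IP protocol 47) alongside TCP/1723, and on kernels with connection tracking it needs the PPTP helper modules. A sketch, with module names as they exist on EL5 kernels:

        # Firewall: allow PPTP control and GRE data:
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p gre -j ACCEPT

        # EL5-era conntrack/NAT helpers for PPTP:
        modprobe ip_conntrack_pptp
        modprobe ip_nat_pptp

    Since PPTP worked before the reinstall, comparing the current iptables state against whatever the L2TP setup changed would be the other thread to pull.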

  • How do I mount an external USB hard drive on my Sheevaplug?

    - by James
    I've acquired a Sheevaplug running - I think - Ubuntu. I'd like to mount an external USB hard drive, but I don't know the name of the device that needs mounting. When I list the devices under /dev, a long list is produced. How do I find out which device listed needs to be mounted? Update: When I run dmesg after plugging the device in, I see the following at the end: usb 1-1: new high speed USB device using ehci_marvell and address 6 usb 1-1: device not accepting address 6, error -71 usb 1-1: new high speed USB device using ehci_marvell and address 7 usb 1-1: device not accepting address 7, error -71 usb 1-1: new high speed USB device using ehci_marvell and address 8 usb 1-1: device not accepting address 8, error -71 usb 1-1: new high speed USB device using ehci_marvell and address 9 usb 1-1: device not accepting address 9, error -71 And when I view /var/log/messages, I can see this: Sep 23 21:26:03 debian kernel: usb 1-1: new high speed USB device using ehci_ma$ Sep 23 21:26:04 debian kernel: usb 1-1: new high speed USB device using ehci_ma$ Sep 23 21:26:05 debian kernel: usb 1-1: new high speed USB device using ehci_ma$ Sep 23 21:26:05 debian kernel: usb 1-1: new high speed USB device using ehci_ma$ Unfortunately, I don't know what these mean.
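    The dmesg lines answer part of this: "device not accepting address ... error -71" means the drive never finishes enumerating, which on plug computers is frequently a power problem, so a powered USB hub is worth trying. Once the kernel does accept the device, it logs a disk name you can mount; a sketch:

        # Watch for "sd x:0:0:0: [sda] ..." style lines after plugging in:
        dmesg | tail -20

        # List the partitions/filesystems the kernel now knows about:
        fdisk -l
        blkid

        # Then mount whichever partition it reported, e.g.:
        mkdir -p /mnt/usb
        mount /dev/sda1 /mnt/usb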

  • SELinux - Allow multiple services access to same /home/dir

    - by Mike Purcell
    I currently have SELinux enabled and have been able to configure apache to allow access to /home/src/web with a chcon command granting the 'httpd_sys_content_t' type. But now I am trying to serve the rsyslogd.conf file from the same directory, and every time I start rsyslogd I see an entry in my audit log saying that rsyslogd was denied access. My question is: is it possible to grant two applications the ability to access the same directory, while still keeping SELinux enabled? Current perms on /home/src: drwxr-xr-x. src src unconfined_u:object_r:httpd_sys_content_t:s0 src Audit log message: type=AVC msg=audit(1349113476.272:1154): avc: denied { search } for pid=9975 comm="rsyslogd" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:syslogd_t:s0 tcontext=system_u:object_r:home_root_t:s0 tclass=dir type=SYSCALL msg=audit(1349113476.272:1154): arch=c000003e syscall=2 success=no exit=-13 a0=7f9ef0c027f5 a1=0 a2=1b6 a3=0 items=0 ppid=9974 pid=9975 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="rsyslogd" exe="/sbin/rsyslogd" subj=unconfined_u:system_r:syslogd_t:s0 key=(null) -- Edit -- Came across this post, which is sort of what I am trying to accomplish. However, when I viewed the list of allowed sebool params, the only one relating to syslog was syslogd_disable_trans (SELinux Service Protection). It seems like I could maintain the current SELinux 'type' on the /home/src/ dir but set the bool syslogd_disable_trans to false. I wonder if there is a better approach?
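    Note the denial shown is a { search } on /home itself (tcontext home_root_t), so syslogd_t is being stopped before it even reaches the file; relabeling /home/src alone may not clear that. Two sketches, the first being the path of least resistance:

        # Option 1: keep the master copy in /home/src, deploy a copy where
        # rsyslog's domain already may read (assumes the stock
        # $IncludeConfig /etc/rsyslog.d/*.conf is enabled):
        install -m 644 /home/src/rsyslogd.conf /etc/rsyslog.d/src.conf

        # Option 2: label the tree with a broadly readable type -- this
        # may still fail on the /home search permission noted above:
        semanage fcontext -a -t public_content_t "/home/src(/.*)?"
        restorecon -Rv /home/src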

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment, running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/ For dev/testing, we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as iptables -F iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443 iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting ' iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting ' iptables-save > /etc/iptables.rules (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there's security implications about HTTPS proxying/caching - all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.
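    Hedging this as the most likely gap in the config above: the DNAT'd packets traverse the FORWARD chain (not INPUT/OUTPUT, which only see locally terminated traffic), and that chain's policy is DROP; IP forwarding also has to be on. A sketch of the missing pieces:

        # Turn on routing of forwarded packets:
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # DNAT'd traffic to the IIS box flows through FORWARD:
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -p tcp -s 10.0.0.241 --sport 443 -m state --state ESTABLISHED -j ACCEPT

    The existing PREROUTING DNAT and POSTROUTING MASQUERADE rules then handle the address rewriting in both directions.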

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use Eclipselink 1.1 with Glassfish v2.1. Following the instructions on: http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2 I adapted the instructions for the appclient script on linux by adding the lines: APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar export APPCPATH to the appclient shell script. This however is not working. On running the application client (using Glassfish's webstart), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList" Anyone else succeed in getting GF v 2.1 to work with eclipselink? or any ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) Where Tim Quinn (tjquinn) said: App client container support for persistence is not yet in place I think this refers only to Glassfish v3, and it should be working in Glassfish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the eclipselinks jar. Thanks in advance, Nick.

  • How to configure a large mtu (linux)

    - by Somejan
    I have a gigabit ethernet connection from my laptop to my router, and a working ipv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes (according to wireshark). (edit: turns out to be linux's 'generic receive offload') However, when trying to send anything, my local computer fragments at just below 1500 bytes for ipv6. (On ipv4, I can send tcp packets to the internet of at least 1514 bytes, I can ping with packets up to the configured mtu of 6128 but they are blackholed.) I'm on ubuntu 12.04. I have configured an mtu for my eth0 of 6128 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet gui, and restarted the connection. ip link show eth0 shows the 6128 mtu is indeed set. ip -6 route shows that none of the paths the kernel knows about have an mtu set. I can ping over ipv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the mtu is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger mtu?
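    On the IPv6 side the effective MTU is typically pinned per route (often learned from router advertisements), so the route, not the interface, is the knob to inspect and override. A sketch -- the gateway shown is a placeholder for the router's link-local address:

        # What MTU does the kernel believe for a given destination?
        ip -6 route get 2001:db8::1

        # Override it on the default route:
        ip -6 route change default via fe80::1 dev eth0 mtu 6128

    That said, a hard 1480 limit smells like a 6in4 tunnel somewhere upstream; IPv6 routers never fragment in transit, so if any hop is 1480, larger packets cannot work end-to-end no matter what the local setting says.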
