Search Results

Search found 24211 results on 969 pages for 'shell command'.


  • L2TP over IPSec VPN with OpenSwan and XL2TPD can't connect, timeout on Centos 6

    - by Disco
    I'm setting up L2TP over IPSec on a fresh CentOS 6.3 install. I have iptables flushed, permit all. Whenever I try to connect, I get a 'no reply from vpn' and nothing more. Here's my ipsec.conf file (the server is 1.2.3.4):

        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=1.2.3.4
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any

    My /etc/ipsec.secrets:

        1.2.3.4 %any: PSK "password"

    My sysctl.conf (appended lines):

        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.default.send_redirects = 0
        net.ipv4.conf.all.log_martians = 0
        net.ipv4.conf.default.log_martians = 0
        net.ipv4.conf.default.accept_source_route = 0
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv4.conf.default.accept_redirects = 0
        net.ipv4.icmp_ignore_bogus_error_responses = 1

    Here's what 'ipsec verify' gives:

        # ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                              [OK]
        Linux Openswan U2.6.32/K2.6.32-279.19.1.el6.x86_64 (netkey)
        Checking for IPsec support in kernel                         [OK]
         SAref kernel support                                        [N/A]
         NETKEY: Testing for disabled ICMP send_redirects            [OK]
        NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
        Checking that pluto is running                               [OK]
         Pluto listening for IKE on udp 500                          [OK]
         Pluto listening for NAT-T on udp 4500                       [OK]
        Checking for 'ip' command                                    [OK]
        Checking /bin/sh is not /bin/dash                            [WARNING]
        Checking for 'iptables' command                              [OK]
        Opportunistic Encryption Support                             [DISABLED]

    And I see xl2tpd is listening on 1701/udp:

        udp    0    0 1.2.3.4:1701    0.0.0.0:*    2096/xl2tpd
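
    A minimal troubleshooting sketch (assuming the public interface is eth0; adjust to your setup) is to watch the three UDP ports involved while a client tries to connect:

        # IKE (500), NAT-T (4500) and L2TP (1701) are all UDP
        tcpdump -ni eth0 'udp port 500 or udp port 4500 or udp port 1701'

        # in a second terminal, follow pluto and xl2tpd messages
        # (pluto typically logs to authpriv, xl2tpd to syslog on CentOS)
        tail -f /var/log/secure /var/log/messages

    If nothing at all arrives on udp/500, the problem is upstream of Openswan (client, NAT, or provider filtering); if IKE completes but udp/1701 stays silent, the IPSec transport is up and the fault is on the xl2tpd side.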


  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal was previously infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software... typically a week or so before it was updated. I have configured the event logs to overwrite as required and to be 5056KB in maximum size. I have also attempted:

    - Disabling the Event Log service & restarting
    - Renewing the EVT files in the Windows\system32\config directory
    - Restarting the Event Log service and restarting
    - Clearing the event log in the Services MMC
    - Resetting the filters to default in the Services MMC
    - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event

    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking of possibly refreshing the Windows\system32\config\SystemProfile folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on 'up-time' for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that would allow me to control the event logging options or default security options so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.


  • How to replace the domain name in a Wordpress database?

    - by Cristian
    I have a Wordpress database which was installed in a development environment... thus, all references to the site itself use a fixed IP address (say 192.168.16.2). Now, I have to migrate that database to a new Wordpress installation on a hosting provider. The problem is that the SQL dump contains a lot of references to the IP address, and I have to replace it with my_domain.com. I could use sed or some other command to change that from the command line; the problem is that a lot of the configuration data is stored as serialized strings. As you may know, serialized data uses length prefixes like s:4: to record how many characters an element has, and thus, if I just replace the IP with the domain name, the configuration entries will get corrupted. I used an app for Windows some years ago that allows you to change values in a database and takes care of these length prefixes. Unfortunately, I forgot the name of the app... so the question is: do you know any app that allows me to do what I want?
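
    For what it's worth, one scriptable route that handles the length prefixes is WP-CLI's search-replace command, which unserializes values, replaces, and reserializes them (this assumes you can install WP-CLI on the new host). A minimal sketch:

        # preview what would change, then do it for real
        wp search-replace 'http://192.168.16.2' 'http://my_domain.com' --dry-run
        wp search-replace 'http://192.168.16.2' 'http://my_domain.com'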


  • The Network folder specified is currently mapped using a different user name and password

    - by Frank Thornton
    I have a NAS device with 3 shares. On one computer I have access to all 3 of the shares. On another computer I keep getting this error when I try to add a 2nd one:

        The Network folder specified is currently mapped using a different user name and password [...]

    That is the message I keep getting. What causes that? EDIT: Every share has its own username and password. EDIT: NET USE on the one running 3 shares from the same NAS device:

        New connections will be remembered.

        Status        Local   Remote                     Network
        -------------------------------------------------------------------------------
        OK            T:      \\192.168.2.5\SHARE1       Microsoft Windows Network
        OK            X:      \\Nas-1dsho-abc\SHARE2     Microsoft Windows Network
        Disconnected  Y:      \\192.168.2.9\backups      Microsoft Windows Network
        OK            Z:      \\Nas-1dsho-abc\cbackups   Microsoft Windows Network
        The command completed successfully.

    NET USE on the other:

        New connections will be remembered.

        Status        Local   Remote                     Network
        -------------------------------------------------------------------------------
        OK            Y:      \\192.168.2.5\SHARE1       Microsoft Windows Network
        Unavailable   Z:      \\192.168.2.5\SHARE2       Microsoft Windows Network
        The command completed successfully.


  • Move 53,800+ files into 54 separate folders with ~1000 files each?

    - by ane
    Trying to import 53,800+ individual files (messages) using Gmail's POP fetcher. Gmail understandably refuses, giving the error: "Too many messages to download. There are too many messages on the other server." The folder in question looks similar to:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203677194.V57I586f26M688004.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203679158.V57I586f2bM182864.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203680493.V57I586f33M740378.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203685837.V57I586f0bM835200.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203687920.V57I586f65M995884.mail.net:2,S
        ...

    Using the shell (tcsh, sh, etc. on FreeBSD), what one-line command can I type to split this directory full of files into separate folders so Gmail only sees 1000 messages at a time? Something with find or ls | xargs mv maybe. Whatever is fastest. The desired output would now look something like:

        /usr/home/customer/Maildir/cur/1203672790.V57I586f04M867101.mail.net:2,S
        /usr/home/customer/Maildir/cur/1203676329.V57I586f22M520117.mail.net:2,S
        ...
        /usr/home/customer/set1/  (contains messages 1-1000)
        /usr/home/customer/set2/  (contains messages 1001-2000)
        /usr/home/customer/set3/  (etc.)

    Ideally, cron could run another command to automatically reverse the process in 1000-message increments every hour, so Gmail only sees & downloads 1000 at a time.
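
    A minimal sketch in plain Bourne shell (not quite one line, and it assumes ordering within each set doesn't matter beyond "1000 per set"):

        #!/bin/sh
        # split Maildir/cur into set1, set2, ... with 1000 files each
        i=0
        for f in /usr/home/customer/Maildir/cur/*; do
            d="/usr/home/customer/set$((i / 1000 + 1))"
            mkdir -p "$d"
            mv "$f" "$d"
            i=$((i + 1))
        done

    The reverse direction for cron could be a similar loop that moves the contents of the lowest-numbered remaining set back into Maildir/cur once an hour.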


  • Launchd item no longer firing in Snow Leopard

    - by ridogi
    A launchd item that was working in 10.5 is no longer working after my upgrade to 10.6. I am running 10.6.2, and I have recreated the launchd item and given it a new name, and that one doesn't run either. I have found a link of people with the same problem on Google Groups, but none of the advice in that link helps. My launchd item is not listed in /private/var/db/launchd.db/com.apple.launchd/overrides.plist or in any of the overrides.plist files in the subdirectories of /private/var/db/launchd.db/. I have also tried to set this up as both a user agent and a user daemon. My launchd item simply runs a shell script, which I have no problem launching manually.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.eric.tmnotify.launchd</string>
            <key>ProgramArguments</key>
            <array>
                <string>/<path_to>/tmnotify.sh</string>
            </array>
            <key>StartInterval</key>
            <integer>3600</integer>
        </dict>
        </plist>

    I have tried to load it by overriding the disabled key (even though it is not disabled in any of the overrides.plist files) with both:

        sudo launchctl load -F /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        sudo launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist

    and after running either of them I can see that it is running by using sudo launchctl list, but the shell script never fires. Edit: I have also put this in the formerly blank file at /private/var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>com.eric.tmnotify.launchd</key>
            <dict>
                <key>Disabled</key>
                <false/>
            </dict>
        </dict>
        </plist>

    I also tried inserting this alphabetically:

        <key>com.eric.tmnotify.launchd</key>
        <dict>
            <key>Disabled</key>
            <false/>
        </dict>

    into the file /private/var/db/launchd.db/com.apple.launchd/overrides.plist, but still no dice.
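
    One thing that may be worth ruling out (an assumption, not a confirmed diagnosis): loading a plist that lives in ~/Library/LaunchAgents with sudo registers it with root's launchd instance rather than the per-user one, which can leave it "listed" but never fired in the GUI session. A minimal check:

        # load and list as the logged-in user, without sudo
        launchctl unload /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl list | grep com.eric.tmnotify

        # launchd on 10.6 reports load/spawn errors to system.log
        grep launchd /var/log/system.log | tail -n 20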


  • Bridging Wireless and Wired Interfaces in Linux

    - by The Daemons Advocate
    My network setup is something like:

        Wireless Router <---> Netbook <---> Ubuntu Desktop

    ...or, more verbosely (with interfaces):

        Wireless Router <--(wireless)--> (eth2) Ubuntu Netbook
        Ubuntu Netbook (eth0) <---(wired)----> (eth0) Ubuntu Desktop

    In a perfect world, I'd have the desktop wired, but weird circumstances combined with my wanting to understand more about networking in Linux make me want to figure out how to bridge these two devices. A bit of googling has given me this example using bridge-utils, and here's how I'm (failing) to set up the bridge (on the netbook):

        sudo -i
        ifconfig eth0 0.0.0.0
        ifconfig eth2 0.0.0.0
        brctl addbr bridget
        brctl addif bridget eth0
        brctl addif bridget eth2
        ifconfig bridget up

    ...then, trying to make sure that the netbook can still get on the internets...

        route add default gateway 192.168.2.1
        dhclient bridget

    What happens after this is that the dhclient command above (netbook) doesn't get served an IP, and if I run dhclient on the desktop, it doesn't get served an IP either. A possibly relevant consideration is that I'm running the Network Manager applet that comes with Ubuntu. While I'm sure I can get a command-line wireless configuration set up, it's a bit complex. Can someone give me a shout as to where I'm going wrong? I'd also like to note another related question titled 'Bridging my laptop's wireless and wired adaptors'; however, that setup is different from mine.
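
    Two hedged observations that may help. First, NetworkManager tends to reconfigure interfaces behind your back, so stopping it before experimenting is safer. Second, many wireless drivers in station mode refuse to forward frames whose source MAC is not their own, which silently breaks bridging unless the driver supports 4-address (WDS) mode. A sketch, assuming an iw-capable driver:

        # keep NetworkManager from rewriting the interfaces (service name varies by release)
        sudo service network-manager stop

        # ask the driver for 4-address mode so bridged frames are allowed
        sudo iw dev eth2 set 4addr on

        # rebuild the bridge and request a lease on the bridge itself
        sudo brctl addbr bridget
        sudo brctl addif bridget eth0
        sudo brctl addif bridget eth2
        sudo ifconfig bridget up
        sudo dhclient bridget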


  • SSH multi-hop connections with netcat mode proxy

    - by aef
    Since OpenSSH 5.4 there is a new feature called netcat mode, which allows you to bind STDIN and STDOUT of the local SSH client to a TCP port accessible through the remote SSH server. This mode is enabled by simply calling

        ssh -W [HOST]:[PORT]

    Theoretically this should be ideal for use in the ProxyCommand setting in per-host SSH configurations, which was previously often used with the nc (netcat) command. ProxyCommand allows you to configure a machine as a proxy between your local machine and the target SSH server, for example if the target SSH server is hidden behind a firewall. The problem now is that instead of working, it throws a cryptic error message in my face:

        Bad packet length 1397966893.
        Disconnecting: Packet corrupt

    (Incidentally, 1397966893 is 0x5353482D, the ASCII bytes "SSH-", which looks like a plaintext version banner being parsed as binary packet data.) Here is an excerpt from my ~/.ssh/config:

        Host *
          Protocol 2
          ControlMaster auto
          ControlPath ~/.ssh/cm_socket/%r@%h:%p
          ControlPersist 4h

        Host proxy-host proxy-host.my-domain.tld
          HostName proxy-host.my-domain.tld
          ForwardAgent yes

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -W %h:%p proxy-host
          ForwardAgent yes

    As you can see here, I'm using the ControlMaster feature so I don't have to open more than one SSH connection per host. The client machine I tested this with is an Ubuntu 11.10 (x86_64), and both proxy-host and target-server are Debian Wheezy Beta 3 (x86_64) machines. The error happens when I call ssh target-server. When I call it with the -v flag, here is what I get additionally:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /home/aef/.ssh/config
        debug1: Applying options for *
        debug1: Applying options for target-server.my-domain.tld
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: auto-mux: Trying existing master
        debug1: Control socket "/home/aef/.ssh/cm_socket/[email protected]:22" does not exist
        debug1: Executing proxy command: exec ssh -W target-server.my-domain.tld:22 proxy-host.my-domain.tld
        debug1: identity file /home/aef/.ssh/id_rsa type -1
        debug1: identity file /home/aef/.ssh/id_rsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_dsa type -1
        debug1: identity file /home/aef/.ssh/id_dsa-cert type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa type -1
        debug1: identity file /home/aef/.ssh/id_ecdsa-cert type -1
        debug1: permanently_drop_suid: 1000
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.0p1 Debian-3
        debug1: match: OpenSSH_6.0p1 Debian-3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        Bad packet length 1397966893.
        Disconnecting: Packet corrupt
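
    A hedged thing to try (a sketch, not a confirmed fix): the ControlMaster settings under Host * also apply to the inner ssh that the ProxyCommand spawns, and multiplexing the proxy hop can mix extra data into what should be a clean tunnel. Disabling multiplexing for just that inner connection isolates the two features:

        Host target-server target-server.my-domain.tld
          HostName target-server.my-domain.tld
          ProxyCommand ssh -o ControlPath=none -W %h:%p proxy-host
          ForwardAgent yes

    Running the proxy command by hand (ssh -o ControlPath=none -W target-server.my-domain.tld:22 proxy-host) should print the target's raw SSH-2.0 banner; if anything else appears first, that extra output is what the outer client is choking on.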


  • Cannot open simple script application on mac

    - by streetpc
    Mac OS X 10.6. I created a very simple app, which is only a wrapper around a shell script (so that I can select this script in application selectors, like startup apps). Yesterday launching it worked, but today I changed the executable script's content and name (to something that works perfectly when run as a shell script in the Terminal) and now it will only display a Finder-iconed dialog saying:

        Cannot open the application because it is not supported on this kind of Mac.

    I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier… If I try to open it in the Terminal using open My.app, I get:

        The application cannot be opened because it has an incorrect executable format.

    But when I execute Contents/MacOS/Script directly, it always works (with both contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood). The app's file tree is:

        Contents/
            Info.plist
            MacOS/
                Script (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns

    Here is the Info.plist content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>

    And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus a #!/bin/sh header). So my questions are: Is there some kind of validity caching for applications? Based on what? How do I flush it? Where does this message come from? I tried to google it, but all I get is a page talking about 32/64-bit Java…
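
    On the caching question: Launch Services does keep a registration database for application bundles, and forcing it to re-register the bundle is a cheap thing to try (a sketch for 10.6; the lsregister path is version-specific, and /path/to/My.app is a placeholder):

        LSREG=/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/Support/lsregister

        # re-scan just this bundle
        "$LSREG" -f /path/to/My.app

        # or, more drastically, rebuild the whole database
        "$LSREG" -kill -r -domain local -domain system -domain user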


  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, COM, or otherwise (even Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:

    1. Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    2. Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions, I've been banging my head against this one for a bit...


  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin to different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel would support this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there was a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
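
    For the record, later versions of GNU parallel grew almost exactly the made-up option above: --pipe (originally called --spreadstdin) splits stdin into blocks on line boundaries and feeds each block to a job's stdin, and -k keeps the output in input order. A sketch, assuming a parallel recent enough to have it:

        pv infile | parallel -k --pipe --block 10M -j4 "sed 's/something//'" > outfile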


  • No clue for high load average on top

    - by Oz.
    We have several machines on Amazon (EC2) of the type c1.xlarge, running the Amazon AMI. Details on the machine:

        7 GB of memory
        20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
        1690 GB of instance storage
        64-bit platform
        I/O Performance: High
        API name: c1.xlarge

    One out of the several machines has been showing a high load average since we ran the last yum upgrade a couple of weeks ago. We have not yet updated the other machines, and everything looks normal on them. The strange thing is that the top command shows no hint of the cause of the load. CPUs are 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st (see below). Mem is about 1.5GB free. Any idea what it could be, or where else we can check? Many thanks for the help.

        # top
        top - 07:57:42 up 4:18, 1 user, load average: 1.36, 1.45, 1.47
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu(s): 4.8%us, 1.1%sy, 0.0%ni, 94.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 7120092k total, 5644920k used, 1475172k free, 532888k buffers
        Swap: 0k total, 0k used, 0k free, 3463936k cached

          PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1557 mysql   20  0 1829m 374m 6448 S 14.3  5.4 11:15.09 mysqld
         6655 apache  20  0  416m  49m 3744 S  9.3  0.7  0:04.85 httpd
        27683 apache  20  0  421m  54m 3708 S  9.0  0.8  0:00.99 httpd
         6682 apache  20  0  424m  57m 3788 S  8.3  0.8  0:03.81 httpd
        16816 apache  20  0  419m  51m 3760 S  4.3  0.7  0:04.09 httpd
        22182 apache  20  0  417m  50m 3756 S  1.7  0.7  0:06.34 httpd
          219 root    20  0     0    0    0 S  0.3  0.0  0:00.34 kworker/7:1
          699 root    20  0     0    0    0 S  0.3  0.0  0:00.40 kworker/3:1
            1 root    20  0 19376 1508 1212 S  0.0  0.0  0:00.29 init
            2 root    20  0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
            3 root    20  0     0    0    0 S  0.0  0.0  0:00.71 ksoftirqd/0
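
    One avenue worth checking (hedged, since it's a common cause rather than a diagnosis): on Linux the load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a machine can sit at load ~1.4 with 94% idle CPU if something is permanently stuck waiting on I/O or a kernel call. A quick sketch:

        # any processes stuck in uninterruptible sleep?
        ps -eo state,pid,cmd | awk '$1 ~ /^D/'

        # per-device I/O picture over time (from the sysstat package)
        iostat -x 5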


  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, not local users. We discovered that if an LDAP user has a crontab with an entry marked to be run @reboot, the command will not actually run upon the reboot of a machine. I'm pretty sure that this is because the cron daemon starts before networking is fully up, so the crontabs of any LDAP users aren't loaded, run, or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is restarted. We were able to fix one part of this problem by adding the following line to /etc/crontab:

        @reboot root /bin/sleep 45 && /etc/init.d/cron restart

    Thus, when cron starts back up upon a reboot, it waits for networking to come up, then restarts the cron daemon. That fixes the problem of crontabs not being read at all for LDAP users. However, since it's the cron daemon being restarted and not the computer, @reboot entries are ignored. Is there a way for a user to make a command run upon restarting the daemon, rather than a reboot? Or is there a better solution to this overall problem? Thanks.
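
    A hedged stopgap while a proper answer is found: root's crontab lives on the local disk, so a root-owned @reboot entry can run a given LDAP user's boot job on their behalf (the user name and script path below are placeholders):

        # in root's crontab: wait for the network, then run the user's boot script as them
        @reboot /bin/sleep 60 && /bin/su - ldapuser -c '/home/ldapuser/bin/on-boot.sh'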


  • SSH Private Key Not Working in Some Directories

    - by uesp
    I have a strange issue where SSH won't properly connect with a private key if the key file is in certain directories. I've set up the keys on a set of servers, and the following command

        ssh -i /root/privatekey [email protected]

    works fine and I log in to the given host without getting prompted for a password, but this command:

        ssh -i /etc/keyfiles/privatekey [email protected]

    gives me a password prompt. I've narrowed it down: this behavior occurs in only some sub-directories of /etc/. For example /etc/httpd1/ gives me a password prompt but /etc/httpd/ does not. What I've checked so far:

    - All private key files used are identical (copied from the original file).
    - The private key file and directories used have identical permissions.
    - No relevant error messages in the server/client logs.
    - No interesting debug messages from ssh -v (it just seems to skip the key file).
    - It happens when connecting to different hosts.

    After more testing, it is not the actual directory name. For example:

        mkdir /etc/test
        cp /root/privatekey /etc/test
        ssh -i /etc/test/privatekey [email protected]   # results in password prompt
        cp /root/privatekey /etc/httpd                 # existing directory
        ls -ald test httpd
        # drwxr-xr-x 4 root root 4096 Mar 5 18:25 httpd
        # drwxr-xr-x 2 root root 4096 Mar 5 18:43 test
        ssh -i /etc/httpd/privatekey [email protected]  # results in *no* prompt
        rm -r test
        cp -R /etc/httpd /etc/test
        ssh -i /etc/test/privatekey [email protected]   # results in *no* prompt

    I'm sure it's just something simple I've overlooked, but I'm at a loss.
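
    Since the visible permissions are identical, one hedged suggestion is to compare what plain ls does not show: on an SELinux-enabled system, directories carry security contexts, and a fresh mkdir plus cp does not necessarily produce the same labels as cp -R from an already-labeled source. Diagnostics that should settle it either way (user@host is a placeholder):

        # compare security contexts of the two directories
        ls -ldZ /etc/test /etc/httpd

        # watch whether ssh actually opens the key or is denied before it gets there
        strace -f -e trace=open,stat ssh -i /etc/test/privatekey user@host 2>&1 | grep privatekey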


  • Connect Chrome to TOR

    - by Jack M
    I'm having difficulty connecting Chrome to TOR. I started trying yesterday. I started Vidalia and the TOR Browser and then followed the advice at http://lifehacker.com/5614732/create-a-tor-button-in-chrome-for-on+demand-anonymous-browsing - downloading Proxy Switchy and setting it up as stated. This resulted in Error 130 (net::ERR_PROXY_CONNECTION_FAILED) in Chrome when I tried to load a webpage. So I looked into Vidalia's settings and noticed that it appeared to be using port 9051, so I set that instead of 8118 as everyone on the internet seems to be suggesting. Then I got a new error: Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED). Digging a bit, I found that Tor should be set as a SOCKS proxy, not an HTTP proxy, so I unticked "use same settings for all protocols" in Proxy Switchy and just set localhost:9051 for SOCKS. That got me Error 7 (net::ERR_TIMED_OUT). And that's when I came here for help. I typed up the above question, but then at the last minute decided to do a bit more reading and found someone here suggesting some command line arguments via a Windows shortcut:

        "C:\snip\chrome.exe" --proxy-server=";socks=127.0.0.1:9051;sock4=127.0.0.1:9051;sock5=127.0.0.1:9051" --incognito check.torproject.org

    And that worked perfectly. Yesterday. Today it doesn't, so I'm having to post this question after all. check.torproject.org gives me a "no" with Chrome, but a "yes" with the default Tor Browser. I tried closing Chrome and restarting it (yes, with the correct shortcut) after Vidalia started, but still nothing. The port number hasn't changed or anything. What gives? EDIT: I realized I had a "non tor" instance of Chrome running, and that possibly that was causing the command line args to be ignored when I started the new instance. I closed all instances of Chrome and ran my Chrome Tor shortcut, and it did get rid of the "not using Tor" message; instead I got another Time Out error. Vidalia's bandwidth graph didn't even blink.
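
    Two hedged notes that may save the next person some digging. On a stock Tor install, 9050 is the SOCKS port and 9051 is the control port (Vidalia's settings dialog shows the control port, which is likely why 9051 keeps appearing). And Chrome accepts a single SOCKS5 proxy URL, which avoids the fragile multi-entry syntax above:

        # assuming Tor's SOCKS listener is on its default port
        chrome.exe --proxy-server="socks5://127.0.0.1:9050" --incognito check.torproject.org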


  • Home Sharing and Remote on iTunes causing firewall nags

    - by BoltClock
    It seems that enabling Home Sharing and/or hooking up my iPhone's Remote to iTunes causes Mac OS X Snow Leopard's firewall to freak out and keep nagging every time I launch iTunes to ask if I'd like it to accept incoming connections. If I turn off Home Sharing and forget all Remotes, the nag dialog no longer comes up. I could also disable the firewall, but I think that's a silly thing to do. iTunes is already in the firewall whitelist, so the only thing I know of that could cause Mac OS X to nag is a bad application bundle code signature. I checked with this Terminal command:

        $ codesign -vvv /Applications/iTunes.app/

    And sure enough, this is what it outputs:

        /Applications/iTunes.app/: a sealed resource is missing or invalid
        /Applications/iTunes.app/Contents/Resources/English.lproj/AutofillSettings.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/iTunesDJSettings.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/MobilePhonePrefs.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/MobilePhoneSetup.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/UniversalAccess.nib/objects.xib: resource added

    I've tried reinstalling iTunes as suggested by this answer, but Mac OS X still nags about incoming connections, and the exact same output is generated when I run the above command again. On my PC, Windows Firewall has never nagged whenever I turn on Home Sharing and hook up Remote on my iPhone. Both computers use iTunes 9.2.1. My Mac runs Mac OS X 10.6.4. Is there anything special I need to do that I might have missed? Or how do I resolve the issue? EDIT: I've updated to iTunes 10, but the nags on my Mac are still there and only go away if I turn off Home Sharing and Remote. EDIT 2: I've updated to Remote 2.0 on my iPhone, but the firewall nags are persisting. Has anyone else had this firewall issue at all?


  • Why does hiberfil.sys come back from the dead on Windows 7?

    - by Corey White
    I have Windows 7 running on a small (40GB) partition, with 4GB RAM. This means that the hiberfil.sys file created by Hibernate takes up a significant portion of the available disk space, and I would like to remove it. I am aware that I can disable Hibernate and remove hiberfil.sys by entering

        powercfg -h off

    in an elevated command prompt. This works: the file is immediately removed, and after doing so, the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key is (correctly) set to 0. However, the next time I reboot the PC, hiberfil.sys returns from the dead, Hibernate is re-enabled, and that registry key has returned to 1. I'm pretty much at my wits' end with this. Almost everything I can find online related to removing the hiberfil.sys file simply suggests using powercfg to turn off hibernation, and that appears to work for just about everyone. But it just keeps coming back for me! (Like a vampire, sucking up my disk space.) I did find one other thread from someone who seems to have had the same issue, but none of the suggestions there worked for the original poster (or for me). Still, I have tried everything listed there, including:

    - Disabling hybrid sleep
    - Disabling Hibernate through the command prompt, through the Power Options GUI, and through both (in both orders)
    - Manually changing the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key
    - Pretty much everything else I can think of!

    I do want to reiterate that I have no problem removing the file; that works great. It just comes back after every reboot. I'm about ready to throw in the towel and just run a script on login to disable Hibernate each time, even though that seems like a crazily hacky "solution"... but I was hoping someone here could suggest something else first. Thanks!


  • Updating Samba From RPMs

    - by KnickerKicker
    My Red Hat Enterprise Linux 4 comes with Samba version 3.0.10, which does not have support for the "inherit owner" attribute that is essential in implementing a Deny-Delete Write Once Read Many share (for examples, search Google for a-shared-drop-box-using-samba). (BTW, if anybody knows an alternative way to do it without updating Samba, I'm all ears!) I am not all that comfortable building from source, and after hours of googling (no, I do not have a Red Hat subscription, so I cannot just run the up2date command), I found a whole bunch of RPMs on http://ftp.sernet.de/pub/samba/tested/rhel/4/i386/ (Samba 3.2.15 for RHEL 4)... Next, I tried updating them with the rpm -U --nodeps command, but I got file conflict errors. So I went ahead and overwrote everything (or so I thought) by using rpm's --force option. But no good has come of all that: /usr/sbin/smbd -V still returns the old version. As of now, rpm -qa | grep samba returns:

        samba3-client-3.2.15-40.el4
        samba-3.0.10-1.4E.2
        samba-client-3.0.10-1.4E.2
        system-config-samba-1.2.21-1
        samba3-3.2.15-40.el4
        samba-common-3.0.10-1.4E.2
        samba3-winbind-3.2.15-40.el4

    I cannot remove the older ones because:

        samba-common >= 3.0.8-0.pre1.3 is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) kdebase-3.3.1-5.8.x86_64
        libsmbclient.so.0()(64bit) is needed by (installed) gnome-vfs2-smb-2.8.2-8.2.x86_64

    Now that's a whole bunch of dependencies that I dare not touch :) Any and all pointers are welcome at this stage. Thanks in advance!
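
    A hedged first step before touching those dependencies: establish which package owns the smbd actually being run, and where the SerNet samba3 packages put theirs (they may install alongside, rather than over, the stock binaries):

        # which package owns the smbd on PATH?
        rpm -qf "$(which smbd)"

        # where did the samba3 package install its daemons?
        rpm -ql samba3 | grep -E 'bin/(smbd|nmbd)'

    If the samba3 binaries live somewhere else, pointing the init script (or PATH) at them may be all that's missing.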


  • Why does java -version return a different version than the one defined in JAVA_HOME?

    - by Shekhar
    I am trying to set JAVA_HOME in Ubuntu. I have copied JDK 1.7 to /usr/lib/jvm and set JAVA_HOME in the /etc/profile file. The contents of the /usr/lib/jvm folder are as follows:

        shekhar@ubuntu:~$ ls /usr/lib/jvm/
        default-java        java-1.6.0-openjdk       java-6-openjdk         java-6-openjdk-i386  jdk1.7.0_01
        java-1.5.0-gcj-4.6  java-1.6.0-openjdk-i386  java-6-openjdk-common  java-7-openjdk-i386

    and the last few lines of the /etc/profile file are as follows:

        export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01
        export PATH=$PATH:$JAVA_HOME/bin

    After finishing all this, when I run the java -version command I get the following output:

        shekhar@ubuntu:~$ java -version
        java version "1.6.0_24"
        OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
        OpenJDK Server VM (build 20.0-b12, mixed mode)

    and when I run the ls -lah command I get the following output:

        shekhar@ubuntu:~$ ls -lah /usr/bin/java
        lrwxrwxrwx 1 root root 22 Sep 29 09:58 /usr/bin/java -> /etc/alternatives/java
        shekhar@ubuntu:~$ ls -lah /etc/alternatives/java
        lrwxrwxrwx 1 root root 45 Sep 29 09:58 /etc/alternatives/java -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java

    Can anyone please tell me what I am missing? Why is Ubuntu still pointing to OpenJDK and not to my JDK 7? PS: I have seen this similar question and its answers, but that question is related to Windows and not Ubuntu, so I am reposting this similar question for Ubuntu.
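
    The symptom is consistent with shell lookup order: PATH is set to $PATH:$JAVA_HOME/bin, so /usr/bin/java (the alternatives symlink to OpenJDK) is found before the JDK 7 binary ever gets a chance. Two sketches of a fix (paths assume the layout shown above):

        # option 1: put the JDK first on PATH instead of last
        export PATH="$JAVA_HOME/bin:$PATH"

        # option 2: register JDK 7 with the alternatives system and select it
        sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/bin/java 1
        sudo update-alternatives --config java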


  • Monitoring memcached with plink

    - by kojiro
    I need a telnet client that can take commands from a file or stdin so I can do some quick-and-dirty automatic monitoring of memcached. I thought plink would be good for this, but it seems to be doing something beyond what I need. If I telnet into localhost 11211 and type stats, I get the memcached stats, like so:

        $ telnet localhost 11211
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        stats
        STAT pid 25099
        STAT uptime 91182
        STAT time 1349191864
        STAT version 1.4.5
        STAT pointer_size 64
        STAT rusage_user 3.570000
        STAT rusage_system 2.740000
        STAT curr_connections 5
        STAT total_connections 23
        STAT connection_structures 11
        STAT cmd_get 0
        STAT cmd_set 0
        STAT cmd_flush 0
        STAT get_hits 0
        STAT get_misses 0
        STAT delete_misses 0
        STAT delete_hits 0
        STAT incr_misses 0
        STAT incr_hits 0
        STAT decr_misses 0
        STAT decr_hits 0
        STAT cas_misses 0
        STAT cas_hits 0
        STAT cas_badval 0
        STAT auth_cmds 0
        STAT auth_errors 0
        STAT bytes_read 82184
        STAT bytes_written 7210
        STAT limit_maxbytes 67108864
        STAT accepting_conns 1
        STAT listen_disabled_num 0
        STAT threads 4
        STAT conn_yields 0
        STAT bytes 0
        STAT curr_items 0
        STAT total_items 0
        STAT evictions 0
        STAT reclaimed 0
        END

    But with plink, I get an odd error. I'm using this command:

        watch -n 30 plink -v -telnet -P 11211 127.0.0.1 <<< $'\nstats'

    The first time through I get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        ERROR
        STAT pid 25099
        STAT uptime 91245
        STAT time 1349191927
        STAT version 1.4.5
        …
        END

    But when watch repeats the command I just get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        Failed to connect to 127.0.0.1: Connection reset by peer
        Connection reset by peer
        FATAL ERROR: Connection reset by peer

    What is plink doing here that is different from normal telnet? How should I be going about this? (I'm not married to plink, but I need a way to continuously send simple telnet commands to memcached without writing a full-fledged Perl script.)
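
    A hedged explanation of the difference: in -telnet mode, plink performs telnet option negotiation (the WILL/DO lines are it sending IAC sequences), and memcached is not a telnet server, so those negotiation bytes arrive as garbage commands (hence the ERROR line) before the real stats request. A raw TCP client avoids negotiation entirely; two sketches:

        # netcat variant (-q 1 closes the connection a second after stdin ends,
        # if your nc supports it)
        watch -n 30 "printf 'stats\r\nquit\r\n' | nc -q 1 127.0.0.1 11211"

        # pure bash, no extra tools: /dev/tcp is a bash feature
        exec 3<>/dev/tcp/127.0.0.1/11211
        printf 'stats\r\nquit\r\n' >&3
        cat <&3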


  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I'm trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is:

        $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt

    I get asked for the password, a line "Connecting to…" is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log:

        dsclient.info state: kStateCacheCleaner (dsclient.cpp:280)
        dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162)
        http_connection.para Entering state_start_connection (http_connection.cpp:282)
        http_connection.para Entering state_continue_connection (http_connection.cpp:299)
        http_connection.para Entering state_ssl_connect (http_connection.cpp:468)
        dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656)
        http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476)
        DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800)
        DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833)
        dsclient.info <-- 200 (authenticate.cpp:194)
        dsclient.error state host checker failed, error 10 (dsclient.cpp:282)
        ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197)
        dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72)

    What does "host checker failed" mean? How can I find out what it tried to check and what failed? The HostChecker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that HostChecker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and "host checker failed" connected or independent? By running the connection over an SSL proxy, I found out that the POST data is status=NOTOK. (Funny side note: the client of the oh-so-secure VPN does not validate the server's SSL certificate, so it is wide open to MITM attacks…) So it seems that it's the client that closes the connection, and not the server.


  • Keeping Xv Overlay configuration throughout an X session.

    - by kriss
    After upgrading my Linux system from Ubuntu 9.04 to Ubuntu 10.10, I succeeded in correcting most problems (all related to Intel 82865G Integrated Graphics Adapter support; compiz is still not working, but that's another matter), but for one I only have a partial solution. Whenever I play a video, the colors are much too saturated. This is a real problem for skin tones, which appear reddish (everyone seems to be coming back from a ski vacation with deep sunburns). As this effect only occurs with videos, not with pictures, I finally figured out it was related to the Xv overlay configuration, and I can correct it by typing:

        xvattr -a XV_SATURATION -v 120

    This changes the default saturation value, which is 500 and much too high in my case; by eye, the correct value seems to be between 100 and 150. Now my problem is that I have to type the above command each time I play a video. If I type it before running the video, it has no effect; if I close the video and open a new one, I have to type it again, etc. I tried putting it in Xsession and (logically) it has no effect either. How can I get the correct setting whenever I play a video, without typing the above command every time?
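
    A hedged workaround sketch, assuming the attribute only sticks once the player has grabbed the Xv port: wrap the player in a script that reapplies the setting just after playback starts (the player name and the delay are placeholders):

        #!/bin/sh
        # hypothetical wrapper: launch the player, then fix saturation once
        # the Xv port is in use
        mplayer "$@" &
        sleep 2
        xvattr -a XV_SATURATION -v 120
        wait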


  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled, and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount to the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53
        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent :-) ) Additional information:

    - I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to NFS on TCP/2049), so I know the port is open.
    - I've further confirmed that inbound and outbound firewall ports are present and enabled.
    - I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client - no luck.
    - I've confirmed that the share folder on the NFS server is readable/writable by all (777).
    - I've tried other variations of the mount command like: mount 10.10.10.10:/share/ z: and mount 10.10.10.10:/share z: and mount -o anon mtype=hard \\10.10.10.10:/share * - no luck.
    - As per the command output, I tried typing NET HELPMSG 53, but that doesn't tell me much. Just "The network path was not found".

    I'm lost on how to proceed with troubleshooting. Any ideas?
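
    A hedged avenue to rule out: an NFS mount needs more than TCP/2049; the client also consults the portmapper (port 111) and the mountd service (whose port is often dynamic), so a successful telnet to 2049 alone doesn't prove the path is usable. From the server itself or any Linux box, these can be checked:

        # does the portmapper answer, and are mountd/nfs registered?
        rpcinfo -p 10.10.10.10

        # is the export actually visible to clients?
        showmount -e 10.10.10.10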


  • How to interpret iozone values

    - by Henno
    I ran a test to measure my I/O IOPS on Linux:

        iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b /tmp/results.xls

    iozone claims that output is in operations per second, yet the numbers are too big for that to be plausible: I'm observing some 320 CMDs/s maximum on the VMware ESX console (esxtop, then v).

        File size set to 4194304 KB
        Record Size 2 KB
        Record Size 4 KB
        Record Size 8 KB
        Record Size 16 KB
        Record Size 32 KB
        OPS Mode. Output is in operations per second.
        Command line used: iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b tmpresults.xls
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                            random  random    bkwd  record  stride
              KB  reclen  write  rewrite   read  reread       read   write    read rewrite    read  fwrite frewrite  fread freread
         4194304       2  19025     5580  27581   29848        284     198     415 1103217    1498   18541     4340  24245   25618
         4194304       4  15650    21942  18962   21068        252    1198     193  976164    1677   22802    23093  21089   21232
         4194304       8  11121    11638  10273   10165        247    1196     202  625020^C

    The test had run for 15 hours before I pressed ^C. Is that the expected run time for such a command line (dedicated 4-drive RAID10 LUN, 10k RPM SAS drives in an EMC CX300)?
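
    On why the numbers look implausible (hedged, but a common iozone pitfall): without direct or synchronous I/O, most of these small operations are absorbed by the Linux page cache, so the big columns measure memory speed rather than the CX300, which is consistent with esxtop seeing only ~320 commands/s actually reaching the LUN. Re-running with the cache taken out of the picture should give figures closer to raw-device IOPS:

        # -I asks for O_DIRECT, -e includes flush (fsync) in the timings
        iozone -s 4g -r 4k -I -e -O -b /tmp/results-direct.xls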


  • pxe boot dos 7.x / 8.x on modern mainboard without floppy controller

    - by GitaarLAB
    How do I PXE boot MS-DOS 7.x / 8.x on a modern PC (mainboard without a floppy controller) without using an external USB floppy drive? MS-DOS 6.22 and earlier, and other flavors, PXE boot just fine on floppy-less hardware. But DOS 7.x and 8.x render an error on boot:

        Type the name of the Command Interpreter (e.g., C:\WINDOWS\COMMAND.COM)

    I read somewhere during my research that this was a rather obscure error that started to become more common with the advent of floppy-controller-less hardware. On some hardware (BIOS-dependent) one could plug a USB floppy drive into the computer before booting (but that MIGHT also require it to be a "golden floppy drive", as they were called back then). According to a Russian site (which I read about a year ago and whose hyperlink I cannot find), MS-DOS versions after 6.22 do some kind of floppy-drive reset during initialization, and since the system can't reach a floppy controller, the error results. How can I resolve this (without a physical external USB floppy)? Might there be some kind of virtual floppy driver that could resolve this (for example, one loaded before the DOS image loads)? Or could someone point me in the right direction (maybe even a hex address and some further explanation)? I'm using syslinux, by the way.

