Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.

  • SSH from Ubuntu server to Windows 2008 repeatedly asks for password

    - by jrizos
    I am trying to set up Git using SSH mode. The central Git repository is on a NAS device running Windows 2008 Server and the user Git repository is on Ubuntu 12.04. When I try to SSH to the Windows machine, however, I am not able to get in. SSH keys are not set up, but I think the problem starts even before that, since I can't get in just by providing the correct password. The output from the SSH command is below. Any help would be appreciated.

        dba@clpserv01:~$ ssh -v -l administrator clpnas
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to clpnas [***.***.***.***] port 22.
        debug1: Connection established.
        debug1: identity file /home/dba/.ssh/id_rsa type -1
        debug1: identity file /home/dba/.ssh/id_rsa-cert type -1
        debug1: identity file /home/dba/.ssh/id_dsa type -1
        debug1: identity file /home/dba/.ssh/id_dsa-cert type -1
        debug1: identity file /home/dba/.ssh/id_ecdsa type -1
        debug1: identity file /home/dba/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze2
        debug1: match: OpenSSH_5.5p1 Debian-6+squeeze2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA bd:37:d1:98:51:2a:d6:b5:f5:c7:98:d8:74:2c:4e:cd
        debug1: Host 'clpnas' is known and matches the RSA host key.
        debug1: Found key in /home/dba/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /home/dba/.ssh/id_rsa
        debug1: Trying private key: /home/dba/.ssh/id_dsa
        debug1: Trying private key: /home/dba/.ssh/id_ecdsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        Password:
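    Since the server clearly offers publickey while password and keyboard-interactive keep failing, one way to sidestep the password problem entirely is to set up key authentication from the Ubuntu side. A minimal sketch, assuming the Windows SSH server honors ~/.ssh/authorized_keys for the administrator account:

        ssh-keygen -t rsa                      # generate a key pair if none exists yet
        ssh-copy-id administrator@clpnas       # append the public key to the server's authorized_keys

    If the key is still not accepted, the server-side authorized_keys path and its permissions are the usual suspects on Windows SSH ports.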

  • How to show what will be updated by the next pull?

    - by ???
    In SVN, doing svn update will show a list of full paths with a status prefix:

        $ svn update
        M    foo/bar
        U    another/bar
        Revision 123

    I need this update list to do some post-processing work. After transferring the SVN repository to Git, I can't find a way to get the same list:

        $ git pull
        Updating 9607ca4..61584c3
        Fast-forward
         .gitignore                                         |  1 +
         uni/.gitignore                                     |  1 +
         uni/package/cooldeb/.gitignore                     |  1 +
         uni/package/cooldeb/Makefile.am                    |  2 +-
         uni/package/cooldeb/VERSION.av                     | 10 +-
         uni/package/cooldeb/cideb                          | 10 +-
         uni/package/cooldeb/cooldeb.sh                     |  2 +-
         uni/package/cooldeb/newdeb                         | 53 +++-
         ...update-and-deb-redist => update-and-deb-redist} |  5 +-
         uni/utils/2tree/{list2tree => 2tree}               | 12 +-
         uni/utils/2tree/ChangeLog                          |  4 +-
         uni/utils/2tree/Makefile.am                        |  2 +-

    I can translate Git's pull status list to SVN's format:

        M .gitignore
        M uni/.gitignore
        M uni/package/cooldeb/.gitignore
        M uni/package/cooldeb/Makefile.am
        M uni/package/cooldeb/VERSION.av
        M uni/package/cooldeb/cideb
        M uni/package/cooldeb/cooldeb.sh
        M uni/package/cooldeb/newdeb
        M ...update-and-deb-redist => update-and-deb-redist}
        M uni/utils/2tree/{list2tree => 2tree}
        M uni/utils/2tree/ChangeLog
        M uni/utils/2tree/Makefile.am

    However, entries with long path names are abbreviated; for example, uni/package/cooldeb/update-and-deb-redist is shortened to ...update-and-deb-redist. I expect Git can do this directly, perhaps by configuring git pull's output format. Any idea?
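    One approach that avoids parsing the diffstat entirely: git pull sets ORIG_HEAD to the pre-pull commit, so right after the pull you can ask for an SVN-style name-status list with full, unabbreviated paths. A sketch, assuming a fast-forward or merge pull:

        git pull
        git diff --name-status ORIG_HEAD..HEAD

    This prints one status letter (M, A, D, ...) and one full path per line, which is close to svn update's format and never truncates long paths.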

  • Why is this iptables rule that does port forwarding not working?

    - by videoguy
    I have a server bound to localhost:7060. It is using an IPv6 socket instead of IPv4. Below is the netstat output:

        # netstat -an
        Proto Recv-Q Send-Q Local Address            Foreign Address             State
        tcp        0      0 10.200.32.98:1720        0.0.0.0:*                   LISTEN
        tcp        0      0 0.0.0.0:4122             0.0.0.0:*                   LISTEN
        tcp        0      0 0.0.0.0:4123             0.0.0.0:*                   LISTEN
        tcp        0      0 127.0.0.1:4123           127.0.0.1:43051             ESTABLISHED
        tcp        0      0 10.200.32.98:5555        10.200.32.44:53162          ESTABLISHED
        tcp6       0      0 :::5060                  :::*                        LISTEN
        tcp6       0      0 ::ffff:127.0.0.1:7060    :::*                        LISTEN
        tcp6       0      0 :::23                    :::*                        LISTEN
        tcp6       0      0 ::ffff:10.200.32.98:23   ::ffff:10.200.32.142:43505  ESTABLISHED
        tcp6       0      0 ::ffff:127.0.0.1:43051   ::ffff:127.0.0.1:4123       ESTABLISHED
        tcp6       0      0 ::ffff:10.200.32.98:23   ::ffff:10.200.32.44:53195   ESTABLISHED
        udp6       0      0 :::5060                  :::*                        CLOSE

    I want to set up a port-forwarding rule that accepts connections on port 24 (on all interfaces, loopback as well as eth0) and forwards the data to localhost:7060. This is how I am setting up the iptables rule:

        iptables -t nat -A PREROUTING -p tcp --dport 24 -j DNAT --to 127.0.0.1:7060

    It is not working. When I telnet from a different box, I see the following:

        $ telnet 10.200.32.98 24
        Trying 10.200.32.98...

    If I change the server to bind to *:7060 and set the following rule, it seems to work fine:

        iptables -t nat -A PREROUTING -p tcp --dport 24 -j REDIRECT --to-port 7060

    But that makes my server available on the WAN interface, which I don't like. I feel it has something to do with the IPv6 socket (the tcp6 line in the netstat output). This whole thing is done on an Android device with a custom-built Android platform image. How do I get this working?
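    DNAT to 127.0.0.1 usually fails for externally received packets because the kernel treats them as martians: a packet arriving on a physical interface may not be routed to a loopback destination. On kernels 3.6 and later there is a sysctl for exactly this case; on older kernels a small userspace relay is a common workaround. Both sketches below are assumptions about what the device's kernel and userland actually provide:

        # kernel >= 3.6: allow routing to loopback for packets arriving on eth0
        sysctl -w net.ipv4.conf.eth0.route_localnet=1
        iptables -t nat -A PREROUTING -p tcp --dport 24 -j DNAT --to-destination 127.0.0.1:7060

        # older kernels: forward in userspace instead (if socat is available on the image)
        socat TCP-LISTEN:24,fork TCP:127.0.0.1:7060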

  • iptables rule on INPUT between 2 ethernet cards on the same host

    - by user1495181
    I have two eth cards on the same host, connected directly with a LAN cable. I set eth0 to 192.168.1.2 and eth1 to 192.168.1.1, and I set this rule:

        iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0

    There are no other rules (I ran iptables -X and -F). I send a TCP SYN packet (with a C++ program, using a raw socket) from 192.168.1.2 to 192.168.1.1. In Wireshark I see that the packet is received on eth0, but the iptables rule above doesn't apply to this packet. When I sent the packet to a remote host and applied this rule on the remote host, it worked correctly. So I guess this is due to the fact that both eth cards exist on the same host. I need to create an iptables INPUT rule for a local eth card (destination and source on the same machine); I need it to simplify a test. Did I guess the problem correctly? Is there a way to bypass this?

    PS: connecting them via a switch didn't help; the rule still wasn't applied. Running on Ubuntu. tcpdump shows the packet:

        10:48:42.365002 IP 192.168.1.2.38550 > 192.168.1.1.34298: Flags [S], seq 0, win 5840, length 0

    but logging with iptables rules like these captures nothing:

        iptables -A INPUT -p tcp -j LOG --log-prefix '*****************'
        iptables -A OUTPUT -p tcp -j LOG --log-prefix '#################'
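    Your guess is right: when both addresses are local, Linux delivers the traffic over the loopback path, so it never traverses the physical NICs' netfilter hooks the way remote traffic does. One way to force a real on-the-wire round trip on a single machine is to move one interface into a network namespace, so the kernel treats the two ends as separate hosts. A sketch, assuming a kernel and iproute2 with namespace support:

        ip netns add peer
        ip link set eth1 netns peer
        ip netns exec peer ip addr add 192.168.1.1/24 dev eth1
        ip netns exec peer ip link set eth1 up
        # the NFQUEUE rule now goes inside the namespace that receives the packet
        ip netns exec peer iptables -A INPUT -p tcp -j NFQUEUE --queue-num 0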

  • Surround sound in Windows 7 with SoundMAX

    - by Henri
    Hey guys, I have an Asus M2N motherboard with a SoundMAX 1988 chip on it. I want to have surround sound in Windows 7, but I can't get it to work. I tried a lot of different drivers, including the latest beta for Windows 7. The problem is that when I configure the speakers and do a "Test", all speakers work. But whenever I want to play a video or an MP3, only the front speakers work. In Vista there was an option called "Surround Fill" that would fix this problem; however, I can't seem to find it in Windows 7. (The Enhancements tab is completely gone for the speakers; other output devices do have that tab.) Anybody know the fix?

    Edit: I got it kind of working by using a trial version of SRS Audio Sandbox. However, the quality is quite bad, since the program tries to create real 5.1 out of stereo sound. I just want stereo over 4 speakers instead of fake 5.1 over 4 speakers.

  • Basic IPTables setup for OpenVPN/HTTP/HTTPS server

    - by Afronautica
    I'm trying to get a basic iptables setup on my server that will allow HTTP/SSH access, as well as enable the use of the server as an OpenVPN tunnel. The following is my current rule set; the problem is that OpenVPN queries (port 1194) seem to be getting dropped as a result of it. Pinging a website while logged into the VPN results in the response:

        Request timeout for icmp_seq 1
        92 bytes from 10.8.0.1: Destination Port Unreachable

    When I clear the iptables rules, pinging from the VPN works fine. Any ideas?

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        iptables -A INPUT -p tcp --dport 1194 -j ACCEPT
        iptables -A FORWARD -p tcp --dport 1194 -j ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -j ACCEPT
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
        iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
        iptables -A INPUT -j REJECT
        iptables -A FORWARD -j REJECT
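    Two gaps stand out in that rule set: OpenVPN defaults to UDP (the ACCEPT rules above only match TCP), and nothing accepts traffic on the tun interface, so the trailing REJECT rules catch everything coming out of the tunnel. A hedged sketch of the additions, assuming a standard tun0 device and the default UDP transport:

        iptables -A INPUT -p udp --dport 1194 -j ACCEPT              # only if the server runs over UDP
        iptables -A INPUT -i tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT

    The tun0 rules need to land before the final REJECTs in each chain (use -I instead of -A when adding to a live rule set).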

  • Correcting a messed-up file tree in an NTFS partition

    - by Fullmooninu
    It's a real mess of a situation, but I'm nearly out of options. It's my personal hard drive, so it's very important to me, and yes, I have no backup =(

    The short story:
    1) I have two discs: one with Windows, and another where I had a bit of empty space at the front of the disk so I could install Linux. The rest was occupied by a 1.8TB NTFS partition filled with data.
    2) I installed Linux, and after a while realized there was not enough space for everything, so I tried using GParted and told it to resize the NTFS partition to a smaller size.
    3) The system jammed. I had to reboot, which broke the resizing operation.

    Here's what I did to fix it:
    a) Rebooted into a Linux live session and used TestDisk to deep-analyze the disk and recover the possible partitions. It found several versions of the NTFS partition, probably made during the resizing. I told TestDisk to open every one of them, and only one could list its files; trying to open the other candidates showed an error message. I assumed the one without errors was the correct one, and told TestDisk to recover that partition and write a new MBR.
    b) The partition had errors. Linux has an NTFS fixing tool; I used it, but the system still had errors.
    c) So I booted into Windows and used chkdsk to correct all errors in the partition.
    d) Everything seems fine, but now, back in Windows, when I open one file, it opens another file, or part of another file. Some files took up the position of other files.

    What I think happened is that I recovered an old file tree, not the most current one; the old one just happened to be intact, while the most recent one was damaged. As such, the files that were moved during the failed resizing were, during the automatic correction, wrongly assumed to be in their original places. So when I open a file, it actually opens another one: Radiohead - Creep.mp3 will open, and it will actually be a bit of another song, or even code from a jpg. Some files seem to be all right, but others seem to have had their position taken by others. Does anyone know of something really powerful that can help me solve this?

  • Application that will identify the percentage of your system disk bandwidth used on an application-by-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) more than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory active-scanning anti-virus software that I do not have control over, and also some IT-department-mandated services that seem to suck up a lot of available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in the Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing the Volume Shadow Storage, which is flushing information obviously written by something else other than VSS itself, and then writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario:

        MyApp.exe writes 100k/s and reads 100k/s directly.
        VSS ends up writing another 100k/s.
        Page faults caused inside MyApp.exe cause another 100k/s of writes.

    So the total "cost" of MyApp.exe running during a period of time (let's say 1 second) is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-I/O-watching piece of software I can use?
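    For the direct (non-VSS, non-pagefile) share, Windows does expose per-process I/O counters that can be summed from PowerShell. A minimal sketch using the standard Process counter set; treat the exact instance names as machine-dependent:

        Get-Counter '\Process(*)\IO Data Bytes/sec' -SampleInterval 1 -MaxSamples 5 |
          ForEach-Object {
            $_.CounterSamples |
              Sort-Object CookedValue -Descending |
              Select-Object -First 10 InstanceName, CookedValue
          }

    This won't attribute VSS or page-fault writes back to the originating process, though; that level of attribution generally needs ETW-based tracing (for example, the Windows Performance Toolkit).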

  • WordPress on Apache is redirecting all HTTPS to HTTP

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin. I don't know anything about WordPress, however. My problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302 redirecting to http. The WordPress site itself is configured to use https, and we see that in the generated pages the links are all https links. In the Apache config there are no rewrite rules and no redirects. However, any request to an https:// URL is answered with a redirect to the equivalent http URL. And I really would like to know where these redirects are coming from and what is generating them. I've increased the log level on the web server to DEBUG but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress, but did not get a debug.log file in the directory where one should appear. I'm really at a loss here and need to fix this urgently. Any hints on where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and https without any problem...

    Edit: I created a small info.php script and dropped it in the web server's root. Calling it yields the output of the script; no redirect is generated. This suggests that it's not the web server but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
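    WordPress issues exactly this kind of 302 from its canonical-URL logic when it believes the request is plain HTTP, for instance when SSL is terminated by a proxy and $_SERVER['HTTPS'] never gets set. If that's the setup here, a common wp-config.php fix, sketched under that assumption, is:

        /* Tell WordPress the request was HTTPS when the terminator says so */
        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
            && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }

    It's also worth comparing the siteurl and home values in the wp_options table; if they start with http://, WordPress will redirect https requests back to http regardless of the Apache config.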

  • Fedora 14 update problem

    - by Marko
    How is everybody doing? :) I've been having this problem with the Fedora 14 update for the last couple of weeks. When I run yum update I get the following result:

        Running rpm_check_debug
        ERROR with rpm_check_debug vs depsolve:
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        kernel-uname-r = 2.6.32.10-90.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-1:195.36.15-1.fc12.1.i686
        kernel-uname-r = 2.6.32.16-150.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-1:195.36.31-1.fc12.2.i686
        kernel-uname-r = 2.6.32.21-168.fc12.i686.PAE is needed by (installed) kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-1:195.36.31-1.fc12.5.i686
        Please report this error in http://yum.baseurl.org/report
        ** Found 9 pre-existing rpmdb problem(s), 'yum check' output follows:
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of libpython2.6.so.1.0
        VirtualBox-3.2-3.2.10_66523_fedora13-1.i686 has missing requires of python(abi) = ('0', '2.6', None)
        1:kmod-nvidia-2.6.32.10-90.fc12.i686.PAE-195.36.15-1.fc12.1.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.10', '90.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.16-150.fc12.i686.PAE-195.36.31-1.fc12.2.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.16', '150.fc12.i686.PAE')
        1:kmod-nvidia-2.6.32.21-168.fc12.i686.PAE-195.36.31-1.fc12.5.i686 has missing requires of kernel-uname-r = ('0', '2.6.32.21', '168.fc12.i686.PAE')
        mysql-workbench-gpl-5.2.28-1fc13.i386 has missing requires of libpython2.6.so.1.0
        pysvn-1.7.2-1.fc13.i686 has missing requires of python(abi) = ('0', '2.6', None)
        system-config-display-2.2-1.fc12.i686 has missing requires of libpython2.6.so.1.0
        system-config-display-2.2-1.fc12.i686 has missing requires of python(abi) = ('0', '2.6', None)

    Does anybody have a similar issue?
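    The blockers are leftover fc12 kmod-nvidia packages tied to kernels that no longer exist, plus a few fc13 packages built against Python 2.6. One hedged way through, assuming those old fc12 kernels are already gone from the system:

        # remove the orphaned nvidia kernel modules for the old fc12 kernels
        yum remove 'kmod-nvidia-2.6.32*'
        # then retry
        yum update

    If yum-utils is installed, package-cleanup --problems gives a second opinion on the remaining broken dependencies before removing anything else.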

  • What are the possible problems when wget returns code 500 but the same request works in normal browsers?

    - by markus
    What should I be looking for when wget returns 500 but the same URL works fine in my web browser? I don't see any access_log entries that seem to be related to the error.

        DEBUG output created by Wget 1.14 on linux-gnu.
        <SSL negotiation info stripped out>
        ---request begin---
        GET /survey/de/tools/clear-caches/password/<some-token> HTTP/1.1
        User-Agent: Wget/1.14 (linux-gnu)
        Accept: */*
        Host: testing.thesurveylab.net
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.0 500 Internal Server Error
        Date: Wed, 12 Dec 2012 14:53:07 GMT
        Server: Apache/2.2.3 (CentOS)
        Set-Cookie: blueprint2-staging=8jnbmkqapl30hjkgo0u6956pd1; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Strict-Transport-Security: max-age=8640000;includeSubdomains
        X-UA-Compatible: IE=Edge,chrome=1
        Content-Length: 5
        Connection: close
        Content-Type: text/html; charset=UTF-8
        ---response end---
        500 Internal Server Error
        Stored cookie testing.thesurveylab.net -1 (ANY) / <session> <insecure> [expiry none] blueprint2-staging 8jnbmkqapl30hjkgo0u6956pd1
        Closed 3/SSL 0x0000000001f33430
        2012-12-12 15:53:07 ERROR 500: Internal Server Error.
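    Since the application answers 500 for wget but not for a browser, the usual differentiators are headers and session state: User-Agent sniffing, a missing Accept-Language, or a cookie the browser already holds. A hedged way to test is to replay the request while impersonating the browser one header at a time:

        wget --debug \
             --user-agent="Mozilla/5.0 (X11; Linux x86_64)" \
             --header="Accept: text/html,application/xhtml+xml" \
             --header="Accept-Language: en-US,en;q=0.5" \
             "https://testing.thesurveylab.net/survey/de/tools/clear-caches/password/<some-token>"

    If adding the browser's session cookie (--header="Cookie: ...") is what makes it work, the 500 is really an unauthenticated/first-visit code path in the application, and the server's error log (not access_log) is where the stack trace should show up.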

  • iptables drops some packets on port 80 and I don't know the cause

    - by Janning
    Hi, we are running a firewall with iptables on our Debian Lenny system. I'll show only the relevant entries of our firewall:

        Chain INPUT (policy DROP 0 packets, 0 bytes)
        target  prot opt in  out  source     destination
        ACCEPT  all  --  lo  *    0.0.0.0/0  0.0.0.0/0
        ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        ACCEPT  tcp  --  *   *    0.0.0.0/0  0.0.0.0/0  tcp dpt:80 state NEW

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
        target  prot opt in  out  source     destination
        ACCEPT  all  --  *   lo   0.0.0.0/0  0.0.0.0/0
        ACCEPT  all  --  *   *    0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
        LOGDROP all  --  *   *    0.0.0.0/0  0.0.0.0/0

    Some packets get dropped each day with log messages like this (for privacy reasons I have removed the IP addresses):

        Feb 5 15:11:02 host1 kernel: [104332.409003] dropped IN= OUT=eth0 SRC= DST= LEN=1420 TOS=0x00 PREC=0x00 TTL=64 ID=18576 DF PROTO=TCP SPT=80 DPT=59327 WINDOW=54 RES=0x00 ACK URGP=0

    This is no reason for concern, but I just want to understand what's happening. The web server tries to send a packet to the client, but the firewall somehow comes to the conclusion that this packet is "unrelated" to any prior traffic. I have set the kernel parameter ip_conntrack_max to a value high enough to be sure all connections get tracked by the iptables state module:

        sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288

    What's funny about it is that I get one connection drop every 20 minutes:

        06:34:54 dropped IN=
        06:52:10 dropped IN=
        07:10:48 dropped IN=
        07:30:55 dropped IN=
        07:51:29 dropped IN=
        08:10:47 dropped IN=
        08:31:00 dropped IN=
        08:50:52 dropped IN=
        09:10:50 dropped IN=
        09:30:52 dropped IN=
        09:50:49 dropped IN=
        10:11:00 dropped IN=
        10:30:50 dropped IN=
        10:50:56 dropped IN=
        11:10:53 dropped IN=
        11:31:00 dropped IN=
        11:50:49 dropped IN=
        12:10:49 dropped IN=
        12:30:50 dropped IN=
        12:50:51 dropped IN=
        13:10:49 dropped IN=
        13:30:57 dropped IN=
        13:51:01 dropped IN=
        14:11:12 dropped IN=
        14:31:32 dropped IN=
        14:50:59 dropped IN=
        15:11:02 dropped IN=

    That's from today, but on other days it looks like this too (sometimes the rate varies). What might be the reason? Any help is greatly appreciated. Kind regards, Janning
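    A logged packet with ACK set, SPT=80 and no SYN usually means the conntrack entry for that connection was already gone when a late packet (often a retransmission or a FIN/ACK straggler) arrived, so the state module classified it as unrelated rather than ESTABLISHED. To watch entries being created and destroyed in real time, the conntrack userspace tool can help; a sketch, assuming the conntrack package is installed:

        conntrack -E -p tcp | grep 'sport=80'

    Comparing the DESTROY events against the drop timestamps would show whether port-80 entries are expiring just before the dropped packet arrives; the knobs to look at are the ip_conntrack TCP timeout sysctls that sit alongside the ip_conntrack_max you already raised.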

  • fail2ban iptables rule won't block

    - by Termiux
    So I set up fail2ban on my Debian 7 server, but I've still been getting hit a lot and I don't know why it's not blocking properly. The regex works; it recognizes the attempts. But it seems the iptables rules it inserts won't work. This is how the iptables output looks after fail2ban tries to block:

        Chain INPUT (policy ACCEPT)
        num  target                prot opt source     destination
        1    fail2ban-courierauth  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:25
        2    fail2ban-couriersmtp  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:25
        3    sshguard              all  --  0.0.0.0/0  0.0.0.0/0

        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source  destination

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source  destination

        Chain fail2ban-courierauth (1 references)
        num  target  prot opt source      destination
        1    DROP    all  --  216.x.y.z   0.0.0.0/0
        2    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-courierimap (0 references)
        num  target  prot opt source      destination
        1    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-courierpop3 (0 references)
        num  target  prot opt source      destination
        1    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-couriersmtp (1 references)
        num  target  prot opt source      destination
        1    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-postfix (0 references)
        num  target  prot opt source      destination
        1    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

        Chain fail2ban-sasl (0 references)
        num  target  prot opt source      destination
        1    RETURN  all  --  0.0.0.0/0   0.0.0.0/0

    In the output above you can see that the fail2ban-courierauth chain added a DROP rule for the IP, but I'm still able to connect to the server. Why isn't it blocking?
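    Note that the only jumps into the fail2ban chains sit on tcp dpt:25, so the DROP on 216.x.y.z can only ever affect SMTP connections; logins over IMAP, POP3, SSH or anything else never traverse that chain. Two quick checks, sketched with standard iptables flags:

        # are packets actually hitting the DROP rule? watch the pkts counter
        iptables -L fail2ban-courierauth -n -v --line-numbers
        # and note which port you are reconnecting on: the chain only sees dpt:25 traffic

    If the counter on the DROP rule stays at zero while you connect, the service you are testing isn't covered by this jail's port list, and the fix belongs in the jail's port setting rather than in iptables.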

  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter for sysadmins):

        gethostbyaddr($_SERVER['REMOTE_ADDR'])

    I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script?

    -- Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but it's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later presents it to the user. So the HTTP referrer comes up empty, since the caller site is making a direct call to me. Any hope?
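    gethostbyaddr does a reverse (PTR) lookup on the caller's IP, and shared hosts almost never point their PTR records at a customer domain; they point at the datacenter's generic name (The Planet was a large hosting provider whose infrastructure HostGator used). There is no reliable way to derive the caller's domain from the IP alone. A common API pattern instead is to have the caller declare its domain and verify the claim against DNS; a sketch, where the domain parameter is a hypothetical addition to your API:

        <?php
        // caller passes ?domain=callersite.com along with the request
        $claimed = isset($_GET['domain']) ? $_GET['domain'] : '';
        $ips = $claimed !== '' ? gethostbynamel($claimed) : false;
        if ($ips !== false && in_array($_SERVER['REMOTE_ADDR'], $ips, true)) {
            // the claimed domain resolves to the calling IP: likely genuine
        }

    This breaks down when the caller's outbound IP differs from its web-facing A record, so an API key issued per registered domain is the more robust approach.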

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + arrow keys for 13 years to switch virtual desktops in X, I've recently been convinced to switch to using the Super keys instead (the Windows key and the context-menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events these keys send as if they were normal alphanumeric keys. For example, I first noticed in a Google Docs spreadsheet that if I press the Windows key alone over a cell, it starts editing that cell. It doesn't insert anything; it just sends a key event that Firefox sees, and editing starts. This caused problems on a collaborative document I was working on: it led to me accidentally erasing the data in a few fields before I realised what was going on. I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that it's a modifier key and doesn't send anything until a second key is pressed. My setup is the following:

        Ubuntu 10.10
        XFCE 4
        Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a keypress and keyrelease of my Super_L (left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
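    The xev output shows the key generating a plain Super_L keysym; whether it acts as a modifier also depends on it being attached to a modifier bucket (normally mod4). A hedged .Xmodmap addition, assuming nothing else on the system already claims mod4:

        ! make sure both Super keys are registered as mod4 modifiers
        remove mod4 = Super_L Super_R
        keycode 133 = Super_L
        keycode 135 = Super_R
        add mod4 = Super_L Super_R

    One caveat: even for a proper modifier, X still delivers the bare KeyPress event to the focused client, and an application can choose to react to it. If Google Docs still starts editing on a bare press after the remap, the remaining behavior lives in the application (or browser) rather than the keymap.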

  • What is the fall-off of sub-second throughput on Ethernet network interfaces?

    - by Kyle Brandt
    On a network interface, speeds are given in terms of data over time; in particular, they are bits per second. However, in the uber-fast world of computing, a second is kind of a really long time. So, for example, given a linear falloff, a 1 Gbit-per-second interface would do 500 Mbit per half second, 250 Mbit per quarter second, etc. I imagine that at certain units of time this is no longer linear; perhaps it is set by Ethernet frequencies, system clock speeds, interrupt timers, etc. I am sure this varies depending on the system, but does anyone have more information or whitepapers on this? One of the main reasons I am curious is to understand output drops on interfaces. Even if the speed per second is much lower than the interface can handle, perhaps there are spikes that cause drops for only small numbers of milliseconds. Perhaps various coalescing would hide this effect, or perhaps increase it on the receiving interface? Do queues make a difference here? Example: given that this is linear down to the millisecond, we would have 1 Mbit/ms, and if Wireshark isn't distorting what I see, should I see drops when I have a spike beyond 1 Mbit?
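    A back-of-the-envelope sketch of why sub-second spikes cause drops even at low averages: a 1 Gbit/s link drains at most 1 Mbit per millisecond. If traffic arrives in a 1 ms burst of 2 Mbit, the extra 1 Mbit (about 125 KB) must sit in the interface or switch queue; any queue shorter than that overflows and drops, even though the one-second average might be a tiny fraction of line rate. So yes, queues and their depth are exactly what decide whether a millisecond-scale spike beyond line rate shows up as output drops, and interrupt coalescing mostly moves where the queueing happens rather than eliminating it.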

  • How to change the password on a RAR archive without modifying the archived files' attributes (modified/created)?

    - by Larry78
    How do I change the password of a .RAR archive without changing the date/time attributes of the files in the archive? Unfortunately, you can't directly change the password of the archive with WinRAR; you have to extract the files and then make a new archive with the new password. So the created/modified attributes of the files in the archive get changed. I know you can manually change the attributes of a file with available utilities, but there are hundreds of files in the archive, each with unique attributes, so it would take a very long time to "fix" each file before re-archiving it. I'm using WinRAR 3.51, the last free version, on Windows XP Pro SP3.

    Update: I don't care if the output is a .RAR file or a ZIP file. IZArc 4.1 will convert the RAR to a ZIP, and it keeps the dates. The problem is that it compresses the files; there isn't a "store" option, and setting the default to store in the main configuration doesn't affect conversions. The RAR contains uncompressed files. None of the other archiving programs will even do a conversion; a couple claim to, or try to, but the errors returned indicate a very lousy application. So far I've tried PeaZip, 7-Zip, FilZip, TugZip, SimplyZip SE, QuickZip, and WinShrink (from downloads.cnet.com). WinRAR gives the error "skipping encryped archive" when I try the conversion. It asks for the password first, and I know it's right, as I opened the archive and I can read/view all the files in it. It works on non-encrypted files.
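    The command-line rar/unrar pair that ships with WinRAR may get around the attribute problem, since extraction restores modification times by default and the -ts family of switches controls which timestamps are stored and restored. A hedged sketch; switch support varies by RAR version, so verify the exact flags against the rar.txt that ships with WinRAR 3.51:

        unrar x -pOLDPASS -tsc -tsa archive.rar C:\tmp\extracted\
        rar a -pNEWPASS -m0 -tsc -tsa new.rar C:\tmp\extracted\*

    Here -m0 stores without compression (matching your uncompressed archive) and -ts[c,a] asks for creation and access times on top of the default modification time.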

  • I can't delete a directory inside a junctioned directory

    - by Fredy Muñoz
    So this is the deal. A couple of days ago I moved my profile folder C:\Documents and Settings\fmunoz to a different drive, D:\fmunoz. Today, I created a directory on my desktop using the point-and-click method:

        1. Right-click on an empty space on the desktop
        2. Select New
        3. Select Folder
        4. Leave the default name New Folder and press Enter

    I tried to delete the folder using the point-and-click method:

        1. Right-click the New Folder directory
        2. Select Delete

    After five seconds, I got the following message:

        Error Deleting File or Folder
        Cannot delete New Folder: Access is denied.
        Make sure the disk is not full or write-protected
        and that the file is not currently in use.

    Initially I thought there must be some sort of indexing service locking the directory, so I got a list of open files using the TuneUp Process Manager tool, but the New Folder directory wasn't there. I double-clicked My Computer, navigated to the desktop directory C:\Documents and Settings\fmunoz\Desktop, tried to delete the New Folder directory using the same point-and-click method described above, and got exactly the same message after the same amount of time. In the same window, I navigated to the actual location of the desktop directory, D:\fmunoz\Desktop, tried to delete the New Folder directory, and this time it worked.

    I thought this behavior was due to some special treatment that Windows gives to the desktop or profile directories, so I tried the same thing with a different set of directories:

        1. Created a folder D:\dummy
        2. Created a junction C:\dummy pointing to D:\dummy
        3. Created a New Folder directory in C:\dummy
        4. Tried to delete New Folder from C:\dummy. Didn't work.
        5. Tried to delete New Folder from D:\dummy. It worked.

    I then tried creating the folder in the actual directory rather than the junction directory:

        1. Created a New Folder directory in D:\dummy
        2. Tried to delete New Folder from C:\dummy. Didn't work.
        3. Tried to delete New Folder from D:\dummy. It worked.

    I also tried using the Delete key instead of the Delete option in the context menu, but it didn't work either. Shift+Delete works, and so does the rd command in the console, but in both cases the deleted directory doesn't go to the Recycle Bin, which is my intention when using the Delete context-menu option or the Delete key.

  • Reverse SSH tunnel: how can I send my port number to the server?

    - by Tom
    I have two machines, Client and Server. Client (which is behind a corporate firewall) opens a reverse SSH tunnel to Server, which has a publicly accessible IP address, using this command:

        ssh -nNT -R0:localhost:2222 [email protected]

    In OpenSSH 5.3+, the 0 occurring just after the -R means "pick an available port" rather than explicitly calling for one. The reason I'm doing this is that I don't want to pick a port that's already in use. In truth, there are actually many Clients out there that need to set up similar tunnels. The problem at this point is that the server does not know which Client is which. If we want to connect back to one of these Clients (via localhost), then how do we know which port refers to which Client? I'm aware that ssh reports the port number on the command line when used in the above manner. However, I'd also like to use autossh to keep the sessions alive. autossh runs its child process via fork/exec, presumably, so the output of the actual ssh command is lost in the ether. Furthermore, I can't think of any other way to get the remote port from Client. Thus, I'm wondering if there is a way to determine this port on Server. One idea I have is to somehow use /etc/sshrc, which is supposedly a script that runs for every connection. However, I don't know how one would get the pertinent information here (perhaps the PID of the particular sshd process handling that connection?). I'd love some pointers. Thanks!
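    On the server side, each tunnel is a listening socket owned by the sshd child that serves that client, so you can correlate listening port to client address through the sshd PID. A hedged sketch with standard tools (the <PID> is a placeholder taken from the first command's output):

        # list the ports each sshd process is listening on, with PIDs
        sudo netstat -tlnp | grep sshd
        # for a given PID, find the established connection back to the client
        sudo netstat -tnp | grep '<PID>/sshd'

    Matching the two lines that share a PID ties "client at corporate IP X" to "reverse tunnel port Y". Alternatively, having each client identify itself over its own tunnel (say, a tiny script that connects back through the forwarded port and reports a hostname) avoids the PID bookkeeping entirely.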

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it; grep and cron jobs are not my strong side. Here are some sample outputs from git status.

    Standing in the folder containing the Git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important; there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also Git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
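    Rather than grepping the human-readable output, git status --porcelain is built for scripting: it prints nothing at all when the tree is clean. A minimal sketch of a crontab entry, with the path and mail address as placeholders:

        # check every hour; mail only when something changed
        0 * * * * cd /path/gitrepo && test -n "$(git status --porcelain)" && git status --porcelain | mail -s "Uncommitted changes in /path/gitrepo" admin@example.com

    The same line works for several repositories if you wrap it in a small shell script that loops over their paths.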

  • Powershell Get-Process cannot connect to remote computer

    - by amandion
    I've been struggling with this for a few hours and can't figure it out. I have two Windows 7 computers. One is my workstation, which uses PowerShell for administrative maintenance; the other is the machine I'd like to execute remote PowerShell cmdlets on. On both computers I've enabled PowerShell remoting and added all computers to TrustedHosts with the * value. On the remote computer I've started the Remote Registry service and ensured that the DCOM, Winmgmt and WinRM services are running. The firewall is disabled on the remote machine too. The cmdlet I try to run is:

        Get-Process -ComputerName $name

    where $name is the name of the remote machine. I keep getting an error saying that it could not connect to the remote PC. I've also tried using the IP and get the same error. These PCs are not in a domain. I am able to do the following successfully:

        Invoke-Command {Get-Process} -ComputerName $name -Credential $creds

    where $name is the machine name and $creds is the username and password for the remote computer's local Admin account. This gives me the output I would expect. While this is an acceptable workaround, I am curious: why doesn't Get-Process with remoting work as it should? I've seen a few articles on the web suggesting people have had success with it on its own. Each time, I am using PowerShell on my workstation with elevated privileges. Any ideas?
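    The catch is that Get-Process -ComputerName doesn't use PowerShell remoting (WinRM) at all; it goes over the legacy RPC/Remote Registry path, and the cmdlet has no -Credential parameter, so in a workgroup it authenticates as your local account, which the other machine doesn't recognize. Invoke-Command works because it really does use WinRM with the credentials you hand it. If you want a one-liner that accepts credentials without Invoke-Command, WMI is one hedged alternative:

        Get-WmiObject Win32_Process -ComputerName $name -Credential $creds |
          Select-Object Name, ProcessId, WorkingSetSize

    This rides DCOM rather than WinRM, so it relies on the same firewall openings you already have in place.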

  • Gigabit network limited to 25MB/s by CPU. How to make it faster?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25MB/s, apparently limited by the single-core Intel Atom 230: when the maximum throughput is reached, CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading-enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled; there is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp segmentation offload: on
        udp fragmentation offload: off
        generic segmentation offload: off

    The following is the output of powertop when the network is idle:

        Wakeups-from-idle per second : 61.9     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          90.9% (101.3)       <interrupt> : eth0
           4.5% (  5.0)             iftop : schedule_timeout (process_timeout)
           1.8% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
           0.9% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
           0.5% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25MB/s is reached:

        Wakeups-from-idle per second : 11175.5  interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          99.9% (22097.4)     <interrupt> : eth0
           0.0% (  5.0)             iftop : schedule_timeout (process_timeout)
           0.0% (  2.0)     <kernel core> : clocksource_register (clocksource_watchdog)
           0.0% (  1.0)            dhcdbd : schedule_timeout (process_timeout)
           0.0% (  0.6)     <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the 20000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? The other computers in the network can usually transfer at 50+MB/s without problems. And a minor question: how can I find out which driver is in use for eth0?
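    20,000 interrupts per second is a plausible culprit on an Atom; interrupt coalescing, if the driver supports it, batches several packets per interrupt and can cut that figure dramatically. ethtool also answers the driver question directly. A sketch; whether -C is honored depends on the nForce driver (typically forcedeth on Linux):

        ethtool -i eth0              # driver name and version in use
        ethtool -c eth0              # current coalescing settings
        ethtool -C eth0 rx-usecs 100 # example: wait up to 100 µs to batch RX interrupts

    Larger rx-usecs trades a little latency for far fewer wakeups; if the driver rejects -C, jumbo frames (where the switch supports them) are the other usual lever for cutting per-packet CPU cost.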

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space; in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         4121 jboss  20   0 5913m 603m  14m S  0.7  8.7  3:59.90 5.2g java
        22730 root   20   0 2394m 4012 1976 S  2.0  0.1  4:20.57 2.3g PassengerHelper
        20564 rails  20   0 2539m 220m 9828 S  0.3  3.2  0:23.58 2.3g java
         1423 nscd   20   0  877m 1464  972 S  0.0  0.0  0:03.89 876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none is being used (probably because there's still 1.8G of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap, or maybe even total memory usage, isn't what it appears to be... Any help would be appreciated!
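    top doesn't read real swap usage for that column: classic top derives SWAP as VIRT minus RES, i.e. everything mapped but not resident, which includes mapped files and reserved-but-untouched allocations (the JVM reserves address space aggressively, hence jboss's 5.2g). Actual per-process swap has been exposed by the kernel since 2.6.34 and can be read directly; a sketch:

        # real swap consumption for one process
        grep VmSwap /proc/4121/status
        # or summed across all processes
        awk '/VmSwap/ {sum += $2} END {print sum " kB"}' /proc/[0-9]*/status

    With "Swap: 0k used" in the summary line, those will all read 0 kB, confirming the column is an artifact of how top computes it rather than real swap traffic.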

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a web server that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2G of memory and is currently using about 1.8G of it. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory. I was wondering how, if I were to release it, I might anticipate my memory needs based on the traffic it gets. I've poked around on Google, and what I've found has been a bit tenuous. Is there a good heuristic one should use to calculate memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like memory actually in use, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B for figuring this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb for estimating how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
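    A common rule of thumb for the traffic-dependent part: each concurrent request costs roughly one web-server worker's resident memory, so capacity is approximately free memory divided by per-worker RSS. A hedged sketch for measuring the per-worker figure, assuming Apache with worker processes named apache2:

        # average resident size (MB) of the web server workers
        ps -C apache2 -o rss= | awk '{sum += $1; n++} END {if (n) print sum/n/1024 " MB avg over " n " workers"}'

    With, say, 200 MB genuinely free and 40 MB per worker, you'd budget for about 5 simultaneous requests before swapping. The Java software's footprint is better treated as a fixed base load and measured separately, as you planned.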

  • Excel: How to treat multiple lines as one while sorting?

    - by crono
    I get an XLS file as a database report. The file is in the following format:

            | Customer | Name | ... | Orders
        1   | 6        | ...  | ... | 1234
        2   |          |      |     | 4567
        3   |          |      |     | 8910
        4   | 3        | ...  | ... | 3210
        5   |          |      |     | 8765
        6   | 1        | ...  | ... | 1000
        7   |          |      |     | 1001

    I need to sort this on a column that is only "filled" in the first line of a "record" (here: lines 1-3, 4+5, and 6+7), like "Customer" in this example. Is there a way (without falling back to VBA) to keep the lines that form a "record" together while sorting on them? I know this is abusing Excel, but I have no other choice here. The expected output after sorting on "Customer" would be:

            | Customer | Name | ... | Orders
        1   | 1        | ...  | ... | 1000
        2   |          |      |     | 1001
        3   | 3        | ...  | ... | 3210
        4   |          |      |     | 8765
        5   | 6        | ...  | ... | 1234
        6   |          |      |     | 4567
        7   |          |      |     | 8910
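    A formula-only approach is to add a helper column that carries the last seen Customer value down through the blank rows, then sort on the helper. A sketch, assuming Customer is column A with data starting in row 2 and the helper in column E:

        E2:  =IF($A2<>"", $A2, E1)

    Fill that down, then copy the helper column and Paste Special > Values (so sorting doesn't re-evaluate the references), and finally sort the whole range by column E. Each continuation row inherits its record's Customer number, so the record travels as a block; in practice Excel keeps rows with equal sort keys in their original order, which preserves the order within each record.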
