Search Results

Search found 5845 results on 234 pages for 'commit protocol'.


  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have one central repository and many local ones. On my machine I have both a local and the central repository. I can clone/commit/update/push/pull between the local and the central repository on my local machine very easily, but when I try to clone from another machine I get an error:

      listening at http://MyLocalMachine:8000/ (bound to *:8000)
      ----------------------------------------
      Exception happened during processing of request from ('192.168.0.194', 49319)
      Traceback (most recent call last):
        File "SocketServer.pyc", line 558, in process_request_thread
        File "SocketServer.pyc", line 320, in finish_request
        File "mercurial\hgweb\server.pyc", line 47, in __init__
        File "SocketServer.pyc", line 615, in __init__
        File "BaseHTTPServer.pyc", line 329, in handle
        File "BaseHTTPServer.pyc", line 323, in handle_one_request
        File "mercurial\hgweb\server.pyc", line 79, in do_GET
        File "mercurial\hgweb\server.pyc", line 70, in do_POST
        File "mercurial\hgweb\server.pyc", line 63, in do_write
        File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
        File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
        File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
      ErrorResponse
      ----------------------------------------

    The command line which started the central repo: hg serve -R TT -n TTZoli
    The command from the remote machine for cloning: hg clone --pull http://MyLocalMachine:8000/TT

    Config for the central repo (the second username is the user I'm trying to access the central repo with):

      [ui]
      username = MyLocalUserName
      username = test <[email protected]>
      [web]
      push_ssl = false

    Config for the remote repo:

      [ui]
      username = test <[email protected]>
      [web]
      push_ssl = false

    I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!
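
    For reference, a minimal sketch of the per-repo web options that are usually needed for anonymous clone/push over plain HTTP - the allow_push line and the hgrc location are assumptions, not something the question confirms:

      # hypothetical sketch: hg serve reads web options for the served repo from TT/.hg/hgrc
      [web]
      push_ssl = false
      allow_push = *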

    Read the article

  • Setup Version Control on Dreamweaver

    - by John Isaacks
    I have a Windows computer on the network called WIN2K8FS1. I have TortoiseSVN on a Windows computer, and when I check out a repository with Tortoise it asks me for the URL of the repository. I put in file://WIN2K8FS1/Media/SVN_repo and it creates the working copy. I am now trying to set up Dreamweaver CS5 to work with Subversion. I create a new site, go to the Version Control tab, and it asks for a lot of info. First is Access: I choose Subversion, since that is the only option. Second is Protocol: not sure which I need, so I go with HTTP? Third is Server Address: I am assuming this is the name of the computer with the repository, so I put in \\WIN2K8FS1\. Fourth is Repository Path: I put in /Media/SVN_repo. Fifth is Port, which I leave at the default of 80. Then it asks for a username and password. I never set one up for anything, so I put in my domain username and password. I click Test and it tells me: "Server and project are not accessible!" I am not sure what I am doing wrong. I am not the server admin, but I did create the repository and have access to it via Tortoise. So I am not sure what I am doing wrong in Dreamweaver.
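
    One detail worth noting: the repository has only ever been reached over file://, and Dreamweaver has no file:// option, so some server process has to expose it. A hedged sketch, assuming svnserve is available on WIN2K8FS1 and that the Media share maps to a local path such as D:\Media (both assumptions):

      rem hypothetical: run svnserve on the file server, rooted one level above the repository
      svnserve -d -r D:\Media
      rem Dreamweaver would then use roughly:
      rem   Protocol:        SVN
      rem   Server Address:  WIN2K8FS1
      rem   Repository Path: /SVN_repo
      rem   Port:            3690   (svnserve default)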

    Read the article

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    Because NetBIOS name resolution relies on subnet-local broadcasts and so does not cross routers, WINS has been installed and configured on two domain controllers (each on a different network) in order to enable browsing of remote networks. The WINS servers seem to be replicating with each other, and each has 127.0.0.1 set as the Primary WINS Server in its LAN interface properties, with nothing entered for the Secondary WINS Server. The DC which holds the PDC Emulator FSMO role has the Computer Browser service running and set to Auto start, and it has the WINS/NBT node type network setting at 0x8 (H-node - hybrid node). Remote network browsing does not work. Is the WINS/NBT node type correct for this scenario? The reason I think it may not be the right one is that I set the DHCP server's 046 WINS/NBT node type option to 0x8 as well, after which the DHCP clients started to disappear from the Network folders. When that option is not set, does it default to B-node (broadcast node)? Or could it be a problem with the WINS server setup?
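
    A hedged diagnostic sketch from one of the Windows clients, using only built-in tools (nothing here is specific to this environment):

      rem effective NetBIOS node type on this client
      ipconfig /all | findstr /C:"Node Type"
      rem how many names were resolved by broadcast vs. by the WINS name server
      nbtstat -r
      rem current NetBIOS name cache
      nbtstat -c
      rem release and re-register this client's names with WINS
      nbtstat -RR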

    Read the article

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which had the following issues. [Resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]

    No login prompt on the console:

      Done.
      Begin: Mounting root file system... ...
      Begin: Running /scripts/local-top ... Done.
      [ 0.545705] blkfront: xvda: barriers enabled
      [ 0.546949] xvda: xvda1
      [ 0.549961] blkfront: xvde: barriers enabled
      [ 0.550619] xvde: xvde1 xvde2
      Begin: Running /scripts/local-premount ... Done.
      [ 0.870385] kjournald starting. Commit interval 5 seconds
      [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
      Begin: Running /scripts/local-bottom ... Done.
      Done.
      Begin: Running /scripts/init-bottom ... Done.

    I also tried pressing ENTER and CTRL+C many times, to no avail.

    Resolved [/tmp was mounted as noexec; changing that fixed it]: I got errors when I tried to re-install udev in single-user mode:

      Unpacking replacement udev ...
      Processing triggers for ureadahead ...
      ureadahead will be reprofiled on next reboot
      Processing triggers for man-db ...
      Setting up udev (151-12.1) ...
      udev start/running, process 1003
      Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
      update-initramfs: deferring update (trigger activated)
      Processing triggers for initramfs-tools ...
      update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
      /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
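
    A hedged sketch of the fix referred to above: when /tmp is mounted noexec, the helper scripts mkinitramfs unpacks there cannot be executed, which produces exactly these "Permission denied" lines. Remounting /tmp exec for the duration of the rebuild is usually enough:

      mount -o remount,exec /tmp     # allow execution from /tmp temporarily
      update-initramfs -u            # or re-run the failed udev / initramfs-tools step
      mount -o remount,noexec /tmp   # restore the original, hardened mount options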

    Read the article

  • Test an SSH connection from the Windows command line

    - by IguanaMinstrel
    I am looking for a way to test whether an SSH server is available from a Windows host. I found this one-liner, but it requires a Unix/Linux host:

      ssh -q -o "BatchMode=yes" user@host "echo 2>&1" && echo "UP" || echo "DOWN"

    Telnet'ing to port 22 works, but that's not really scriptable. I have also played around with Plink, but I haven't found a way to get the functionality of the one-liner above. Does anyone know Plink well enough to make this work? Are there any other Windows-based tools that would work? Please note that the SSH servers in question are behind a corporate firewall and are NOT internet accessible.

    Arrrg. Figured it out:

      C:\>plink -batch -v user@host
      Looking up host "host"
      Connecting to 10.10.10.10 port 22
      We claim version: SSH-2.0-PuTTY_Release_0.62
      Server version: SSH-2.0-OpenSSH_4.7p1-hpn12v17_q1.217
      Using SSH protocol version 2
      Server supports delayed compression; will try this later
      Doing Diffie-Hellman group exchange
      Doing Diffie-Hellman key exchange with hash SHA-256
      Host key fingerprint is:
      ssh-rsa 1024 aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa
      Initialised AES-256 SDCTR client->server encryption
      Initialised HMAC-SHA1 client->server MAC algorithm
      Initialised AES-256 SDCTR server->client encryption
      Initialised HMAC-SHA1 server->client MAC algorithm
      Using username "user".
      Using SSPI from SECUR32.DLL
      Attempting GSSAPI authentication
      GSSAPI authentication initialised
      GSSAPI authentication initialised
      GSSAPI authentication loop finished OK
      Attempting keyboard-interactive authentication
      Disconnected: Unable to authenticate
      C:\>
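
    Building on that, a hedged sketch of making the probe scriptable from a Windows command prompt: with -batch, plink exits non-zero when it cannot connect or authenticate, so the exit code can drive the same UP/DOWN logic as the Unix one-liner (note that a reachable server which refuses authentication will still report DOWN, which may or may not be what you want):

      plink -batch user@host exit && echo UP || echo DOWN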

    Read the article

  • Is wiper.sh working?

    - by Aleksander Blomskøld
    I'm setting up a server running Ubuntu Precise, and I'm trying to verify whether SSD TRIM is working. fstrim is failing:

      ~ sudo fstrim -v /
      fstrim: /: FITRIM ioctl failed: Operation not supported

    So I tried wiper.sh from the hdparm package:

      wiper-3.5 sudo ./wiper.sh --verbose --commit /dev/sda1
      wiper.sh: Linux SATA SSD TRIM utility, version 3.5, by Mark Lord.
      rootdev=/dev/sda1
      fsmode2: fsmode=read-write
      /: fstype=ext4
      freesize = 169502088 KB, reserved = 1695020 KB
      Preparing for online TRIM of free space on /dev/sda1 (ext4 mounted read-write at /).
      This operation could silently destroy your data. Are you sure (y/N)? y
      Creating temporary file (167807068 KB)..
      Syncing disks..
      Beginning TRIM operations..
      get_trimlist=/sbin/hdparm --fibmap WIPER_TMPFILE.11503
      /dev/sda:
      trimming 3211263 sectors from 64 ranges succeeded
      trimming 3571713 sectors from 64 ranges succeeded
      trimming 3915776 sectors from 64 ranges succeeded
      (...)
      trimming 3657913 sectors from 60 ranges succeeded
      Removing temporary file..
      Syncing disks..
      Done.

    It seems to be working, but I'm wondering if it really is. Are there any cases where wiper.sh should work when fstrim doesn't? Is there any way I can check whether the TRIM actually succeeded (other than trusting the wiper.sh log)?
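
    A hedged sketch of one way to verify a TRIM end-to-end, independent of what wiper.sh reports (this assumes the drive returns zeroes for trimmed LBAs, which not every SSD does):

      dd if=/dev/urandom of=/trimtest bs=1M count=1 && sync
      hdparm --fibmap /trimtest              # note one of the begin_LBA values, e.g. 123456
      rm /trimtest && sync
      ./wiper.sh --commit /dev/sda1          # or fstrim, where it is supported
      hdparm --read-sector 123456 /dev/sda   # all zeroes strongly suggests the TRIM was honoured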

    Read the article

  • Rsync push files from Linux to Windows - SSH issue: connection refused

    - by piyush c
    For various reasons I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

      rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the command above on Linux (CentOS 5.5), I get the following error message:

      ssh: connect to host 10.0.0.60 port 22: Connection refused
      rsync: connection unexpectedly closed (0 bytes received so far) [sender]
      rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
      [root@localhost sync]# ssh [email protected]
      ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine. I want the command to run on the Linux machine so that I can embed it in a shell script. Can you tell me if I am missing anything?
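
    A hedged diagnostic sketch: "Connection refused" means nothing is listening on port 22 of the Windows box (ssh-agent only holds keys; it is not a server). A quick check from the Linux side, plus the server-side step that is usually the missing piece:

      nc -zv 10.0.0.60 22      # confirms whether anything at all accepts connections on port 22
      # on the Windows machine, an sshd service has to be installed and started - e.g. the
      # cwRsync *server* package or OpenSSH for Windows; the exact service name is an assumption:
      #   net start "OpenSSH Server"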

    Read the article

  • X.org - mouse gets stuck on press

    - by grawity
    I'm using Arch Linux. Very often, when I click on something, the OS sees the mouse press but not the release. If it was a link or file I clicked, moving the cursor drags it too. Hammering the same mouse button again has no effect. Usually, if I tap the touchpad (ALPS), the system finally sees both the press and the release of that, and I can continue working. (This might be because it uses a different driver - synaptics instead of evdev.) As you can imagine, this is quite annoying, even for someone who spends 70% of his life in front of a terminal app. This is not a mouse issue - I'm on a laptop, and this affects both the TrackPoint and an external USB mouse. This is not a DE or window manager issue - I have used GNOME (with Metacity, Compiz and Xfwm4), Xfce (with Metacity and Xfwm4), mwm, twm, awesome, and wmii. It doesn't seem to be a hardware thing either - after rebooting into Windows XP, everything works fine. hal is used for the auto-configuration of devices (as I have to disconnect the USB mouse often), so Xorg.conf really has nothing of relevance. Xorg -version shows:

      X.Org X Server 1.6.3.901 (1.6.4 RC 1)
      Release Date: 2009-8-25
      X Protocol Version 11, Revision 0

    If it changes anything, the laptop is a stone-age Dell Latitude C840. I kinda suspect either hal or the evdev thing to be the cause, but I really have no idea what to check further. In other words, HALP!#$ This thing is driving me nuts.
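
    A hedged diagnostic sketch for narrowing down where the release event gets lost: if the kernel device reports the button release but X never delivers it, the input driver (evdev/hal layer) is the likely culprit; if even the kernel never sees it, the problem is lower down:

      xinput list                    # find the device id of the external mouse / TrackPoint
      xev | grep -A1 Button          # click inside the xev window; expect ButtonPress followed by ButtonRelease
      evtest /dev/input/eventN       # kernel-level view (replace N with the mouse's event node)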

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and am not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with that SSL policy:

      ssl server-policy sslpol
      ip https ssl-server-policy sslpol
      ip https enable

    When I try to log onto the switch with Google Chrome I get the following error: "Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error." When I look this up I find references to errors due to TLS being used in SSL, but I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured and it is using a locally generated certificate; I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

      version 5.20.99, Release 2215
      #
      sysname MYSYSNAME
      #
      irf domain 10
      irf mac-address persistent timer
      irf auto-update enable
      undo irf link-delay
      #
      domain default enable system
      #
      telnet server enable
      #
      vlan 1
      #
      vlan 100
       description Management
      #
      radius scheme system
       primary authentication 127.0.0.1 1645
       primary accounting 127.0.0.1 1646
       user-name-format without-domain
      #
      domain system
       access-limit disable
       state active
       idle-cut disable
       self-service-url disable
      #
      user-group system
       group-attribute allow-guest
      #
      local-user admin
       password cipher
       authorization-attribute level 3
       service-type ssh telnet terminal
       service-type web
      #
      stp enable
      #
      ssl server-policy sslpol
       pki-domain MYDOMAIN
      #
      interface NULL0
      #
      interface Vlan-interface199
       ip address 192.168.199.140 255.255.255.0
      #
      interface GigabitEthernet1/0/1
       poe enable
       stp edged-port enable
      #
      interface Ten-GigabitEthernet2/1/2
      #
      dhcp-snooping
      #
      ntp-service unicast-server 192.168.1.71
      #
      ssh server enable
      #
      ip https ssl-server-policy sslpol
      ip https enable
      #
      load xml-configuration
      #
      user-interface aux 0 1
      user-interface vty 0 15
       authentication-mode scheme
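
    On the certificate side, a hedged sketch of generating a certificate off-box with OpenSSL, on the assumption that the switch's PKI domain can import a certificate/key pair instead of enrolling through SCEP (worth checking against the Comware PKI import commands for this release before relying on it):

      openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
              -subj "/CN=MYSYSNAME" -keyout switch.key -out switch.crt
      openssl pkcs12 -export -inkey switch.key -in switch.crt -out switch.p12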

    Read the article

  • Can I disable this Windows (XP) Security Warning?

    - by FumbleFingers
    I recently reformatted my hard drive and reinstalled Windows XP (I know I'll have to take the plunge and commit to Win8 "real soon now", but I'm just not quite ready for the upheaval yet! :) I used to use WinRAR (and later, when I got fed up with the "nag" messages, 7-Zip), but I haven't installed either of them in my new configuration, so I must be using the built-in XP facility when I open *.zip files. For years, I've been opening downloaded *.zip archives and using drag & drop to copy them to a File Explorer window open on the folder where I want the files to end up (usually My Documents\Downloads). But now I find that when I "drop" the file(s), I get a pop-up Windows Security Warning saying: "Are you sure you want to copy or move files to this folder? You should only move or copy files from locations that you trust." Can anyone explain why I'm getting this message, and is there any (reasonably easy, please! :) way to suppress it? Since I've already put the *.zip file on my computer, it seems a bit late to ask if I trust it. (Thus far, the files in question have always been plain text, so it's not a matter of dodgy programs, etc.) Apologies for the low-quality image - I don't have the appropriate tools or knowledge to do any better, and it doesn't help that my "PrtScr" screen capture has included what would have been on my second monitor (TV) if it had been turned on. If you can't read it, trust me - I have copied the text verbatim.
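
    A hedged aside on the likely mechanism: files saved from the Internet carry a Zone.Identifier alternate data stream, and the built-in zip handler keeps that marker in mind when you drag files out, which is what triggers the "locations that you trust" prompt. Sysinternals' streams utility can show and strip the marker (the path below is only an example, not taken from the question):

      streams.exe -d "C:\Documents and Settings\Me\My Documents\Downloads\archive.zip"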

    Read the article

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from DynDNS, but it's not updating the subdomain on Namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get Namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

      CONNECT: checkip.dyndns.org
      CONNECTED: using HTTP
      SENDING: GET / HTTP/1.0
      SENDING: Host: checkip.dyndns.org
      SENDING: User-Agent: ddclient/3.7.3
      SENDING: Connection: close
      SENDING:
      RECEIVE: HTTP/1.1 200 OK
      RECEIVE: Content-Type: text/html
      RECEIVE: Server: DynDNS-CheckIP/1.0
      RECEIVE: Connection: close
      RECEIVE: Cache-Control: no-cache
      RECEIVE: Pragma: no-cache
      RECEIVE: Content-Length: 106
      RECEIVE:
      RECEIVE: <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
      Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
      WARNING: skipping update of lf4bot from <nothing> to IPADD
      WARNING: last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
      WARNING: Wait at least 5 minutes between update attempts.

    ddclient.conf file:

      daemon=300                         # check every 300 seconds
      syslog=yes                         # log update msgs to syslog
      mail=root                          # mail all msgs to root
      mail-failure=root                  # mail failed update msgs to root
      pid=/var/run/ddclient.pid          # record PID in file.
      ssl=yes                            # use ssl-support. Works with
      use=web, web=checkip.dyndns.org/, web-skip='IP Address'   # found after IP Address
      protocol=namecheap \
      server=dynamicdns.park-your-domain.com \
      login=yourdomain.com \
      password=PASSWORD \
      lf4bot
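
    A hedged debugging sketch: running ddclient once in the foreground with tracing enabled usually shows the exact request sent to dynamicdns.park-your-domain.com and the response Namecheap returns, which tends to make clear whether it is the host name, the domain used as the login, or the Dynamic DNS password that is being rejected:

      ddclient -daemon=0 -debug -verbose -noquiet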

    Read the article

  • transparent git-svn gateway

    - by azatoth
    Currently we have a Subversion repository with the following layout:

      /trunc
        /group1
          /proj1
          /proj2
        /group2
          /proj3
        /etc..
      /tags
        /group1
          /proj1
          /proj2
        /group2
          /proj3
        /etc..
      /branch
        /anything temporary

    I believe this is a rather bad layout, but at the moment it's difficult to change it fully. Personally I dislike Subversion, mostly because of the long time it takes to check history, and also because branching and merging are cumbersome, so I really want to use git instead. Sadly we can't just switch to git, as the mental shift might be too overwhelming for some, so I was looking into git-svn to see if I could practically use that to solve the issue. Sadly that directly ends up in a bad situation, because I want to break each project out into its own git repo, and I don't want to have to recreate the git-svn checkout on each computer I work on. So I wondered whether there is a possibility to create some sort of transparent git <-> svn proxy/gateway, so that a push to that repo "commits" to the svn repo, and a commit to the svn repo updates the git repo. Google hasn't been my friend - I have only found generic usage help for git-svn - so I ask if you have some good ideas to accomplish this.
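
    For what it's worth, a hedged sketch of the pattern that usually approximates such a gateway: one server-side git-svn clone per project, plus a bare git repo that developers push to, with a small sync job shuttling commits between the two. Every URL, path and branch name below is an assumption, and note that dcommit rewrites the pushed commits (it adds git-svn-id lines), so the published history has to be force-updated and developers must rebase afterwards - one reason a fully transparent gateway is hard:

      # one-time setup per project
      git svn clone http://svnhost/repo/trunc/group1/proj1 /srv/gateway/proj1
      git init --bare /srv/git/proj1.git                 # what developers clone, pull from and push to
      cd /srv/gateway/proj1 && git remote add public /srv/git/proj1.git

      # svn -> git direction, run periodically (e.g. from cron)
      cd /srv/gateway/proj1 && git svn rebase && git push public master

      # git -> svn direction, run after developers push to the bare repo
      cd /srv/gateway/proj1
      git fetch public && git merge --ff-only public/master
      git svn dcommit                                    # replay the pushed commits into Subversion
      git push --force public master                     # publish the rewritten (git-svn-id) history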

    Read the article

  • how would it be possible to discover a cable modem's MAC remotely?

    - by amateurenthusiast
    I was reading the back archives of a Canadian privacy law blog, and the author linked to a judicial decision. Apparently, as part of an investigation in which Yahoo chat and Google's old 'Hello' image trading program were used, the officer was able to determine a suspect's modem's MAC address:

    "In order to determine who STEPHTOSH was, the officer did a trace on a programme called WHO IS in an effort to learn from where STEPHTOSH was coming. WHO IS is a command program available to the public. The officer was able to ascertain that the person using the name STEPHTOSH was a Rogers Internet customer. The officer was able to obtain the Internet Protocol address, also known as the I.P. There is only one location for an I.P., which is unique to that subscriber. By use of the website known as DNS STUFF.com, one is able to find with which company this I.P. is registered. It was ascertained that the I.P. address used by STEPHTOSH was registered to Rogers Cable, from the Toronto area. The officer also learned the Cable Modem MAC address used by STEPHTOSH. This was all the information the officer was able to amass."

    Now, it was my understanding that the MAC address of any given device can only be seen if you're just one 'hop' away on the network. The suspect in question was in Markham and the officer was part of the Toronto Police, so it's conceivable that they both used Rogers Internet. But would that still put them only one 'hop' away from each other? I thought the first hop after the modem was usually the ISP. And if he'd used a NetBIOS query against this guy's machine, it would return the Ethernet card's MAC, not the modem's. So is this guy on the same Rogers subnet as the suspect's cable modem? Is that functionality part of Google's Hello (I could only think it would be possible if Hello operated as a virtual LAN or something)? Does the officer have remote access to the ARP caches of the routers at Rogers, or is he just full of crap and lying to make his case stronger?

    Read the article

  • ProFTPD / PAM issues with new CentOS/Virtualmin install

    - by iamthewit
    Hi all, I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:

      Status:   Connecting to 1.1.1.1:21...
      Status:   Connection established, waiting for welcome message...
      Response: 220 FTP Server ready.
      Command:  USER username
      Response: 331 Password required for username.
      Command:  PASS *******
      Response: 230 User username logged in.
      Status:   Connected
      Status:   Retrieving directory listing...
      Command:  PWD
      Response: 257 "/" is current directory.
      Command:  TYPE I
      Response: 200 Type set to I
      Command:  PASV
      Response: 227 Entering Passive Mode (1,1,1,1,216,214)
      Command:  LIST
      Error:    Connection timed out
      Error:    Failed to retrieve directory listing

    and I get this from my /var/log/secure file:

      Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers.
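
    The control-channel login succeeds and only the passive-mode LIST times out, which on a cloud/NATed host usually points at the passive data ports being unreachable rather than at PAM. A hedged sketch of the common ProFTPD-side fix (the port range, public address and config path are assumptions):

      printf 'PassivePorts 49152 49252\nMasqueradeAddress 1.1.1.1\n' >> /etc/proftpd.conf
      iptables -I INPUT -p tcp --dport 49152:49252 -j ACCEPT   # open the same range in the firewall
      service proftpd restart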

    Read the article

  • iptables-restore: line 1 failed

    - by Doug
    Hello, I am new to servers, and I was following this guide and it failed on the first command instructed. Could anyone give me a hand? http://wiki.debian.org/iptables

      ~ZORO~:/etc# iptables-restore < /etc/iptables.test.rules
      iptables-restore: line 1 failed

    Edit: iptables.test.rules

      ~ZORO~:/etc# cat /etc/iptables.test.rules
      *filter
      # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
      -A INPUT -i lo -j ACCEPT
      -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
      # Accepts all established inbound connections
      -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # Allows all outbound traffic
      # You could modify this to only allow certain traffic
      -A OUTPUT -j ACCEPT
      # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
      -A INPUT -p tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp --dport 443 -j ACCEPT
      # Allows SSH connections for script kiddies
      # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
      -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
      # Now you should read up on iptables rules and consider whether ssh access
      # for everyone is really desired. Most likely you will only allow access from certain IPs.
      # Allow ping
      -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      # log iptables denied calls (access via 'dmesg' command)
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      # Reject all other inbound - default deny unless explicitly allowed policy:
      -A INPUT -j REJECT
      -A FORWARD -j REJECT
      COMMIT
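
    A hedged debugging sketch: "line 1 failed" against a file whose first line is simply "*filter" often means the problem is the file itself (a stray byte-order mark or Windows line endings from copy/paste) or missing privileges, rather than any individual rule. Two quick checks:

      file /etc/iptables.test.rules                         # should say plain ASCII text, not "with CRLF line terminators"
      iptables-restore --test < /etc/iptables.test.rules    # parse without applying, where --test is supported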

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a Unix socket file. To start the fcgiwrap daemon, I run (this is a daemontools daemon):

      setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    The problem is that my web server runs as the user www-data, not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for the non-owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of Unix socket files, which is quite annoying. Moreover, if I manage to make the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

      Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

      rm -f fastcgi.sock # Ensure that the socket doesn't already exist
      (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
      exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks.
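
    A hedged alternative sketch: spawn-fcgi can create the Unix socket itself with an explicit owner, group and mode before executing fcgiwrap, which removes the sleep/chgrp race entirely. Run it as root from the daemontools run script and let it drop privileges with -u/-g (the fcgiwrap path is an assumption; adjust to where your distribution installs it):

      exec spawn-fcgi -n -s "$PWD/fastcgi.sock" -M 0660 -U git -G www-data -u git -g git -- /usr/sbin/fcgiwrap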

    Read the article

  • Magnetic Stripe Reader over Terminal Server has random Uppercase/Lowercase nonsense

    - by Peter Turner
    The magnetic stripe reader that I'm using and testing is just supposed to send keystrokes. Unfortunately, it seems to randomly send upper-case and lower-case keystrokes, sometimes substituting % for 5 and ^ for 6 and vice versa. (If you've ever programmed for a magnetic stripe reader, you know that's not a good thing.) Is there something in the RDP protocol that causes this? I've got kind of a convoluted system: XP running inside VirtualBox on Fedora 11, RDP'ed into a Win2k3 server. It works on the XP VM and it doesn't work on the RDP'ed one. What's weirder is that I'm not even emulating the USB drivers for my mag card reader: Linux is sending keystrokes straight into Windows, and MSTSC on Windows XP is sending garbage to the Win2k3 server. I'm 99% certain this isn't a problem with the card reader, and it has nothing to do with my programming either (I get the same junk coming into Notepad that I get coming into our software - that's why I didn't ask on SO). And it works with RDP programs other than MSTSC.exe! Needless to say, I'm in need of some RDP guruship.

    Read the article

  • VPN - Accessing computer outside of network. Only works one way

    - by Dan
    I could use some help here. My goal is to create a VPN for two Macs in different locations so that they can share each other's screens and files - basically what LogMeIn's Hamachi does, but without the five-user limitation. I have set up the VPN on the Synology NAS at my house using the PPTP protocol (I could also use OpenVPN). The good news is that I can use a laptop outside of my home network to access any computer on the network at my house. The bad news is that I cannot do the reverse: I want to use a computer inside my home network (the same network as the VPN server) to access a computer outside of my network which is connected via VPN successfully. My internal IP range is 192.168.1.xxx. The PPTP VPN assigns my laptop that is outside of my network an address of 192.168.5.xxx, but when I try to access it remotely with either afp://192.168.5.xxx or vnc://192.168.5.xxx, I can't connect with either. Is this something that I should be able to do, or is VPN only one way? I've also tried OpenVPN with the same results. Thanks for any help! -Dan
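
    A hedged diagnostic sketch from a Mac inside the home LAN: the usual culprit is that the LAN machines have no route back to the 192.168.5.x VPN pool, so their replies never reach the connected client. The NAS's LAN address and the client's VPN address below are assumptions:

      netstat -rn | grep 192.168.5                        # is there any route for the VPN pool at all?
      sudo route -n add -net 192.168.5.0/24 192.168.1.10  # temporary: send that pool via the Synology box
      ping 192.168.5.2                                    # then retry afp:// or vnc:// against the VPN client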

    Read the article

  • How does one make sure, or even guarantee, that server times are synced correctly between dozens of servers across multiple data centers in different locations?

    - by forestclown
    Currently our web applications contain logic to check whether the data sent to the web server has expired by comparing the timestamp of the data with the date/time of the server. Everything went well until some dude from a data center accidentally modified one of the web servers' date/time and caused disruptions in our web services. My managers are of course not happy with this, and said we shouldn't have used timestamps to check expiry in the first place... anyway... Network Time Protocol is implemented; because the data centers are spread across different continents, we have one NTP server in each data center. The servers within a data center have cron jobs to check the time against the NTP server in the same data center; if the time is out of sync, they automatically update the server date/time. But my managers are still not happy and think this could just as easily cause the same problem: e.g. what if someone accidentally modifies an NTP server's date/time? What if all the NTP servers are out of sync with each other? Which NTP servers can we really trust? And so on. So my questions are: What is current practice for syncing date/time between servers across multiple data centers or locations? How does one manage timestamps between web apps? (For example, Server A sends data containing Server A's timestamp to Server B, which compares that timestamp against its own clock to decide whether the data has expired - this is to avoid HTTP replay.) Should we really not use timestamp checks? Thanks and best regards.
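
    A hedged monitoring sketch that attacks the "which clock can we trust" worry from the operational side: have every server report the offset of the NTP peer it is actually syncing to and alert when it drifts past a threshold, so a fat-fingered clock change is caught within minutes instead of when requests start expiring (the 500 ms threshold is an arbitrary illustration):

      offset_ms=$(ntpq -pn | awk '/^\*/ {print $9}')      # offset, in ms, of the currently selected peer
      awk -v o="${offset_ms:-9999}" 'BEGIN {exit (o > -500 && o < 500) ? 0 : 1}' \
        || echo "clock drift ${offset_ms:-unknown} ms on $(hostname)" | mail -s "NTP drift" root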

    Read the article

  • gitweb- fatal: not a git repository

    - by Robert Mason
    So I have set up a simple server running Debian stable (squeeze), and have configured git. Using gitolite, I have all functionality (at least the basic clone/push/pull/commit) working. Installation of gitweb went without any issues. However, when I access gitweb, I get a gitweb screen without any repos listed.

      # tail -n 1 /var/log/apache2/error.log
      [DATE] [error] [client IP_ADDRESS] fatal: Not a git repository: '/var/lib/gitolite/repositories/testrepo.git'
      # cd /var/lib/gitolite/repositories/testrepo.git
      # ls
      branches config HEAD hooks info objects refs

    Here is what I see in /var/lib/gitolite/projects.list:

      testrepo.git

    And in /etc/gitweb.conf:

      # path to git projects (<project>.git)
      $projectroot = "/var/lib/gitolite/repositories";
      # directory to use for temp files
      $git_temp = "/tmp";
      # target of the home link on top of all pages
      #$home_link = $my_uri || "/";
      # html text to include at home page
      $home_text = "indextext.html";
      # file with project list; by default, simply scan the projectroot dir.
      $projects_list = "/var/lib/gitolite/projects.list";
      # stylesheet to use
      $stylesheet = "gitweb.css";
      # javascript code for gitweb
      $javascript = "gitweb.js";
      # logo to use
      $logo = "git-logo.png";
      # the 'favicon'
      $favicon = "git-favicon.png";

    What is missing?
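
    A hedged diagnostic sketch: since root can ls the repository but the Apache-run gitweb reports "Not a git repository", the first suspect is that the www-data user lacks read/traverse permission on the gitolite-owned tree. Quick checks, plus the usual gitolite-side remedy (the group name and the umask setting are assumptions for the Debian gitolite 2 packaging):

      sudo -u www-data ls /var/lib/gitolite/repositories/testrepo.git   # "Permission denied" would confirm it
      sudo -u www-data git --git-dir=/var/lib/gitolite/repositories/testrepo.git rev-parse --git-dir
      # remedy sketch: let the web server read what gitolite creates
      adduser www-data gitolite                            # put www-data in the gitolite group
      # then set $REPO_UMASK to 0027 in gitolite's .gitolite.rc so new repos stay group-readable,
      # and fix the existing ones once:
      chmod -R g+rX /var/lib/gitolite/repositories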

    Read the article

  • Possible DNS Injection and/or SSL hijack?

    - by Anthony
    So, if I go to my site without indicating the protocol, I'm taken to http://example.org/test.php. But if I go directly to https://example.org/test.php, I get a 404 back. If I go to just https://example.org, I get a totally different site (a page about martial arts). I went to the site via https not very long ago (maybe a week?) and it was fine. This is a shared server, as I understand it, and I do not have shell access, so I'm limited to the site's cPanel to do any further investigation. But when I go to example.org:2083, I'm taken to https://example.org:2083, which, if someone has taken over the SSL port, could mean they have taken over the 2083 part as well (at least in my paranoid mind). I'm made more nervous by the fact that the cPanel login page at the above address looks very new (better, really) compared to the last time I went to it over the weekend. It's possible that wires got crossed somewhere after a system update, but I don't want to put in my username and password in case it's a phishing attempt. Is there any way to know for sure, without shell access, whether someone has taken over? If I look up the IP address for the host name, it matches what I have on a phpinfo page I can get to over http. If I go to the IP address directly on port 2083, I get the same login page mentioned above (new and suspiciously nice), but the SSL cert shows as good when I go this route. So if that's the case (I know the IP is right, the cert checks out, and there isn't any DNS involved), is that enough to feel safe at that point of entry? Finally, if I can safely log in via the IP, does anyone have any advice on where to check first in cPanel for why the SSL port is forwarding to a site about karate? Thanks.
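
    A hedged verification sketch, runnable from any machine you trust (no shell access on the server needed): pull the certificate straight off ports 443 and 2083, compare the subject, issuer, validity dates and fingerprint with what the host is supposed to present, and resolve the name against a resolver you trust:

      openssl s_client -connect example.org:2083 -servername example.org </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates -fingerprint
      openssl s_client -connect example.org:443 -servername example.org </dev/null 2>/dev/null \
        | openssl x509 -noout -subject -issuer -dates
      dig +short example.org @8.8.8.8      # compare with the IP reported on your phpinfo page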

    Read the article

  • HTTPS and Certification for dummies

    - by Poxy
    I have never used https on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go:

    To use https I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html). The https protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration?

    Applying https. Q: If I see https in the browser, does that mean the data traffic on the page IS encrypted? Would any form on the page submit data via https?

    Though all the data would be encrypted, browsers would still show ugly red warnings. This is just because they do not know anything about my certificate: they have about a hundred certificates pre-installed, but mine is not one of them, obviously. The data IS nevertheless encrypted by https. If I want browsers to recognize my certificate, I would need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL, etc.).

    UPD: To read about SSL/TLS: The First Few Milliseconds of an HTTPS Connection - I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
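
    On the "is 443 set up, and where do I check" question, a hedged sketch for a Debian/Ubuntu-style Apache layout (the helper scripts and file locations differ on other distributions):

      a2enmod ssl                      # enable mod_ssl
      a2ensite default-ssl             # stock TLS vhost with SSLEngine on (Listen 443 comes from ports.conf)
      apachectl configtest && apachectl graceful
      ss -tlnp | grep ':443'           # confirm Apache is actually listening on 443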

    Read the article

  • Traffic shaping on Linux with HTB: weird results

    - by DADGAD
    I'm trying to have some simple bandwidth throttling set up on a Linux server and I'm running into what seems to be very weird stuff despite a seemingly trivial config. I want to shape traffic coming to a specific client IP (10.41.240.240) to a hard maximum of 75Kbit/s. Here's how I set up the shaping:

      # tc qdisc add dev eth1 root handle 1: htb default 1 r2q 1
      # tc class add dev eth1 parent 1: classid 1:1 htb rate 75Kbit
      # tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit
      # tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10

    To test, I start a file download over HTTP from the said client machine and measure the resulting speed by looking at Kb/s in Firefox. Now, the behaviour is rather puzzling: the DL starts at about 10Kbyte/s and proceeds to pick up speed until it stabilizes at about 75Kbytes/s (Kilobytes, not Kilobits as configured!). Then, if I start several parallel downloads of that very same file, each download stabilizes at about 45Kbytes/s; the combined speed of those downloads thus greatly exceeds the configured maximum. Here's what I get when probing tc for debug info:

      [root@kup-gw-02 /]# tc -s qdisc show dev eth1
      qdisc htb 1: r2q 1 default 1 direct_packets_stat 1
       Sent 17475717 bytes 1334 pkt (dropped 0, overlimits 2782 requeues 0)
       rate 0bit 0pps backlog 0b 12p requeues 0

      [root@kup-gw-02 /]# tc -s class show dev eth1
      class htb 1:1 root rate 75000bit ceil 75000bit burst 1608b cburst 1608b
       Sent 14369397 bytes 1124 pkt (dropped 0, overlimits 0 requeues 0)
       rate 577896bit 5pps backlog 0b 0p requeues 0
       lended: 1 borrowed: 0 giants: 1938
       tokens: -205561 ctokens: -205561
      class htb 1:10 parent 1:1 prio 0 **rate 75000bit ceil 75000bit** burst 1608b cburst 1608b
       Sent 14529077 bytes 1134 pkt (dropped 0, overlimits 0 requeues 0)
       **rate 589888bit** 5pps backlog 0b 11p requeues 0
       lended: 1123 borrowed: 0 giants: 1938
       tokens: -205561 ctokens: -205561

    What I can't for the life of me understand is this: how come I get "rate 589888bit 5pps" with a config of "rate 75000bit ceil 75000bit"? Why does the effective rate get so much higher than the configured rate? What am I doing wrong? Why is it behaving the way it is? Please help, I'm stumped. Thanks guys.
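
    A hedged sketch of a commonly suggested variant for very low HTB rates: set the per-class quantum explicitly (roughly one MTU) instead of deriving it from r2q, and treat the rate figures tc prints as short-interval estimates rather than proof that the limit is being broken. The values below are illustrative, not a verified fix for this exact setup:

      tc qdisc del dev eth1 root 2>/dev/null
      tc qdisc add dev eth1 root handle 1: htb default 1
      tc class add dev eth1 parent 1:  classid 1:1  htb rate 75kbit ceil 75kbit quantum 1500
      tc class add dev eth1 parent 1:1 classid 1:10 htb rate 75kbit ceil 75kbit quantum 1500
      tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.41.240.240 flowid 1:10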

    Read the article

  • How to grant read/write to a specific user in any existing or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie at system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply the rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose. Q: How should I set up my permissions to give john read/write access to anything under /path/to/git/myrepo and make it resilient to changes in the tree structure? Q2: If I should take a step back and change the general approach, please tell me.
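
    A hedged sketch of the usual answer to the "resilient to new directories" part: default ACL entries on a directory are inherited by anything created inside it later, which is exactly what plain recursive chmod/setfacl runs lack:

      setfacl -R    -m u:john:rwX /path/to/git/myrepo   # grant on everything that exists now
      setfacl -R -d -m u:john:rwX /path/to/git/myrepo   # default entry, inherited by future files and dirs
      getfacl /path/to/git/myrepo                       # verify: look for a default:user:john:rwx line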

    Read the article

  • VPN Client solution

    - by realtek
    I have several VPNs that I need to establish on a daily basis, but from multiple workstations. What I would like is to have either a server or a VPN router that can make the connection itself, so that I can route traffic through this device or server depending on the subnet I am trying to reach. The issue is that I only connect using VPN clients, so I am basically trying to achieve almost a site-to-site VPN, but using what is essentially a VPN-client-type connection from my network. The main VPN client I use is the SonicWall Global VPN Client, where I initially use a pre-shared key and then it always prompts me for a username and password (not an RSA key). My question is: is there any type of Linux distro, or even a hardware VPN router, that can do this and connect to a SonicWall device as if it were a client? I have tried pfSense, which is very good, but it fails to connect, probably due to a mismatch of settings. I have tried many others, even DD-WRT on my router, but it does not support whatever protocol SonicWall uses (I thought L2TP/IPsec, but it appears it may not be that). Any advice would be great! The other thing I have thought of, but have not tried yet, is Windows Server Routing and Remote Access, but I have a feeling that won't work either. Thanks.

    Read the article
