Search Results

Search found 6397 results on 256 pages for 'ssh agent'.

Page 193 of 256

  • Why are my login times taking so long in Linux?

    - by Jamie
    In recent weeks, logins on my Ubuntu server have started timing out, both through SSH and on the local command-line console. Examining /var/auth.log yields nothing interesting. How can I diagnose long login times on my Ubuntu server? I should also mention that no updates have been performed since the problem started, and that the /, /boot/ and /usr/ file systems are mounted read-only. [Edit] This is a standalone machine, so it doesn't authenticate against Active Directory, LDAP, etc. Also, the login prompt is responsive, as is the password prompt. After typing the password and pressing Enter, the login times out. After four or five tries I'm able to log in, although I'm worried this will start taking longer.
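
    Two common culprits for a delay that appears only after the password is accepted are reverse-DNS and GSSAPI lookups in sshd, with PAM hangs a close third; nothing below is specific to this box (user and host names are placeholders), it is just a starting point for narrowing down where the stall happens:
        # watch where an SSH login stalls (run from another machine)
        ssh -vvv jamie@server

        # follow the auth log live while a console login hangs
        sudo tail -f /var/log/auth.log

        # reverse-DNS and GSSAPI lookups are frequent causes of a post-password delay;
        # both can be disabled in /etc/ssh/sshd_config (remount / read-write first):
        #   UseDNS no
        #   GSSAPIAuthentication no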

    Read the article

  • Using 'Copy as cURL' from Chrome in windows command line

    - by user2029890
    So, Google Chrome has this great 'Copy as cURL' option under the 'Network' tab of the Chrome DevTools. It works great in a Linux command line but not in Windows. Apparently it has something to do with the single quotes, as the error I get is: protocol 'http not supported. In other words, it's reading that single quote. Is there a simple way to make this formattable for Windows? I tried replacing all the single quotes with double quotes, but then nothing happens at all. The command is:
        curl 'http://www.test.com/login/' -H 'Cookie: PHPSESSID=7dvb25maaaaaa9d7bbbbbc3f6' -H 'Origin: http://www.test.com' -H 'Accept-Encoding: gzip,deflate,sdch' -H 'Host: www.test.com' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36' -H 'Content-Type: application/x-www-form-urlencoded' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Cache-Control: max-age=0' -H 'Referer: http://www.test.com/login/' -H 'Connection: keep-alive' --data 'loc=&login=user%40test.com&password=password&submit1=Sign+In' --compressed
    Thank you
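
    cmd.exe does not treat single quotes as quoting characters at all, so the URL and each header value need double quotes instead (and in a .bat file every % would additionally need doubling to %%); a hedged rewrite of the start of the command above, with ^ as cmd's line continuation, looks like:
        curl "http://www.test.com/login/" -H "Cookie: PHPSESSID=7dvb25maaaaaa9d7bbbbbc3f6" ^
             -H "Origin: http://www.test.com" -H "Accept-Encoding: gzip,deflate,sdch" ^
             --data "loc=&login=user%40test.com&password=password&submit1=Sign+In" --compressed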

    Read the article

  • virtualbox and nginx server_name

    - by Ivan
    I'm trying to configure gitlab running in an Ubuntu 12.04 guest with a Windows 7 host. I can SSH into the guest using port forwarding and access the nginx server using port redirection (8888 on the host is 80 in the guest, so localhost:8888 on the host gets to the nginx server in the guest), but the server_name in the nginx configuration file is giving me trouble. What are the correct listen and server_name values that nginx would accept? The guest has the NAT interface at 10.0.2.15 and a Host-Only interface at 192.168.56.101, static. Thanks!
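
    Since every request reaches the guest through the forwarded port, the Host header the browser sends is localhost:8888, so a catch-all vhost is the simplest thing that can work; a minimal sketch, where the upstream address is an assumption about how gitlab is actually served:
        server {
            listen 80 default_server;
            # match whatever Host header arrives through the forwarded port
            server_name _;

            location / {
                # assumed upstream; point at the unicorn/passenger backend gitlab uses
                proxy_pass http://127.0.0.1:8080;
            }
        }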

    Read the article

  • OS X server VPN local ip

    - by gbrandt
    Hi all, I have a 10.6.2 server on the internet. I want to VPN into it to get access. I start the VPN and it gives me an address in the range I have set, 192.168.2.100-192.168.2.105. However, the server itself does not have a local IP of 192.168.2.x, so I cannot ping it or SSH into it or anything. The machine VPNing in gets an ifconfig entry that looks like this:
        ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
            inet 192.168.2.100 --> 70.72.xxx.xxx netmask 0xffffff00
    Where I think it should get:
        ppp0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1280
            inet 192.168.2.100 --> 192.168.2.1 netmask 0xffffff00
    I can't find anywhere to set the local VPN IP address. And I can't find a pptpd.conf file either. Any help is appreciated.
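
    On 10.6 Server there is no pptpd.conf; the L2TP/PPTP service is configured through serveradmin (or Server Admin.app), so the place to look for the server-side tunnel address is the vpn settings dump. The grep below is just a hedged way to spot the relevant keys rather than a known key name:
        # dump every setting the VPN service knows about
        sudo serveradmin settings vpn

        # narrow it down to the address-related keys (exact key names vary)
        sudo serveradmin settings vpn | grep -i addr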

    Read the article

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some behavior from apt-get/aptitude on an admittedly crusty old webserver that I couldn't explain. While it was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), thus requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine and I was able to upgrade. For now I'm not too concerned since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol (or port?) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should look for?
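
    A transfer that stalls partway through large HTTP downloads while small requests and FTP keep working is the classic signature of a path-MTU problem (often a NAT device dropping ICMP fragmentation-needed messages) rather than a dying NIC; a quick hedged check, assuming a 1500-byte MTU and eth0 as the outbound interface:
        # 1472 bytes of payload + 28 bytes of headers = 1500; lower -s until this gets replies
        ping -M do -s 1472 archive.ubuntu.com

        # if full-size packets are being dropped, clamp the MTU and retry the upgrade
        sudo ifconfig eth0 mtu 1400
        sudo apt-get update && sudo apt-get -y upgrade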

    Read the article

  • Solutions for exporting a remote desktop app (display and audio)

    - by Richard
    I'm looking for a solution that will allow me to export a desktop app running on a server to a client machine. The server is ideally Linux, the desktop is Windows (+Mac for icing on the cake). The export should be encrypted and I need to support multiple clients from one server. I only want to export an individual app, not a whole desktop, and ideally am looking for open source solutions. The obvious, cheapest, simplest choice is to use X tunnelled over ssh (e.g. using Xming on the desktop) but X doesn't support audio. What are the alternatives? Or is there a way to support audio using X or in parallel to X? Thanks
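
    X itself carries no audio, but sound can ride alongside ssh -X by tunnelling PulseAudio over the same connection; a minimal sketch, assuming PulseAudio on both ends, the default port 4713, and someapp as a placeholder for the exported program:
        # on the desktop (client): accept PulseAudio connections on localhost only
        pactl load-module module-native-protocol-tcp port=4713 auth-ip-acl=127.0.0.1

        # connect with X forwarding plus a reverse tunnel carrying the audio back
        ssh -X -R 4713:localhost:4713 user@server

        # on the server: point the app at the tunnelled sound server and launch it
        PULSE_SERVER=tcp:localhost:4713 someapp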

    Read the article

  • Alternatives to Citrix GoToAssist?

    - by Evan Carroll
    Citrix GoToAssist is a really nifty little web application for customer support that allows you to take control of someone's OS X or Windows machine. Essentially, it works like this: you log in to your management console and get a code; you give them the code and a website (fastsupport.com); they go there, enter the code, and accept the browser applet, which installs a program on their computer; you then have control of their desktop. You can see their desktop, configure applications, etc. They can also see when you disconnect. It is really rather nifty, but it doesn't support Linux and it is rather expensive ($660 a year). Does anyone know of any alternatives to this? I'm looking for a solution as simple for the user as this one, that doesn't require firewall configuration or setting up ssh/vnc/rdesktop etc.

    Read the article

  • Generate Windows .lnk file with PHP

    - by Andrei
    Hello, I'm working on a project which involves an FTP server running ProFTPd and a PHP/MySQL backend that creates accounts for users. Upon the creation of accounts, users are sent e-mails with their account details and instructions for downloading FileZilla or CyberDuck, depending on their OS, detected via the user-agent string. To make things easier for novices, I thought of having .lnk files generated for FileZilla with the account login details as parameters, so they would just have to click on the .lnk file to open up the server. This is not a crucial feature, but more of a technical challenge. My questions are: is this even feasible? Are there any alternatives (e.g. generating a .bat with a script pointing to the FileZilla executable)? Are there any issues, perhaps with relative/absolute paths pointing to the executable? To go even further, what would be the simplest way of providing users with software with FTP access for a single account / single server (a web interface is not an option)?
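
    On the .bat alternative mentioned above: FileZilla accepts an ftp:// URL as a command-line argument, so the generated file can be a one-liner; the install path below is an assumption (and shipping a password in a plain-text file is an obvious trade-off to weigh):
        @echo off
        rem generated per user; adjust the path for 32-/64-bit installs
        start "" "C:\Program Files\FileZilla FTP Client\filezilla.exe" "ftp://username:password@ftp.example.com"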

    Read the article

  • Why do I have no TTY on a basic Ubuntu 9.10 server install?

    - by pr1001
    I have reinstalled Ubuntu 9.10 Server several times on a bog standard 1RU server and each time I finish the install and reboot I see GRUB run and am then presented with a black screen. The machine is running just fine, as I am able to SSH in, but I can't see anything on the attached monitor. I have a simple LCD screen connected via VGA and a signal is apparently being output to it, as it doesn't go asleep. Looking at /var/log/syslog I see: Mar 24 14:57:44 bridge5 rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] However, I later see: Mar 24 14:57:44 bridge5 kernel: [ 0.001368] console [tty0] enabled Any thoughts? Thanks!
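
    Given that the kernel does log console [tty0] enabled, the usual suspect is the framebuffer mode the console switches into being one the attached LCD can't display. Ubuntu 9.10 uses GRUB 2, so a hedged experiment is to force a plain text console; the options below exist in GRUB 2, but whether they cure this particular monitor is the part to verify:
        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
        GRUB_TERMINAL=console

        # regenerate grub.cfg and reboot
        sudo update-grub
        sudo reboot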

    Read the article

  • Determine which version of linux/unix/darwin I have

    - by John
    I have root ssh/terminal access to a Linux server. How do I determine which version of CentOS I have? Some people suggested I run the command cat /etc/redhat-release but I got an error saying file not found. In fact, I'm not entirely sure I'm even using CentOS; that's just what some suggested it might be. Here's a list of commands I tried that gave me a "no file or directory" error:
        cat /etc/*release*
        cat /etc/*version*
        cat /proc/*version*
        cat /proc/*release*
    Here's a list of commands that do not exist:
        lsb_release: command not found
        wget: command not found
        yum: command not found
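
    A few more probes worth trying that don't depend on the release files already checked; whichever of the last two succeeds tells you which packaging family the box belongs to:
        uname -a                          # kernel version and architecture
        cat /etc/issue                    # login banner, usually names the distro
        rpm -q centos-release 2>/dev/null # present only on CentOS / RPM-based systems
        dpkg -l base-files 2>/dev/null    # present only on Debian/Ubuntu-based systems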

    Read the article

  • CentOS 6 - iptables preventing web access via port 80

    - by bsod99
    I'm setting up a new web server with CentOS 6.2 and am not able to connect via the web. Everything looks set up correctly in httpd.conf and Apache is running, so I'm assuming it's an iptables issue. Is there anything in the following which could be causing the issue?
        # iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     all  --  anywhere      anywhere      state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere      anywhere
        ACCEPT     all  --  anywhere      anywhere
        ACCEPT     tcp  --  anywhere      anywhere      state NEW tcp dpt:ssh
        REJECT     all  --  anywhere      anywhere      reject-with icmp-host-prohibited
        ACCEPT     tcp  --  anywhere      anywhere      tcp dpt:http

        Chain FORWARD (policy ACCEPT)
        target     prot opt source        destination
        REJECT     all  --  anywhere      anywhere      reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
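
    For what it's worth, iptables stops at the first matching rule, and the dpt:http ACCEPT sits below the catch-all REJECT in the INPUT chain, so web traffic never reaches it; a hedged fix on CentOS 6 is to move that rule above the REJECT (position 5 in the listing above) and save:
        sudo iptables -D INPUT 6    # remove the http rule that sits below the REJECT
        sudo iptables -I INPUT 5 -p tcp -m state --state NEW --dport 80 -j ACCEPT
        sudo service iptables save  # persist across reboots on CentOS 6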

    Read the article

  • apache url / filename with special characters

    - by Mario Delgado
    I have this url: http://domain.com/wp-content/uploads/2012/10/Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png If I ftp/ssh or just browse to that folder (apache index feature), I see the file Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png If I click on the link from the apache index, I can see the file, however, if I copy the URL and try to browse to it directly, I get the error: The requested URL /wp-content/uploads/2012/10/Hvilke-vilkÃ¥r-følger-med-nÃ¥r-du-bestiller-nyt-bredbÃ¥nd.png was not found on this server. Also my error log says: File does not exist: /wp-content/uploads/2012/10/Hvilke-vilk\xc3\xa5r-f\xc3\xb8lger-med-n\xc3\xa5r-du-bestiller-nyt-bredb\xc3\xa5nd.png
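
    The usual cause of this pattern (the link works when Apache generates it, the pasted URL 404s) is the pasted URL being sent Latin-1 encoded while the filename on disk is UTF-8; requesting the UTF-8 percent-encoded form is a quick hedged way to confirm it (å is %C3%A5 and ø is %C3%B8 in UTF-8):
        curl -I 'http://domain.com/wp-content/uploads/2012/10/Hvilke-vilk%C3%A5r-f%C3%B8lger-med-n%C3%A5r-du-bestiller-nyt-bredb%C3%A5nd.png'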

    Read the article

  • Xterm is not completely erasing field lines

    - by user26367
    We have a SSH tunnel to a remote unix box from Windows clients using Cygwin. It launches a terminal program from the unix box locally on the Windows box for data input. The xterm window is launched as follows:
        xterm -fn 10x20 -bg DodgerBlue4 -fg white -cr white -ls -geometry 90x30 -e program
    When a screen goes from read only mode to edit mode, the edit fields have ____. When going back to read only mode, a single pixel artifact is left behind for each field.
        *readonly*
        User:
        *edit*
        User: ___________
        *after edit exit*
        User: .          <- this dot is left behind
    Any idea what we need to change to fix this?

    Read the article

  • scp -q isn't quiet between different hosts

    - by pythonic metaphor
    So scp -q file host:file and scp -q host:file file are both quiet, i.e. they don't show the progress meter. But when I run scp -q host1:file host2:file, I still get the progress meter as well as a "Connection to host1 closed." message. The progress meter can be gotten rid of by redirecting stdout to /dev/null (although I'd rather not have to), but the connection-closed message comes on stderr, which I definitely want to keep in case there's a real error. How can I make scp quiet? Do I have to run ssh host1 "scp -q file host2:file"?
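
    If the local OpenSSH is 5.7 or newer, scp's -3 flag routes the copy through the local machine, where -q behaves normally; on older versions the ssh-wrapper form at the end of the question is the usual workaround:
        scp -3 -q host1:file host2:file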

    Read the article

  • Verizon Fivespot firewall

    - by Patrick
    I have a Verizon Fivespot Wi-Fi router and am having issues connecting to the computer that uses it to get on the internet. I am able to connect to the Fivespot admin pages remotely, and I am able to connect to the internet from the computer behind the Fivespot. There are two sections pertinent to this issue: Port Filtering and Port Forwarding. I've tried each individually and both together, but I cannot access anything through the router except the admin page. I am trying to connect through SSH to an Ubuntu 10.04 box over wifi. I have called Verizon tech support but they were unhelpful; the person essentially read what it says on each screen without any elaboration. Any help is greatly appreciated!

    Read the article

  • apache httpd cannot browse through browser

    - by nuttynibbles
    I've set up Apache and PHP on a virtual machine. Everything works fine inside the virtual machine: I'm able to execute PHP files and run phpMyAdmin connecting to MySQL. From my host machine I'm able to ping and SSH into the remote machine. However, I'm unable to browse the PHP files in the host's browser using the IP address. In my httpd.conf I'm listening on port 80, and I enabled ServerName 192.168.75.102:80. Am I missing some settings? Port settings maybe?
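
    Since ping and SSH already reach the guest, the likely suspects are Apache bound only to localhost or a guest firewall dropping port 80; both are quick hedged checks from inside the VM:
        # is httpd listening on 0.0.0.0:80, or only on 127.0.0.1:80?
        sudo netstat -tlnp | grep ':80'

        # is a firewall rule filtering inbound port 80?
        sudo iptables -L -n | grep -w 80

        # if so, allow it (how to persist the rule depends on the guest distro)
        sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT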

    Read the article

  • using git on DOS command line asks for password - but not when using TortoiseGit or gitBash

    - by Sandy
    I would like to use the DOS command line to enter the command: git clone "git_path.git" myDir It asks me to enter a password, which I would like to avoid. I usually use TortoiseGit for all git-related operations. I would like to set up CruiseControl using Ant with a custom git task, so I need to perform git clone on the command line in Windows 7. But it only works using Git Bash, not DOS. Following other forum entries, I tried converting the key with PuTTYgen and put the file id_rsa in c:/Users/myName/.ssh. I also added an authorized_keys file, but it still asks for a password. Any ideas? Thanks
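
    Git Bash works because it sets HOME, so the bundled ssh finds the key; a plain cmd.exe session often has no HOME defined and never looks in C:\Users\myName\.ssh at all. Two hedged things to try in the same cmd window before cloning (the plink path is an assumption, and only applies if the key is a PuTTY .ppk loaded in Pageant):
        rem make the bundled ssh look in %USERPROFILE%\.ssh
        set HOME=%USERPROFILE%

        rem or, for a PuTTY key served by Pageant, point git at plink instead
        rem (a path without spaces avoids quoting problems)
        set GIT_SSH=C:\PuTTY\plink.exe

        git clone "git_path.git" myDir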

    Read the article

  • Intermittent apt-get 'no installation candidate' error on fabric deploy

    - by jberryman
    I'm experiencing a strange issue with a fabric script I'm using to bootstrap a server on EC2. I launch a stock Ubuntu 12.04 AMI, wait for it to start, then proceed with:
        with settings(host_string="ubuntu@%s" % i.dns_name, connection_attempts=30):
            sudo('apt-get -qy update')
            sudo('apt-get -qy install --no-install-recommends mdadm')  # don't install postfix
            # etc...
    The apt-get update appears to run fine and gives no errors; however (2/3 of the time or so) installing mdadm throws a "no installation candidate" error. When I ssh into the server and run apt-get install mdadm I get the same error. After running apt-get update by hand, the package installs fine. Any ideas on what might be happening, or ideas for debugging?
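
    On a freshly booted stock AMI, cloud-init can still be rewriting sources and running its own apt-get update when fabric's commands fire, which can leave the package lists half-populated; a blunt but hedged workaround is to retry the update/install pair until it sticks (the same loop can be wrapped in a single sudo() call from fabric):
        for i in 1 2 3 4 5; do
            sudo apt-get -qy update && \
            sudo apt-get -qy install --no-install-recommends mdadm && break
            sleep 15
        done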

    Read the article

  • file copy error from system to cifs mount

    - by dwpriest
    When copying a file greater than 64kB from an Ubuntu server to a CIFS-mounted Windows share, most of the data is copied, but it seems the last chunk doesn't get copied: the sizes don't match and the md5 checksums don't match. I have plenty of file space, but when I use cp, I get the following:
        cp: closing `cloudBackup/asdf.txt': No space left on device
    Using rsync, I get the following:
        rsync: close failed on "/home/fluffy/cloudBackup/.asdf.txt.qrBWe6": No space left on device (28)
        rsync error: error in file IO (code 11) at receiver.c(752) [receiver=3.0.8]
        rsync: connection unexpectedly closed (29 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.8]
    I have full read/write permissions on the mounted share. I can copy via SSH just fine. Any ideas? Thank you
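
    Since the same file copies fine over SSH, the share itself has room, which points at how the CIFS client negotiates writes (or at a per-user quota enforced on the Windows side); a hedged experiment is to remount with explicit, smaller write sizes plus nounix and retry the copy (server, share and user below are placeholders):
        sudo umount /home/fluffy/cloudBackup
        sudo mount -t cifs //winserver/backup /home/fluffy/cloudBackup \
             -o username=fluffy,wsize=65536,rsize=65536,nounix
        cp asdf.txt /home/fluffy/cloudBackup/ && md5sum asdf.txt /home/fluffy/cloudBackup/asdf.txt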

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try to run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but I haven't confirmed it. HMA uses OpenVPN like this:
        sudo openvpn client.cfg
    The client configuration file (client.cfg) looks like this:
        ##############################################
        # Sample client-side OpenVPN 2.0 config file #
        # for connecting to multi-client server.     #
        #                                            #
        # This configuration can be used by multiple #
        # clients, however each client should have   #
        # its own cert and key files.                #
        #                                            #
        # On Windows, you might want to rename this  #
        # file so it has a .ovpn extension           #
        ##############################################

        # Specify that we are a client and that we
        # will be pulling certain config file directives
        # from the server.
        client
        auth-user-pass
        #management-query-passwords
        #management-hold
        # Disable management port for debugging port issues
        #management 127.0.0.1 13010
        ping 5
        ping-exit 30

        # Use the same setting as you are using on
        # the server.
        # On most systems, the VPN will not function
        # unless you partially or fully disable
        # the firewall for the TUN/TAP interface.
        #;dev tap
        dev tun

        # Windows needs the TAP-Win32 adapter name
        # from the Network Connections panel
        # if you have more than one. On XP SP2,
        # you may need to disable the firewall
        # for the TAP adapter.
        ;dev-node MyTap

        # Are we connecting to a TCP or
        # UDP server? Use the same setting as
        # on the server.
        proto tcp
        ;proto udp

        # The hostname/IP and port of the server.
        # You can have multiple remote entries
        # to load balance between the servers.
        # All VPN Servers are added at the very end
        ;remote my-server-2 1194

        # Choose a random host from the remote
        # list for load-balancing. Otherwise
        # try hosts in the order specified.
        # We order the hosts according to number of connections.
        # So no need to randomize the list
        # remote-random

        # Keep trying indefinitely to resolve the
        # host name of the OpenVPN server. Very useful
        # on machines which are not permanently connected
        # to the internet such as laptops.
        resolv-retry infinite

        # Most clients don't need to bind to
        # a specific local port number.
        nobind

        # Downgrade privileges after initialization (non-Windows only)
        ;user nobody
        ;group nobody

        # Try to preserve some state across restarts.
        persist-key
        persist-tun

        # If you are connecting through an
        # HTTP proxy to reach the actual OpenVPN
        # server, put the proxy server/IP and
        # port number here. See the man page
        # if your proxy server requires
        # authentication.
        ;http-proxy-retry # retry on connection failures
        ;http-proxy [proxy server] [proxy port #]

        # Wireless networks often produce a lot
        # of duplicate packets. Set this flag
        # to silence duplicate packet warnings.
        ;mute-replay-warnings

        # SSL/TLS parms.
        # See the server config file for more
        # description. It's best to use
        # a separate .crt/.key file pair
        # for each client. A single ca
        # file can be used for all clients.
        ca ./keys/ca.crt
        cert ./keys/hmauser.crt
        key ./keys/hmauser.key

        # Verify server certificate by checking
        # that the certicate has the nsCertType
        # field set to "server". This is an
        # important precaution to protect against
        # a potential attack discussed here:
        # http://openvpn.net/howto.html#mitm
        #
        # To use this feature, you will need to generate
        # your server certificates with the nsCertType
        # field set to "server". The build-key-server
        # script in the easy-rsa folder will do this.
        ;ns-cert-type server

        # If a tls-auth key is used on the server
        # then every client must also have the key.
        ;tls-auth ta.key 1

        # Select a cryptographic cipher.
        # If the cipher option is used on the server
        # then you must also specify it here.
        ;cipher x

        # Enable compression on the VPN link.
        # Don't enable this unless it is also
        # enabled in the server config file.
        #comp-lzo

        # Set log file verbosity.
        verb 3

        # Silence repeating messages
        ;mute 20

        # Detect proxy automatically
        #auto-proxy

        # Need this for Vista connection issue
        route-metric 1

        # Get rid of the cached password warning
        #auth-nocache
        #show-net-up
        #dhcp-renew
        #dhcp-release
        #route-delay 0 120

        # added to prevent MITM attack
        ns-cert-type server

        #
        # Remote servers added dynamically by the master server
        # DO NOT CHANGE below this line
        #
        remote-random
        remote 173.242.116.200 443 # 0
        remote 38.121.77.74 443 # 0
        # etc...
        remote 67.23.177.5 443 # 0
        remote 46.19.136.130 443 # 0
        remote 173.254.207.2 443 # 0
        # END
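
    The lockout happens because the routes the HMA server pushes replace the default route, so SSH replies (and everything else) head into the tunnel. One hedged approach is to stop OpenVPN from installing those routes and then steer only ports 80 and 443 into the tunnel with policy routing; the mark value, table number and tun0 are assumptions to adjust, and the commands need root:
        # add to client.cfg: connect, but ignore routes pushed by the server
        route-nopull

        # mark outbound web traffic and route marked packets via the tunnel
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        ip rule add fwmark 1 table 100
        ip route add default dev tun0 table 100
        ip route flush cache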

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about scaling servers and updating the existing code without building a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3, and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, ssh into the web app and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or anything as to why my proposed solution is a bad idea? Thanks all
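
    A middle ground between baking a new AMI per release and an hourly cron is to have every instance pull the latest code once at boot (via user-data or rc.local) and reuse the same script as the push hook; a minimal sketch, with the repo path, branch and asset handling as assumptions:
        #!/bin/sh
        # run at instance boot (user-data) or from the deploy hook
        cd /var/www/webapp || exit 1
        git fetch origin
        git reset --hard origin/master
        # assets outside the repo (e.g. uploaded blog images) still need their own sync,
        # for instance from an S3 bucket the application writes to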

    Read the article

  • How to elegantly selectively exclude FreeBSD network traffic from OpenVPN interface by port

    - by Polygonica
    Inexperienced sysadmin here. I'm planning on running a net daemon inside a FreeBSD jail through OpenVPN, but I want to be able to SSH directly into the jail and use the daemon's web interface without going through the VPN. As I understand it, an OpenVPN tunnel is normally set up as the default virtual internet interface, so replies to incoming traffic will go back out on the OpenVPN interface by default (which is problematic, as this incurs latency). I thought "well, obviously, since all of this traffic is leaving on a handful of ports, I'll just redirect those to the non-VPN gateway." I've tried to look for solutions, but almost all of them involve iptables instead of ipfw (which is the default for FreeBSD) and solve slightly different problems. And alternative solutions, like using multiple default routes to ensure that incoming traffic on any interface is always answered on the same interface, seem far-reaching and require deep knowledge of all the tools involved. Is there an elegant way of ensuring that traffic leaving from specific ports exits on a specified non-default interface using ipfw?
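
    ipfw can do exactly this with its fwd action: packets leaving from the chosen local ports are handed to the real upstream gateway instead of following the default route into the tunnel. A hedged one-rule sketch (the gateway address and the web-interface port are placeholders, and older kernels need IPFIREWALL_FORWARD compiled in for fwd to take effect):
        # replies from sshd (22) and the daemon's web UI (8080 assumed) bypass the VPN route
        ipfw add 100 fwd 192.168.1.1 tcp from me 22,8080 to any out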

    Read the article

  • Config deployment on multiple servers.

    - by user66601
    I have multiple servers in a web cluster (identical configuration on all of them, apart from the IP). How do you deploy config changes to multiple servers? I make the new config, then create a config for every server (placing the correct IP in it), and next: upload them to every server, replacing the old ones (rsync over ssh); set a job on every server which reloads the webserver at the same time (the servers use ntp). This is done by a script issuing the commands (to save the time spent logging in to each box); before the reload job is added there's a checksum test of the config on the server, and a notification in case of failure. How do you see such a method? What would be the "professional" way? :) (I don't say my way doesn't work... it works and saves the time I'd otherwise spend logging in to every webserver.) Regards,
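
    For comparison, the same flow fits in a small loop that tests each uploaded config remotely before scheduling the synchronized reload; the host names, paths and nginx-style config test are assumptions (parallel-ssh or a config-management tool such as Puppet ends up doing the same thing with less scripting):
        for h in web01 web02 web03; do
            rsync -az "nginx.conf.$h" "$h:/etc/nginx/nginx.conf"
            # only schedule the reload if the new config parses
            ssh "$h" 'nginx -t && echo "nginx -s reload" | at 03:30'
        done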

    Read the article

  • something like persistent X forwarding?

    - by Arthur Ulfeldt
    I'm having trouble with the title on this one, please edit. When users connect to a VM with VNC/NX/RDP/other-TLA they get a persistent desktop in a window. When they connect using ssh -X forwarding they get a local window, managed by the local window manager, that is not persistent. 1: Is there a way to run a program on the VM and have it managed locally AND have it persistent? 2: Can the client be on Windows or OS X? PS: in this case the VMs are running Ubuntu
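
    xpra is the usual answer to exactly this: it behaves like "screen for X", keeps the single application alive on the VM between connections, and ships clients for Windows and OS X; a minimal hedged sketch, with the display number and application as placeholders:
        # on the Ubuntu VM: start a persistent virtual display running one application
        xpra start :100 --start-child=someapp

        # on the client (Linux, Windows or OS X builds): attach over SSH, detach at will
        xpra attach ssh:user@vm-host:100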

    Read the article

  • What is a good php 5.3.x shared hosting company?

    - by Abba Bryant
    I am looking for the best shared host - features-wise, not price - for hosting CakePHP and Lithium applications. I would like to be able to use MongoDB / MySQL as well as have access to some of the more common PHP extensions like MCrypt, etc. I currently use dreamhost with a custom PHP 5.3.x build on my sandbox domain - Please do not suggest this as a solution. I want to move away from managing my own PHP build if possible. I need ssh access but email support isn't as big of an issue.

    Read the article
