Search Results

Search found 17749 results on 710 pages for 'connection pool'.

  • trap "" HUP v.s Nohup ? How can I run a portion of shell script in nohub mode?

    - by Alex
    I want to run a shell script over the weekend, but I want to make sure that if the terminal loses its connection, my script won't be terminated. I use nohup for the whole script invocation, but I also want to execute some portion of my shell script in such a way that, if someone closes my terminal, the script still runs in the background. Here is a simple example:

        #!/bin/bash
        echo "Start the trap"
        trap " " HUP
        echo "Sleeping for 60 seconds"
        sleep 60
        echo "I just woke up!"

    Please suggest what I should do. The trap " " HUP does not seem to work when I close my terminal tab.
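
    A minimal sketch of the distinction involved, assuming the goal is simply to survive the terminal closing (the script contents are placeholders):

        #!/bin/bash
        # trap "" HUP (empty string) makes this shell ignore SIGHUP;
        # trap " " HUP instead installs a do-nothing handler. Either way,
        # a foreground child such as `sleep 60` can still receive SIGHUP
        # directly when the terminal goes away, so it may die regardless.
        trap "" HUP

        # Protecting just one fragile portion means running it under nohup,
        # detached, with output going to a log file instead of the terminal:
        nohup bash -c 'sleep 60; echo "I just woke up!"' > part.log 2>&1 &
        wait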

  • Windows 2008 Routing and Remote Access server - access to the internet

    - by Ian
    I have a Windows 2008 R2 remote access server set up and running. The remote access works fine. My problem is that the remote access server itself doesn't have access to the internet. The box has two interfaces, an internal and an external. Inbound connections come in on the external interface and RRAS responds, all well and good. I want to be able to use Windows Update, browse, etc. from this box, but I can't, as the outbound traffic just gets blocked. I've tried going into the RRAS MMC tool and opening the interface properties, under which there are two buttons for inbound and outbound filters. There I tried adding ports 80 and 443, but this doesn't work completely: I can see the connection initiating (the SYN goes out) but the session never establishes itself. Has anyone done this or got any suggestions?

  • NoMachine 4 for X forwarding

    - by Yair
    I have been using the NoMachine NX client to connect from my Mac to an Ubuntu server for a while now, and it has been a great experience. The most useful feature for me was the option to open just one application on the remote machine, instead of a full remote desktop connection. I used it to open a terminal on the remote machine. Basically it was a much faster, much better replacement for ssh -X. All was great until I upgraded to the new version, NoMachine 4. In this version I cannot find that option. I have to run a full remote desktop session, which slows things down and is also much less convenient for my work. Was this option removed from the client? Or is it hiding somewhere in there and I just can't find it?
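
    In the meantime, a fallback that approximates the old single-application mode is the plain X forwarding the question mentions as its slower baseline; a quick sketch (the host name and application are placeholders):

        # -X enables X11 forwarding; -C compresses the stream,
        # which helps noticeably on slower links:
        ssh -XC user@ubuntu-server x-terminal-emulator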

  • Accessing localhost via IIS 7.5 on Windows 7 very slow

    - by Ian Devlin
    (I've asked this over on Stack Overflow already, but thought I'd ask here as well.) I'm currently running an ASP.NET application on IIS 7.5 on Windows 7. When I access this application in Internet Explorer (6, 7 or 8) it is incredibly slow and often fails to load at all. There are messages at the bottom constantly saying "Waiting for http://localhost/..." or sometimes "waiting for about:blank" (I've read that this can be a virus, but I've run all the usual checks and it's not), and then it returns the usual "Internet Explorer cannot display the webpage". I've also tried using 127.0.0.1 and the machine name, with the same results. I've tried the same application in the latest Firefox, Safari, Chrome and Opera and they all work fine. I've also installed the same application on a Windows Server 2003 machine, and there it all works fine via Internet Explorer. I've also turned off the IPv6 setting on the LAN connection. Does anyone have any ideas why this doesn't work with Internet Explorer and yet does with other browsers?
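
    One workaround often suggested for exactly this symptom (fast in other browsers, slow on localhost in IE on Windows 7) is to force localhost to resolve to IPv4 in the hosts file rather than relying on the IPv6 ::1 entry; a hedged sketch:

        # C:\Windows\System32\drivers\etc\hosts
        # make the IPv4 mapping explicit and comment out the IPv6 one:
        127.0.0.1    localhost
        # ::1        localhost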

  • Can't complete Dropbox installation from behind a proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: my PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives so far (where ** is my password):
    Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    Added http_host and http_port variables under gconf-editor, system, proxy.
    Added 'host', 'authentication_password' and 'authentication_user', and ticked 'user authentication' and 'use_http_proxy', under gconf-editor, system, http_proxy.
    Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which I imagine is what lets Software Centre retrieve packages).
    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and Software Centre can download packages, but that's it. Related issues: the Software Centre can't fetch reviews (but can download packages), and when trying to add an online account in GNOME 3 a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Update: after some time (10 minutes or so) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
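
    To answer the closing question directly, the environment can be inspected from a shell; a quick sketch:

        # list every environment variable in the current shell:
        printenv
        # or just the proxy-related ones, case-insensitively:
        env | grep -i proxy

    Note that variables exported from /etc/bash.bashrc only reach programs started from a bash shell, not applications launched from the desktop session, which may be why the daemon never sees http_proxy.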

  • VirtualBox FTP hangs on LIST command

    - by Tiddo
    Hi all, I have VirtualBox installed on a Windows 7 64-bit computer, with CentOS 5.5 as the guest OS, and I want to be able to use FTP between them. I've installed vsftpd on the guest, which uses a NAT connection through the host for internet access. So far I am able to connect to the guest via FTP (in FileZilla), but after the LIST command is executed nothing happens until the command times out. This happens in both active and passive mode. I do have pasv_min_port/pasv_max_port set in the vsftpd.conf file, listing is enabled, and the ports are redirected in VirtualBox. Also, ftp_data_port is set to 20. I tried setting pasv_address as well, but I had to set it to 127.0.0.1, and then FileZilla gives me this:

        Command:  PASV
        Response: 500 OOPS: bad family
        Command:  PORT 127,0,0,1,139,204
        Response: 500 OOPS: child died

    Can someone help me with this?
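
    For reference, a hedged sketch of the passive-mode settings that usually matter behind NAT (the port range is an example; every port in it must also be forwarded in the VirtualBox NAT settings, and pasv_address, if set at all, must be an address the client actually connects to rather than 127.0.0.1):

        # /etc/vsftpd/vsftpd.conf
        pasv_enable=YES
        pasv_min_port=12000
        pasv_max_port=12010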

  • Manually accessing GMail via IMAP

    - by Jeff Mc
    I'm trying to connect to Gmail IMAP, but I am unable to execute any commands after login. I'm running openssl s_client -connect imap.gmail.com:993 to connect, then:

        * OK Gimap ready for requests from 128.146.221.118 42if6514983iwn.40
        . CAPABILITY
        * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA XLIST CHILDREN XYZZY SASL-IR AUTH=XOAUTH
        . OK Thats all she wrote! 42if6514983iwn.40
        . LOGIN {email removed} {password removed}
        * CAPABILITY IMAP4rev1 UNSELECT LITERAL+ IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE
        . OK {email removed} authenticated (Success)
        . CAPABILITY

    at which point it simply hangs with the connection open. I'm guessing Gmail pushes you off to a node in a cluster after it authenticates me?
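
    One detail worth checking in a manual session like this: IMAP commands must end in CRLF, and s_client only translates the terminal's line feeds if started with its -crlf flag. A hedged sketch of a session with that flag and a unique tag per command (the tags a1..a3 are arbitrary):

        openssl s_client -crlf -connect imap.gmail.com:993
        a1 LOGIN user@gmail.com password
        a2 SELECT INBOX
        a3 LOGOUT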

  • PSAD firewall / UDP flood?

    - by Asad Moeen
    I'm trying to block a UDP flood on my application's port, because the string "getstatus" causes my application to produce a large reply to the attacker's IP in response to a small request. I installed the PSAD firewall to do the job. psad -S shows 3000,000 logged packets on the application port, and the top ports in the scan, but it does not block the attacker's IP, while other IP addresses with a small number of connections are dropped. I'm thinking that since output is also being sent back to the attacker, that is why the IP isn't getting blocked; iptables rate-limiting does exactly the same thing and fails to block an IP to which an outgoing connection is also made. Any guesses why it won't work?
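
    Independent of psad, the flood can be dropped at the packet level with iptables' string match, before the application ever sees the probe; a hedged sketch (27960 is a placeholder for the application port):

        # drop inbound UDP packets carrying the "getstatus" probe:
        iptables -A INPUT -p udp --dport 27960 \
            -m string --string "getstatus" --algo bm -j DROP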

  • Internet Dropping?!

    - by stead1984
    I have a virtual DC running DNS and Routing and Remote Access that routes all workstations' internet traffic out to the internet. This works fine, but I've noticed that the internet drops occasionally. I've checked with our service provider (Managed Communications) and they are adamant that it's not their fault. The drops seem to affect everyone. We also have a server configured to use the same internet service on a different network, over a site-to-site VPN connection, which also suffers from packet drops. I've spoken to Cisco and done many tests with them, and they believe the problem is down to the ISP. I'm wondering if it's a DNS issue, as the internet service uses OpenDNS. Any ideas?
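
    One way to tell a DNS problem from a line problem is to watch both at once during an outage; a rough sketch from any affected workstation (208.67.222.222 is OpenDNS's published resolver; the hostname is arbitrary):

        ping -t 208.67.222.222
        nslookup www.example.com 208.67.222.222

    If the ping to the raw address keeps answering while the lookups fail, the drops are on the DNS side rather than the connection itself.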

  • How do I set up an SFTP user to log in with a password to an EC2 Ubuntu server?

    - by Doron
    Hello, I have an Ubuntu Server running on an EC2 instance. To log in to that server I use a certificate file without any password. I've installed and configured vsftpd and created a user (let's call him "testuser") with a /bin/false shell, so that he can only connect via SFTP and upload/access files in his home directory. However, when I try to connect to the server from my computer by running sftp testuser@my-ec2-server, I get "Permission denied (publickey). Connection closed" messages, so I can't log in. How can I remove the certificate requirement for this user only (meaning the "ubuntu" user will still have to use the certificate file to log in via ssh), so that normal SFTP clients can connect using a username and a password? Thank you. P.S. Using the official Ubuntu Server 10.10 AMI from Canonical, 64-bit, on a micro instance.
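
    Worth noting that SFTP is served by sshd, not vsftpd, so the relevant switch lives in sshd's configuration. A hedged sketch of the usual per-user arrangement:

        # /etc/ssh/sshd_config
        PasswordAuthentication no          # keep key-only logins as the default
        Match User testuser
            PasswordAuthentication yes     # allow a password for this user only

        # then set a password and restart sshd:
        sudo passwd testuser
        sudo service ssh restart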

  • JUnit testing in a multithreaded application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of an application with JUnit is not easy, and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task is not as simple as testing the class; it is testing it under the endless situations threading can produce. To be more precise, let me describe the design of one of our applications: when a user makes a request, several threads are started, each of which analyses a part of the data. These threads run for a time that depends on the size of the chunk of data to analyse (the chunks are endless and of uncertain quality), or they may fail if the data was insufficient or of poor quality. After each completes its analysis, it calls a handler which decides, as each thread terminates, whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs efficiently and fast, it's not easy to test under real-world conditions. What we do right now is test each class under its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not good for quality assurance. Given this example, how would you handle testing the threading, interlocking, and synchronization?

  • Adding a second IP address for IIS - static vs dynamic A records

    - by serialhobbyist
    I'm looking to add a second IP address to IIS so that I can run two sites with different SSL certificates. When I added one on my play box and ran ipconfig /registerdns, both addresses were registered in DNS against the server's name. So I deleted the A record for the new IP address and rebooted; that also registered both names. Then I went into the network config for the adapter and, on the DNS tab, unchecked "Register this connection's addresses in DNS". I deleted the A record for the new IP address again and re-ran ipconfig /registerdns. This time it deleted the A record for the old IP address and didn't create one for the new address. Neither of these is what I want: I want the main IP address to be registered and refreshed automatically as a dynamic DNS record, and the second IP address to be registered and managed as a static record. Is there any way to achieve this?
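
    For the static half, the record can be created once by hand on the DNS server rather than via registration; a hedged sketch with placeholder server, zone, host name, and address:

        dnscmd dnsserver1 /RecordAdd example.local web2 A 192.0.2.10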

  • Why is my iPad's wireless so flaky?

    - by Mark
    I'm the proud owner of a new iPad here in the UK. All is good, except for the wifi, which is a bit flaky. It connects fine to my DrayTek router, which is set for WPA/WPA2 and 56g only, displaying full signal strength. Then, after a few minutes, it goes down to minimum strength... and sometimes it goes back up again. A few times it seems to lose the connection completely and needs to be turned off and on again. I've looked at the Apple support site and tried their recommendations (which are not really very relevant), but still nothing. I've tried setting the router to WPA2 only and setting a long preamble. Right now I want to know whether it's a hardware problem with my device, which should be returned, or a problem with all iPads that will be resolved. I guess I could take it to the Mac genius bar, but I find those guys so incredibly pretentious and, frankly, rather useless, that I'd rather wait until I've exercised other options!

  • How to connect a public web server to an internal LAN

    - by DefSol
    I have a VPS which is my public web server for all my clients. It's running Server 2008, and I would like it to connect via a secure connection to my internal LAN. I would like this to be a route so that access is bidirectional. I have read about server and domain isolation, but am concerned this may prevent public views of the web sites on the server. I currently have a PPTP tunnel, but I want better security (IPsec or SSL, etc.) and it's not giving me bidirectional access (in fact my backups aren't copying across, but this could be an ACL issue). The goal is to provide easy, automated backups of data and SQL databases to my internal LAN, as well as a means to provision new sites and databases from a workflow occurring internally. The internal LAN is Windows-based with ISA 2006 at the perimeter. Thanks

  • GlassFish Extension for Oracle JDeveloper

    - by Shay Shmeltzer
    We just released a new version of Oracle JDeveloper, 11.1.2.3. One new feature is built-in support for GlassFish. This includes the ability to create an "application server" connection to GlassFish and then deploy to that server with one click from inside JDeveloper. You can use this for deploying Oracle ADF Essentials applications on GlassFish, but you can also use it to deploy any Java EE application you build in JDeveloper. However, if you are planning to work with GlassFish and JDeveloper on a more regular basis as your development server, then you might find my new extension useful. The new extension allows you to start and stop an external GlassFish instance, as well as start it in debug mode (which will allow JDeveloper to remotely debug your application as it runs on the server). I also added a button that will invoke the web admin console of GlassFish. Here is a quick demo that will show you how to work with the extension. The extension is available from Help > Check for Updates, or you can download it directly from here and then use Help > Check for Updates pointing to the local zip file.
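
    For comparison, the operations the extension's buttons wrap can also be run from GlassFish's own CLI; a hedged sketch using the default domain name:

        asadmin start-domain domain1
        asadmin stop-domain domain1
        # start with the JVM debug agent listening, for remote debugging:
        asadmin start-domain --debug domain1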

  • Limit maximum incoming connections to a port using iptables

    - by Harley
    I have a server with Apache listening on a number of ports. Some ports are used for configuring the server, and another is used to download large files. My problem is that when I have a large number of clients downloading files, the web interface becomes uncontactable. I would like to limit the number of clients connecting on the "large file" port so that Apache always has connections available for configuring the server. A REJECT is fine; the client trying to download the file will back off and retry later. Each client only has one connection open to the server at a time, so limiting by IP won't work. I know I could put something in front of Apache to manage this, but I'd really like to do it in iptables, without adding more software.
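
    This maps naturally onto iptables' connlimit match; a hedged sketch, where 8081 stands in for the large-file port and 20 for the cap:

        # reject new connections once the port has 20 established clients;
        # --connlimit-mask 0 counts all source addresses as one group,
        # giving a global limit rather than the default per-IP one:
        iptables -A INPUT -p tcp --syn --dport 8081 \
            -m connlimit --connlimit-above 20 --connlimit-mask 0 \
            -j REJECT --reject-with tcp-reset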

  • SSH Port Forward 22

    - by j1199dm
    I'm trying to set up the following: at work, I want to create a local port that forwards to port 22 on my home server:

        ssh -L 56879:home:22 username@home -p 443

    Right now I'm testing this on my two machines at home, my Ubuntu server and my iMac:

        iMac:   192.168.1.104
        ubuntu: 192.168.1.103

    On the iMac I run ssh -p 443 -L 56879:192.168.1.103:22 [email protected]. In ~/.ssh/config on my iMac I have the port set to 56879, so when I do git pull remoteserver:/path/to/repo.git on the iMac, git will use the ssh client on the iMac with port 56879 from the config, which should forward to port 22 on the Ubuntu machine. I keep getting "connection refused". Any ideas?
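
    A hedged sketch of how this is usually wired up, assuming (as in the test) that the server side accepts ssh on 443; the key detail is that git must connect to localhost, where the forwarded port actually listens, rather than to the remote machine's own name (the Host alias and paths are placeholders):

        # ~/.ssh/config on the iMac
        Host repo-tunnel
            HostName localhost
            Port 56879

        # keep the tunnel open in one shell:
        ssh -p 443 -L 56879:192.168.1.103:22 username@192.168.1.103
        # then pull through it in another:
        git pull repo-tunnel:/path/to/repo.git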

  • Facebook doesn't work on my computer but works on a mobile device, both using the same router

    - by sasa
    I have a very strange problem. I think it may be a DNS issue or something similar, but I'm not sure and don't know how to solve it. My computer is connected to a router and every site works fine except Facebook (in Chrome and Firefox). Chrome shows "Error 101 (net::ERR_CONNECTION_RESET): The connection was reset." But on a mobile device connected to the same router, Facebook works fine (the Facebook application and the Dolphin browser). Pinging facebook.com works fine. Clearing cookies and cache didn't help. I also performed antivirus and antimalware scans, and there is nothing. What could the problem be? Update: I also connected a notebook to that wifi router, and on it Facebook works fine.

        nslookup facebook.com
        Server:  UnKnown
        Address: 192.168.1.1

        Non-authoritative answer:
        Name:    facebook.com
        Addresses: 2a03:2880:2110:3f01:face:b00c::
                   2a03:2880:10:1f02:face:b00c:0:25
                   2a03:2880:10:8f01:face:b00c:0:25
                   69.171.224.37
                   69.171.229.11
                   69.171.242.11
                   66.220.149.11
                   66.220.158.11
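
    When a reset hits only one machine behind an otherwise working router, one routinely suggested check is MTU/fragmentation; a hedged sketch using Windows ping's don't-fragment option (1472 = a 1500-byte MTU minus 28 bytes of headers):

        ping -f -l 1472 www.facebook.com

    If that reports the packet needs to be fragmented, retrying with smaller sizes to find the largest that passes would point at an MTU mismatch on that machine.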

  • GDM login screen is not displayed with VNC

    - by niboshi
    Hi, I set up a VNC server with xinetd and configured GDM so that XDMCP is enabled. The VNC connection seems okay, but the GDM login screen is not shown. Instead I can only see the old bare X screen (gray meshed background and X-shaped mouse pointer), which I can't interact with. What can I do to fix the problem? No log is written below /var/log/. Server distribution: Ubuntu Maverick. /etc/xinetd.d/vnc looks like this:

        service vnc1024
        {
            disable     = no
            socket_type = stream
            protocol    = tcp
            wait        = no
            user        = nobody
            server      = /usr/bin/Xvnc
            server_args = -inetd -query localhost -geometry 1024x768 -depth 24 -once securitytypes=none
            port        = 12345
        }

    /etc/gdm/custom.conf:

        [daemon]
        [security]
        DisallowTCP=false
        [xdmcp]
        Enable=true
        [gui]
        [greeter]
        [chooser]
        [debug]
        [servers]

    /etc/services is also configured. Thanks
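
    A bare gray root weave with the X cursor is what Xvnc shows when its -query to the display manager goes unanswered, so one quick check is whether GDM is actually listening for XDMCP (UDP port 177) after the config change; a hedged sketch:

        sudo netstat -lnup | grep ':177'   # gdm should be bound to UDP 177
        sudo restart gdm                   # Upstart job name on this release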

  • Cat 6 only reaching 100 Mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one, which I ordered online (http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product), only achieves 100 Mbit speeds, though it does appear to support crossover (direct PC to PC); the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor over a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to manually set a setting somewhere in Ubuntu for some cables? I had thought 10 quid for 20m of gigabit ethernet cable was a bit cheap; you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000 Mbit/s when I look it up with sudo ethtool eth0. However, after a while this drops down to just 100, after which the only way to get back to 1000 is to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the network config file as suggested here:

        auto eth0
        iface eth0 inet static
            pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override a setting to 1000 Mbit?
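
    One hedged guess at why that stanza broke networking: iface eth0 inet static requires address/netmask lines, which are missing here. A sketch that keeps DHCP and nudges negotiation after the interface comes up (0x020 is ethtool's code for advertising only 1000baseT/Full; gigabit over copper requires autonegotiation, so forcing the speed outright tends to fail):

        auto eth0
        iface eth0 inet dhcp
            post-up /usr/sbin/ethtool -s eth0 advertise 0x020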

  • Configuring vsftpd with nginx on Ubuntu

    - by arby
    I have vsftpd installed on Ubuntu 12.04 LTS along with nginx, PHP, and SQL on an Amazon EC2 instance. The web server is good to go, but I'm having trouble connecting to the FTP server. I'm not quite sure how to set the privileges, or what configuration options I might be missing. By default the web root is at /usr/share/nginx/www and it is owned by root:root; the web server runs as user www-data in the group www-data. I've opened port 21 and set the passive ports in the EC2 backend and the ufw firewall. In vsftpd.conf I have:

        ...
        anonymous_enable=NO
        local_enable=YES
        local_umask=0027
        chroot_local_user=YES
        pasv_enable=YES
        pas_max_port=12100
        pasv_min_port=12000
        port_enable=YES
        ...

    Now, I'm unsure how to create an FTP user that, when I log in, lands in my web directory with write access. I've tried a few different ways, but I keep running into errors (either no connection, no write access, or very slow timeouts).
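
    A hedged sketch of one common arrangement: give the FTP user the web root as its home directory, put it in the web server's group, and grant group write (the user name is a placeholder):

        sudo useradd -d /usr/share/nginx/www -G www-data -s /bin/sh ftpuser
        sudo passwd ftpuser
        sudo chgrp -R www-data /usr/share/nginx/www
        sudo chmod -R g+w /usr/share/nginx/www

    One caveat: with chroot_local_user=YES, the vsftpd build shipped with 12.04 refuses a session when the chroot root itself is writable, so the root directory may need to stay non-writable with uploads going into a writable subdirectory.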

  • Backup Dropbox to Amazon Glacier

    - by joekr
    I'm using Dropbox for backup, which means I keep all my files in my Dropbox folder (encrypted using encfs, but that should not be relevant). I like this solution because it is automatic and keeps copies of my files on several machines at different locations. The only thing I could see going wrong is Dropbox having some sort of bug that tells all my machines to delete the files, so currently I back up the Dropbox folder to an external hard drive. With Amazon Glacier it seems affordable to automate backup snapshots of my Dropbox. What I am looking for is a tool that will do this for me; the base-case scenario would be that files go from Dropbox (using their API) directly to Amazon, as uploading the ~80GB from my home connection would take forever... Thanks!
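
    Lacking such a tool, a hedged sketch of what one snapshot push could look like with the AWS CLI (the vault name and paths are placeholders; Glacier stores opaque archives, so bundling the folder into one file keeps later retrieval manageable), though note this still uploads from the local machine, which is what the question hopes to avoid:

        # bundle the Dropbox folder into a dated archive:
        tar czf dropbox-$(date +%F).tar.gz ~/Dropbox
        # upload it to a Glacier vault (single-part; files this large
        # would realistically want the multipart upload operations):
        aws glacier upload-archive --account-id - \
            --vault-name dropbox-snapshots \
            --body dropbox-$(date +%F).tar.gz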

  • Virus blocking incoming connections?

    - by Benoît
    Hello, on my Windows 2003 server, all incoming connections are dropped. I can see them arriving using Wireshark, but even a single ping from another computer fails. All locally initiated connections work fine (I'm asking from the server itself). This server is the DC/DHCP/DNS/file server, so the client computers are in the dark. I've run various antivirus and removal tools without any luck. The Windows Firewall is disabled. I'm wild-guessing at some virus/worm. How can I check why these incoming ICMP/TCP SYN/etc. packets are dropped? Does anyone have any knowledge of such situations? Thanks.
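
    Besides malware, one thing worth ruling out is a leftover IPsec filter policy, which drops inbound traffic silently even with Windows Firewall off; a hedged sketch of the checks on Server 2003:

        netsh ipsec static show all   # any assigned policy with block filters?
        netsh firewall show state     # confirm the firewall really is off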

  • Error when trying to access shared files from iMac via SMB

    - by SatheeshJM
    I used to access all my Windows XP shared files on my Mac using Finder > Window > Connect to Server. Now, all of a sudden, an error crops up when I try to connect: "There was a problem connecting to the server '192.168.1.*'. The server may not exist or it is unavailable at this time. Check the server name or IP address, check your internet connection and then try again." How can I remove this error and access my shared files from my Mac? P.S. My network connection is fine.
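
    A hedged way to get more detail than Finder's generic dialog is to try the same share from Terminal (the address, user, and share name are placeholders):

        # list the shares the XP machine offers:
        smbutil view //username@192.168.1.5
        # or mount one directly:
        mkdir /tmp/xpshare && mount_smbfs //username@192.168.1.5/shared /tmp/xpshare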

  • 12.04 LTS boot hangs at "SP5100 TCO timer: mmio address 0xfec000f0 already in use", didn't yesterday

    - by DarkIron112
    Dual-booting Windows 7 and Ubuntu 12.04 LTS, I went to reboot from Windows to Ubuntu and found a few interesting things. My POST screen is covered in blocks of epileptic colors until I hit GRUB, and this continues when I try to boot into Ubuntu. These color blocks don't appear when I use my onboard VGA, so I'll attribute that to the card. GRUB's dimensions are swapped (card vs. onboard, probably), and when interfacing with the onboard VGA the GRUB timeout counter works; when using my card, it does not (see "[!!!]" below for more information). Booting into Ubuntu directly causes the error:

        SP5100 TCO timer: mmio address 0xfec000f0 already in use

    Booting into recovery mode and then "resuming normal boot" gets me to the desktop without the native 1440x900 resolution, and the graphics drivers can't tell what monitor they're looking at (I assume this is because it's not a full graphical boot and, as it says, some drivers won't run?). [!!!] When I reboot after going into recovery mode, the countdown timer works ONCE, puts me back into the default Ubuntu boot, and then does not work again until after another recovery-mode boot. Windows 7 boots perfectly, with no issues whatsoever from epilepsy color blocks or driver detection. This makes me wonder why the POST screen can't handle my video card anymore. Amidst all the diagnostics I opened my case and re-seated the video card securely, ensuring it wasn't a loose connection, but this did nothing to help. Hardware: I am running an NVidia GeForce GTX 8800 video card in a PCI slot. I have 4.8GiB memory and an AMD Athlon II quad-core 640 processor, on an MSI K9N6GM series mobo. Onboard video is an NVidia GeForce MCP61(V/S/P) card. Note: I did not have any of these problems yesterday. I have been using Ubuntu intensively for a week, though it's been working flawlessly for months. I've recently been using it to mod my Android phone; perhaps I messed something up in the file system?
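
    For the SP5100 TCO timer line specifically, a widely reported workaround on this generation of AMD boards is to blacklist the watchdog module so it never claims that mmio address (the message is often harmless noise rather than the cause of a hang, so this mainly helps narrow things down); a hedged sketch:

        echo "blacklist sp5100_tco" | sudo tee -a /etc/modprobe.d/blacklist.conf
        sudo update-initramfs -u   # apply the blacklist to early boot as well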
