Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.


  • A load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario, using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (once directly against one of the web servers, and once through the load balancer) with the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm doing something wrong?
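
    A fair comparison needs keep-alive on both runs, and a single cached index.html mostly measures the extra proxy hop rather than the farm's capacity; the balancer only pays off once one backend alone is the bottleneck. A minimal sketch of the two runs (hostnames are placeholders):

      # one web server, hit directly
      ab -k -n 100000 -c 500 http://webserver1/index.html
      # the same request through the keepalived virtual IP fronting HAProxy
      ab -k -n 100000 -c 500 http://virtual-ip/index.html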

    Read the article

  • Trouble configuring sendmail to relay mail

    - by Warren Schubert
    I am trying to configure sendmail and ufw on an Ubuntu server (ServerA) so that another server (ServerB) can send mail through it. In my /etc/mail/access file I have the following line (a.b.c.d is the IP of ServerB): Connect:a.b.c.d RELAY My ufw status shows the rule I added: 25/tcp ALLOW a.b.c.d When I telnet from ServerA itself I get through: telnet localhost 25 When I telnet from ServerB I don't (w.x.y.z is the IP of ServerA): telnet w.x.y.z 25 telnet: Unable to connect to remote host: Connection refused I did restart the sendmail daemon after editing the access file. What could I be missing? Something in sendmail.mc?
    Edit: the output of "netstat -an | grep -w 25" is:
      tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
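
    That netstat line is the giveaway: sendmail is listening on 127.0.0.1 only, so no firewall rule can let ServerB in. A hedged sketch of the usual Ubuntu fix in /etc/mail/sendmail.mc (the Addr value is the assumption to verify):

      dnl # default Ubuntu line, bound to loopback only:
      dnl DAEMON_OPTIONS(`Family=inet,  Name=MTA-v4, Port=smtp, Addr=127.0.0.1')dnl
      dnl # listen on all interfaces (or put ServerA's own IP here) instead:
      DAEMON_OPTIONS(`Family=inet,  Name=MTA-v4, Port=smtp, Addr=0.0.0.0')dnl

      # rebuild sendmail.cf and restart:
      sudo sendmailconfig
      sudo /etc/init.d/sendmail restart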

    Read the article

  • Switch 302 redirect to 301 with Apache 2 ProxyPass in front of Tomcat 6

    - by Gearóid
    I'm trying to optimise my site for SEO and it seems as though there is a 302 redirect in action for the HTTP requests. I'm hosting my app on a Tomcat 6 server which sits behind an Apache 2 server. I use the ProxyPass method (http://tomcat.apache.org/tomcat-6.0-doc/proxy-howto.html) to forward all requests to port 8080 (the port my app is hosted on). I've seen a lot of advice on how to set the redirect type when using the VirtualHost method, but none to do with ProxyPass. The app is a Struts app that forwards users on to index.jsp when they hit the base URL. Could this also be the issue? I'm grateful for any help on this one! Cheers!
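
    One hedged approach: since mod_rewrite runs before ProxyPass maps the request, Apache can issue the base-URL redirect itself as a 301 so Tomcat's 302 never happens. A sketch for the VirtualHost that holds the ProxyPass rules (the hostname is a placeholder):

      RewriteEngine On
      # answer the bare base URL ourselves with a permanent redirect
      RewriteRule ^/$ http://www.example.com/index.jsp [R=301,L]
      # everything else is proxied to Tomcat as before
      ProxyPass        / http://localhost:8080/
      ProxyPassReverse / http://localhost:8080/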

    Read the article

  • Bugzilla random lag and CPU usage

    - by Serty Oan
    Here is the configuration: Bugzilla (3.4.2) running on an Ubuntu 8.04 server with Perl 5.8.8. Here is the problem: sometimes (randomly), pages take very long to load. It can be any of the login page, query.cgi, buglist.cgi... etc. Using top on the server, I tried to see what was wrong, and saw that sometimes a Bugzilla script would use 30% memory and run pretty long (1 to 5 seconds), and sometimes not even show in the list because it responded too quickly, or would use just 2%. The mysqld process does not seem to be the problem.
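
    One cheap way to find out which script is slow and when, assuming the usual Apache front end on 8.04: log per-request service time and correlate the spikes with the slow page loads.

      # in the Apache site config: %D is the request duration in microseconds
      LogFormat "%h %l %u %t \"%r\" %>s %b %Dus" timed
      CustomLog /var/log/apache2/bugzilla-timed.log timed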

    Read the article

  • httpd service keeps restarting after 15-20 mins

    - by niraj
    I have recently purchased a dedicated server which has 16 GB RAM and a 1 TB hard disk. It has cPanel, and CSF is installed for the firewall. I am mainly going to use it for a file hosting service. Since the day I moved in, the httpd service keeps restarting every 15-20 minutes. It becomes unresponsive after that, so I have to restart it manually. My httpd settings are:
      Start Servers = 5
      Minimum Spare Servers = 5
      Maximum Spare Servers = 10
      Server Limit = 20000
      Max Clients = 10000
      Max Requests Per Child = 10000
      Keep-Alive = On
      Keep-Alive Timeout = 5
      Max Keep-Alive Requests = Unlimited
      Timeout = 300
    top shows:
      top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09
      Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st
      Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers
      Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached
    Please help me with this; it keeps happening.
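
    A sanity check worth doing before anything else (not a fix in itself): with prefork, Max Clients times the average per-process footprint has to fit in RAM, and 10000 clients on 16 GB leaves under 2 MB per process. A sketch, assuming cPanel's usual log path:

      # average resident memory per httpd process, in MB
      ps -ylC httpd --sort=rss | awk 'NR>1 {sum+=$8; n++} END {print sum/n/1024}'
      # and look at what Apache says right after a stall
      tail -n 100 /usr/local/apache/logs/error_log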

    Read the article

  • Access 2 sites on same machine behind a router

    - by Luc
    Hello, I have several machines on my LAN. One of them is running two web sites, first_web_site and second_web_site (each one in a dedicated NameVirtualHost). Another machine is running a third site, third_web_site. I would like to be able to access each one from the Internet with the URLs first_web_site.domain.tld, second_web_site.domain.tld and third_web_site.domain.tld, knowing that two of the sites are on the same machine. Can Apache help me do this? I have a machine that will run an Apache server for proxy purposes. I was told to set up virtual hosts on it and use it as a proxy server, but I do not know how to do this. Could you please give me some hints? Thanks a lot, Luc
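
    Yes - the proxy machine can hold one name-based VirtualHost per public hostname and forward by name. A minimal sketch with hypothetical LAN addresses (192.168.0.10 for the machine carrying the first two sites, 192.168.0.11 for the third); ProxyPreserveHost keeps the original Host header so the two-site machine can still pick its own NameVirtualHost:

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName first_web_site.domain.tld
          ProxyPreserveHost On
          ProxyPass        / http://192.168.0.10/
          ProxyPassReverse / http://192.168.0.10/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName second_web_site.domain.tld
          ProxyPreserveHost On
          ProxyPass        / http://192.168.0.10/
          ProxyPassReverse / http://192.168.0.10/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName third_web_site.domain.tld
          ProxyPass        / http://192.168.0.11/
          ProxyPassReverse / http://192.168.0.11/
      </VirtualHost>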

    Read the article

  • Clean URLs with mod_rewrite and URL-encoded characters cause 404?

    - by Richard JP Le Guen
    I have a web site using mod_rewrite to get some clean URLs and custom 404 pages. My .htaccess file looks like this:
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?clean_url=$1 [QSA,L]
      </IfModule>
    What puzzles me is that if the URL contains a %2F (a URL-encoded /) the server seems to force a 404. As an example, http://example.com/category/article would be a normal article, but http://example.com/category%2farticle gives a server-generated 404 page (not the custom 404 page). I wouldn't have expected this... why is this happening? Is there a way around it?
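
    This is documented Apache behaviour rather than a mod_rewrite problem: with the default AllowEncodedSlashes Off, any %2F in the path is refused with a server-level 404 before rewrite rules or the custom ErrorDocument ever run. The directive only takes effect in the main server or VirtualHost config, not in .htaccess:

      AllowEncodedSlashes On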

    Read the article

  • Mapping SkyDrive as a network drive in Mac OS X

    - by vittore
    As you probably know, if you have a Windows Live account you can use the free 25 GB SkyDrive storage. Moreover, a lot of people know that if you go to your SkyDrive in a browser and copy the cid query parameter value (https://...live.com/...&cid=xxxxxxxx), you can map SkyDrive as a network drive in Windows using the network path \\[cid].docs.live.net\[cid]\. I know that if I have a network share like \\server\folder I can map it in Mac OS too, as smb://server/folder. However, that doesn't seem to be the case with SkyDrive: when I try to map it as smb://[cid].docs.live.net/[cid], Finder tells me it can't connect. Does anyone know how to map it?
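
    A hedged explanation: the Windows mapping runs over WebDAV, not SMB, which is why smb:// can't connect. Mac OS X mounts WebDAV shares natively, either via Finder's Go > Connect to Server with an https URL, or from a terminal (CID below stands for your cid value):

      mkdir /Volumes/skydrive
      mount_webdav https://CID.docs.live.net/CID/ /Volumes/skydrive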

    Read the article

  • Creepy MySQL error under heavy load

    - by Kiewic
    Hi, I have a MySQL database installed on an OpenSUSE 11.1 server (it is a Bitnami image). The database works fine and can stay up for many days without any error, but when MySQL receives a huge number of transactions, it dies immediately. A screenshot showed the error (image not reproduced here). Moreover, I don't know how to restart MySQL. I have tried this: /opt/bitnami/mysql/bin/mysqld start But it doesn't work; that gives me the following output:
      110209 17:09:01 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
      110209 17:09:01 [ERROR] Aborting
      110209 17:09:01 [Note] /opt/bitnami/mysql/bin/mysqld.bin: Shutdown complete
    It doesn't matter which kind of statements are executing; if there are a huge number of them, MySQL dies. The MySQL server version is 5.1.30. What could be causing these sudden failures?
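
    The startup error itself is just mysqld refusing to run as root. Two hedged ways around it on a Bitnami image (ctlscript.sh is Bitnami's standard control script and is assumed to be present):

      sudo /opt/bitnami/ctlscript.sh start mysql
      # or start the daemon directly as the mysql user:
      sudo /opt/bitnami/mysql/bin/mysqld_safe --user=mysql &

    The crash under load is a separate question; with the server up again, the data directory (likely /opt/bitnami/mysql/data) should contain an .err file worth reading.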

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However at around 4PM Monday to Friday we see the same issue arise -- 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS, these come into our Layer 7 load balancer which then decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data onto an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80 which takes in the request from the load balancer and passes the request through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too, MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, however at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing:
      stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
      stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
    This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process which is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem - especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final) and they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar, you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP) Thanks!
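
    On the visibility questions, one low-tech approach that needs no PHP recompile: attach gdb to the spinning process and take a C-level backtrace (assuming gdb can be installed on the CentOS 5 boxes; the PID is whatever top shows at 100%):

      gdb -p 12345
      (gdb) bt
      (gdb) detach
      (gdb) quit

    Since the strace loop is entirely timezone lookups, it is also worth confirming that date.timezone is set explicitly in php.ini - a hedged hunch, not a known fix for this bug.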

    Read the article

  • Linux script to find time difference and send an email if needed

    - by Gnanam
    Hi, I'm not an expert at writing shell scripts, and I'm looking for a very specific solution. OS: CentOS release 5.2 (Final). I have a standalone Java application which keeps writing (all System.out.println) to a log file. For some unknown reason, this Java standalone stops working at some point in time on my server, and eventually writing to the log stops as well. I want a script which compares the last-modified date & time of the log file with the current date & time on the server. If the difference exceeds 5 minutes, I want to send an email immediately to my recipients list. That way I'll know when the Java standalone has stopped working. I'll put this script in crontab and make it run every minute, so that the whole process is automated. Log file location: /usr/local/logs/standalone.log
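
    A minimal sketch of such a script (the recipient address is a placeholder; the path and the 5-minute threshold are from the question):

      #!/bin/bash
      LOG=/usr/local/logs/standalone.log
      THRESHOLD=300                      # 5 minutes, in seconds

      now=$(date +%s)
      mtime=$(stat -c %Y "$LOG")         # last modification, epoch seconds
      age=$(( now - mtime ))

      if [ "$age" -gt "$THRESHOLD" ]; then
          echo "standalone.log untouched for ${age}s" \
              | mail -s "Java standalone appears down" admin@example.com
      fi

    With the crontab entry "* * * * * /usr/local/bin/check_standalone.sh" it runs every minute; adding a marker file to suppress repeat mails is a natural next step.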

    Read the article

  • Two mail servers, need help with DNS configuration for the backup one

    - by user92231
    I need to run a redundant backup mail server in case the main one goes down. The settings in GoDaddy look something like the following:
    A (Host) records:
      @     points to the IP of mail1 (41.x.x.x)
      mail1 points to the IP of mail1 (41.x.x.x)
      mail2 points to the IP of mail2 (196.x.x.x)
    MX records:
      priority 10: @ points to mail1.mydomain.com
      priority 20: @ points to mail2.mydomain.com
    When mail1 goes down, mail2 is able to receive email. I can access it through the browser with no problem, but I want my users to be able to use POP3/SMTP as well, without changing anything in their Outlook. I don't want any impact on the users when mail1 is down. Also, I'm using Windows Server DFS to keep both mail folders in sync. Is this the right way, or should I be using something else?
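
    One caveat the records can't solve: MX priorities only cover inbound server-to-server SMTP - Outlook's POP3/SMTP settings never consult MX records, so clients pointed at mail1 won't fail over by themselves. Quick checks for the inbound half (hostnames from the records above):

      dig +short MX mydomain.com          # both MX entries should come back
      telnet mail2.mydomain.com 25        # mail2 must answer SMTP from outside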

    Read the article

  • Backup broken PostgreSQL 8.4 without pg_dump

    - by Daniil
    So, I have a problem. PostgreSQL 8.4 won't start or restart, with no output given. It worked for 3 months, until the hosting provider rebooted the server. Now it is completely broken: it won't start and gives no output or log. pg_dump says:
      pg_dump: [archiver (db)] connection to database "postgres" failed: No such file or directory
      Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
    Now I want to back up my database (or just get the PostgreSQL socket up) so I can reinstall PostgreSQL. How?
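
    Before reinstalling, two hedged steps, using Debian's usual 8.4 paths (assumed, since the question doesn't state them): a foreground start so the real error lands on the terminal instead of a log that may never be written, and a raw file-level copy of the cluster as a fallback backup.

      sudo -u postgres /usr/lib/postgresql/8.4/bin/postgres \
          -D /var/lib/postgresql/8.4/main \
          -c config_file=/etc/postgresql/8.4/main/postgresql.conf
      # file-level fallback backup (only consistent with the server stopped):
      sudo tar czf /root/pgdata-backup.tgz /var/lib/postgresql/8.4/main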

    Read the article

  • Client can't reach my production webserver. It's their ISP's fault, but now what?

    - by MikeN
    I have a customer in Michigan who can't access my production SaaS webserver that is hosted on Slicehost. All other companies across the US/Canada/Europe have no problem reaching the site. This problem occurs intermittently, and Slicehost customer service says it's a problem with the client's ISP. I got the IP address of my client, and pinging that IP address from my PROD server fails, but pinging the IP address from my dev box or our separate blog server (also hosted on Slicehost) works. How do I debug a problem like this? I asked the client to reach out to their local ISP and ask about this problem. A traceroute shows that the packets are getting stopped at a Comcast Michigan node, which is the client's ISP. Is there anything additional I can do to fix this problem for my client?
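
    Evidence is the main lever here. A sketch of what to hand the ISP, assuming mtr can be installed on the slice: a report from the production server toward the client while the problem is live, paired with a tracert from the client back.

      mtr --report --report-cycles 30 a.b.c.d    # a.b.c.d = the client's IP
      # on the client's Windows machine: tracert your.production.ip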

    Read the article

  • Disable .NET completely in an IIS6 application pool

    - by David L.-Pratte
    We're managing some web sites for our clients on our servers, some running Windows Server 2003 R2 and others running 2008 R2. In Windows Server 2008 R2, we can completely disable .NET framework usage for some application pools, which is great since most of our websites still use classic ASP. After some issues with classic ASP applications being configured to run as ASP.NET 4 in a CLR 2.0 pool, we wanted to do the same thing in IIS6 - that is, have application pools without any .NET support. Is this a supported scenario in IIS6? Thanks
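
    IIS6 has no per-pool .NET switch, so the closest equivalent is per-application: strip the ASP.NET script maps from each classic ASP site so the runtime never loads. A hedged sketch, assuming the site's metabase path is W3SVC/1/ROOT and .NET 4 is the registered version:

      %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -k W3SVC/1/ROOT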

    Read the article

  • Getting an error while running apt-get update

    - by pradeepchhetri
    I am getting the following error while running apt-get update on all of the servers:
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8B48AA6247928553
      W: Some index files failed to download, they have been ignored, or old ones used instead.
    The known solution is to log into each server and run the following commands to import the GPG key for that repo:
      sudo gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
      sudo gpg --export --armor 8B48AA6247928553 | sudo apt-key add -
    But this requires logging into all of the individual servers and running those two commands. I am looking for a way to fix this on the apt repository server itself. Is there any way to do that?
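
    Short of fixing the repository itself, the per-host commands can at least be pushed from one place. A sketch assuming ssh access and a hosts.txt listing every server:

      gpg --keyserver hkp://subkeys.pgp.net --recv-keys 8B48AA6247928553
      gpg --export --armor 8B48AA6247928553 > repo.key
      for h in $(cat hosts.txt); do
          ssh "$h" 'sudo apt-key add -' < repo.key
      done

    The durable fix on the repo server is to sign the Release file with a key you publish alongside the repo (gpg -abs -o Release.gpg Release), so hosts only ever import one well-known key.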

    Read the article

  • 503 error Varnish cache when eAccelerator is started

    - by Netismine
    I have a Magento installation running on an x-large Amazon server. I have Varnish, memcached and eAccelerator installed on the server. At first everything was working fine, but at some point it stopped working, throwing a 503 error with the Varnish cache stamp below it. When I disable eAccelerator, the error is gone and the site works. This is my eAccelerator config:
      extension="eaccelerator.so"
      eaccelerator.shm_size = "512"
      eaccelerator.cache_dir = "/var/cache/php-eaccelerator"
      eaccelerator.enable = "1"
      eaccelerator.optimizer = "1"
      eaccelerator.debug = 0
      eaccelerator.log_file = "/var/log/httpd/eaccelerator_log"
      eaccelerator.name_space = ""
      eaccelerator.check_mtime = "1"
      eaccelerator.filter = ""
      eaccelerator.shm_ttl = "0"
      eaccelerator.shm_prune_period = "0"
      eaccelerator.shm_only = "0"
      eaccelerator.allowed_admin_path = ""
    Any hints?
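
    One hedged lead: eaccelerator.shm_size = "512" requests 512 MB of shared memory, and eAccelerator's documentation notes the ceiling is the kernel's shmmax - if the allocation fails, the PHP backends die and Varnish reports the 503. Worth comparing (values in bytes):

      cat /proc/sys/kernel/shmmax
      # raise it if it is below 512 MB:
      sysctl -w kernel.shmmax=536870912

    The eaccelerator_log and Apache's error_log at the moment of a 503 should confirm or rule this out.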

    Read the article

  • Instructions to set up a domain controller

    - by Robert Koritnik
    Where could I get the best step-by-step instructions (with some simple explanations) for setting up a domain controller on Windows Server 2008 R2 Server Core? I don't know what I need. Do I need DNS as well, and AD, and so on and so forth? I don't know enough about these things, but I need to set them up to prepare a development environment. I would also like to know how to configure the firewall on the DC machine to make it visible to other machines, because I've set up a DC somehow but I can't connect to it... This is my hardware configuration:
    - a Linksys internet router with DHCP
    - my dev machine runs Windows 7
    - my DC machine is a VM inside my dev machine
    - my dev machine has a network adapter to the Linksys router and a virtual adapter to the DC
    - the DC machine has two network adapters: one to the Linksys router (to be internet connected) and one to the host (my dev Win7 machine)
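
    For the promotion itself, a rough sketch for Server Core: dcpromo is driven by an answer file, DNS can be installed as part of the same run, and a firewall rule group opens remote management. The forest name and password below are placeholders.

      dcpromo /unattend:C:\dcpromo.txt
      netsh advfirewall firewall set rule group="remote administration" new enable=yes

    where C:\dcpromo.txt contains:

      [DCInstall]
      ReplicaOrNewDomain=Domain
      NewDomain=Forest
      NewDomainDNSName=dev.local
      InstallDNS=Yes
      SafeModeAdminPassword=PickOneHere1!
      RebootOnCompletion=Yes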

    Read the article

  • How to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a bunch of servers connected to shared storage on a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, like switches? Routing & DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows 2003 Server for DHCP, DNS and A/D. Would it be more flexible to use Linux distributions like IPCop, Untangle etc.? All that I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS and AD clashes.

    Read the article

  • Apache: tmp is not writable

    - by Patrick
    Hi, I've installed Drupal on a new webserver and I get the following errors:
      warning: is_writable() [function.is-writable]: open_basedir restriction in effect. File(/tmp) is not within the allowed path(s): (/customers/rollergirl.ch/rollergirl.ch:/var/www/diagnostics:/usr/share/php) in /customers/rollergirl.ch/rollergirl.ch/httpd.www/drupal/sites/all/modules/imagecache/imagecache.install on line 37.
      ImageCache Temp Directory /tmp is not writeable by the webserver.
    I guess this happens because the server is not configured with a writable tmp folder. I don't have access to the Apache configuration file (I only know for sure it is Apache). Could you suggest what I should do? Can I only contact the hosting service? Thanks
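
    A sketch of the usual workaround when /tmp sits outside open_basedir: create a temp directory inside the allowed path (visible in the error message) and point Drupal at it - ImageCache reads the same setting.

      mkdir /customers/rollergirl.ch/rollergirl.ch/tmp
      chmod 770 /customers/rollergirl.ch/rollergirl.ch/tmp

    Then in Drupal go to Administer > Site configuration > File system and set the temporary directory to that path.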

    Read the article

  • Linux-VServer: how to upgrade Debian 5.0 to 6.0 on the vservers and the main machine?

    - by Bartosz Kowalczyk
    I have a server running Debian Lenny. I installed Linux-VServer on it a few years ago. In summary, I now have 5 vserver guests plus the main system, and each guest is also Debian Lenny. I now want to upgrade from Lenny to Squeeze on all of them (each vserver and the main machine). How would you do it? Should I upgrade each one as a normal system? Should I first upgrade every vserver and then the main machine, and do I have to reboot all the machines and vservers? Please advise me on how to do it.
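
    A hedged outline, assuming the util-vserver tools: upgrade the guests first and the host last, switching sources from lenny to squeeze each time; guests share the host kernel, so a single host reboot at the end covers everything.

      vserver GUESTNAME enter
      sed -i 's/lenny/squeeze/g' /etc/apt/sources.list
      apt-get update && apt-get upgrade && apt-get dist-upgrade
      exit

    Repeat per guest, run the same sequence on the host itself, then reboot once. Checking first that Squeeze still ships a vserver-patched kernel for your architecture is prudent.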

    Read the article

  • What would you use to keep track of all the random information about your LAN/Environment?

    - by agnul
    Say you work in your average anarchy... no appointed IT guy, no office manager whatsoever, just developers working in free-for-all mode and a couple of non-technical people around for admin (non-IT) stuff. What would you use to keep track of:
    - hardware / software inventory
    - backups of OS images, drivers, and all that
    - LAN configuration information
    - passwords for all the systems
    Currently I'm thinking of having some sort of wiki + web server on our only ''server-class'' machine, but then I fear I'd be the only one who would go through the hassle of adding information to the wiki... So, have you been through that? What would you suggest?

    Read the article

  • Adaptec 2020SA RAID controller set up in RAID5, one drive went down - can I still get data?

    - by SJaguar13
    I have six 1 TB drives in a RAID5. One drive went down. On the RAID were two virtual machines that I really need back up and running. The spare drive I have to put in the server is a 1.5 TB drive, which exceeds the physical per-drive limit of the 2020SA. The drive is found in the disk utility, but it is not found in the array management section, so I cannot add it to the array to have it rebuild. I have a replacement drive, along with some spares, on the way from Newegg, but I am still looking at a few days of downtime. Is it possible to use the five working drives to get the VMs copied off onto another server, or do I just have to wait for the drives to get here?

    Read the article

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple http connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds). What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as total number of concurrently connected http clients). The method I've come up with is to track http requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again. I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command), and checking the table size with the DBSIZE command. This worked but the performance was horrible. nginx and redis ended up using lots of cpu and the machine could only handle a very small number of concurrent requests. The new stick-table and tracking counters that were added to Haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, for nginx or haproxy or even for something else not mentioned here that I haven't thought of yet.
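
    For the counting gap specifically, a sketch of the HAProxy 1.5 side (section names hypothetical): a table keyed by source IP with a 20-second expiry tracks every client, and the "used" figure reported by "show table" on the stats socket is exactly the number of unexpired entries - good for observing the live client count, though enforcing the cap from inside the config remains the hard part the question identifies.

      global
          stats socket /var/run/haproxy.sock level admin

      frontend web
          bind :80
          stick-table type ip size 200k expire 20s store conn_cnt
          tcp-request connection track-sc1 src
          default_backend longpoll

      # current table occupancy, via the stats socket (socat assumed installed):
      echo "show table web" | socat stdio /var/run/haproxy.sock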

    Read the article
