Search Results

Search found 19061 results on 763 pages for 'load factor'.

Page 593/763

  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by user55859
    I used to have a 512 MB VPS from Linode running nginx + php5-fpm (which comes with PHP 5.3.3) on Debian Lenny (i686). Total memory usage was about 90-100 MB. Now I have another VPS (different hosting company) where I also run nginx + php5-fpm on Debian Lenny, but this system is x86_64, so memory usage is higher: about 210-230 MB, which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    This is what the top command tells me:

        top - 15:36:58 up 3 days, 16:05,  1 user,  load average: 0.00, 0.00, 0.00
        Tasks: 209 total,   1 running, 208 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:    532288k total,   469628k used,    62660k free,    28760k buffers
        Swap:  1048568k total,      408k used,  1048160k free,   210060k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+   COMMAND
        22806 www-data  20   0  178m  67m  31m S    1 13.1   0:05.02 php5-fpm
         8980 mysql     20   0  241m  55m 7384 S    0 10.6   2:42.42 mysqld
        22807 www-data  20   0  162m  43m  22m S    0  8.3   0:04.84 php5-fpm
        22808 www-data  20   0  160m  41m  23m S    0  8.0   0:04.68 php5-fpm
        25102 www-data  20   0  151m  30m  21m S    0  5.9   0:00.80 php5-fpm
        10849 root      20   0 44100 8352 1808 S    0  1.6   0:03.16 munin-node
        22805 root      20   0  145m 4712 1472 S    0  0.9   0:00.16 php5-fpm
        21859 root      20   0 66168 3248 2540 S    1  0.6   0:00.02 sshd
        21863 root      20   0 66028 3188 2548 S    0  0.6   0:00.06 sshd
         3956 www-data  20   0 31756 3052  928 S    0  0.6   0:06.42 nginx
         3954 www-data  20   0 31712 3036  928 S    0  0.6   0:06.74 nginx
         3951 www-data  20   0 31712 3008  928 S    0  0.6   0:06.42 nginx
         3957 www-data  20   0 31688 2992  928 S    0  0.6   0:06.56 nginx
         3950 www-data  20   0 31676 2980  928 S    0  0.6   0:06.72 nginx
         3955 www-data  20   0 31552 2896  928 S    0  0.5   0:06.56 nginx
         3953 www-data  20   0 31552 2888  928 S    0  0.5   0:06.42 nginx
         3952 www-data  20   0 31544 2880  928 S    0  0.5   0:06.60 nginx

    So, the question is: is there any way to use less memory? By the way, I have 16 cores and it would be nice to make use of them...
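    On a 64-bit box the usual levers are fewer, shorter-lived FPM workers, a lower PHP memory_limit, and capping MySQL's buffers. A minimal sketch of the kind of settings to experiment with; the numbers are assumptions for a ~512 MB VPS, not measured for this workload:

        ; php5-fpm.conf -- keep fewer workers around and recycle them sooner
        pm = dynamic
        pm.max_children = 4
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        pm.max_requests = 200
        ; cap per-request memory (also settable in php.ini)
        php_admin_value[memory_limit] = 64M

        # nginx.conf -- one worker per core makes use of the 16 cores
        worker_processes 16;

    Whether php_admin_value is accepted in the pool file depends on the PHP 5.3.3 FPM build, so treat that line as optional; the same limit can always go in php.ini.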

    Read the article

  • Using wildcard domains to serve images without http blocking

    - by iopener
    I read that browsers sometimes block while waiting for multiple images from the same host, and I'm trying to do everything I can to speed up page load times. One caveat: I need to serve files over HTTPS. Any opinions about whether this is feasible:

    1. Set up a wildcard cert for *.domain.com.
    2. Whenever I need an image, generate a number based on a hash mod 5 of the filename and append it to an 'img' subdomain (e.g. img1.domain.com, img4.domain.com, img3.domain.com, etc.); the hash makes any given filename always use the same subdomain, so the browser should still be able to cache the images.
    3. Configure a dynamic virtualhost record to point all img#. subdomains to /var/www/img.

    I am looking for feedback about this plan. My concerns are:

    - Will I get warnings when my page has https:// links to multiple subdomains?
    - Is the dynamic virtualhost record I'm talking about even possible?
    - Considering the amount of processing this would require, is it likely to produce any kind of overall benefit? I'm probably averaging a half-dozen images per page, with only half being changed on each page refresh.

    Thanks in advance for your feedback.
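    For what it's worth, the "dynamic virtualhost" part can be a single catch-all vhost rather than one record per subdomain. A minimal sketch for Apache with mod_ssl; the certificate paths and names are placeholders:

        <VirtualHost *:443>
            ServerName  img.domain.com
            ServerAlias img1.domain.com img2.domain.com img3.domain.com img4.domain.com img5.domain.com
            DocumentRoot /var/www/img
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
        </VirtualHost>

    On the DNS side a single wildcard record (*.domain.com) covers all the img# names. One small detail: a hash mod 5 yields 0-4, so either use img0 through img4 or add 1 before appending.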

    Read the article

  • Performance tweaks and upgrades for VMWare Server 2

    - by sjohnston
    Our software department has a server running VMware Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2. It has 32 GB RAM, an 8-core 3.16 GHz Xeon CPU, one disk for the host OS and two RAID disks for VMs. The majority of the time, this setup behaves very well and there are no complaints. Other times, the VMs can be very laggy. This is sometimes, but not always, correlated with heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing that if there's a bottleneck, it's probably disk I/O with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs, give us a significant performance boost? Other things I've read that may increase performance:

    - a single virtual processor per VM
    - removing/disabling unused virtual hardware
    - preallocated disk space
    - not using snapshots
    - setting a reserved memory limit on the host and disabling VM memory swapping

    Can anyone confirm or deny whether any of these improve performance? What other good tweaks have I missed?

    Read the article

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured per online research. No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. error_log tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month, that seems like overkill since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
        Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
        Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
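    Since error_log points at MaxClients, the usual fix on a small VPS is to cap Apache below the point where the box starts thrashing rather than raising the limit. A rough sketch of prefork settings for a 1 GB machine, assuming each Apache/PHP process weighs in around 40-60 MB (check the real size with top or ps and adjust; these values are assumptions):

        # httpd.conf (prefork MPM) -- hypothetical starting values
        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       4
            MaxSpareServers       8
            ServerLimit          12
            MaxClients           12
            MaxRequestsPerChild 500
        </IfModule>

        # free processes quickly instead of holding idle keep-alives
        KeepAlive On
        KeepAliveTimeout 3

    The rule of thumb is MaxClients = (RAM you can give Apache) / (average Apache process size). At ~120K pageviews a month, a WordPress page-caching plugin in front of PHP usually helps far more than extra RAM.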

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, so I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should show the cached page, but it doesn't. It's easy to check by reloading (Cmd+R on Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would be useless, just another way of doing what I'm already doing. Any other components worth adding? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm asking from the server perspective.)
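    Batcache depends on a working persistent object cache and quietly falls back to uncached pages when that backend isn't reachable, so a quick sanity check from the same PHP/HHVM runtime is worth doing before blaming the plugin. A minimal sketch, assuming the Memcached extension and a daemon on localhost:11211 (the key name is arbitrary):

        <?php
        // check-memcached.php -- run with the same runtime that serves WordPress
        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);

        // write then read a test key; failure means the object cache can't work
        $m->set('batcache_probe', 'ok', 60);
        var_dump($m->get('batcache_probe'));   // expect string(2) "ok"
        var_dump($m->getResultMessage());      // expect "SUCCESS"

    If the probe fails under HHVM but works under php-cli, the drop-in may also be the wrong one: there are separate object-cache.php implementations for the memcache and memcached extensions, and they are not interchangeable.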

    Read the article

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a perfectly bootable Windows 7 & Debian Wheezy dual-boot on it. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller drive. I wanted the following scheme:

    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian

    (Strange size for /home because there's no such thing as a true 256 GB disk drive.) So I prepared such partitions on my initial HDD, installed the new SSD, loaded a GParted live USB (can't remember now what it was really called), and then just copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks:

    SSD
    - 128 GB copy of the original Windows partition
    - 24 GB copy of presumably Debian /
    - 86 GB copy of presumably Debian /home

    HDD
    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian
    - ... several other partitions with non-system data ...

    The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB, and the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first. I managed to create a Debian Testing installation USB, loaded it into rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB which lists both Windows and Debian, but from the HDD. So I am now back in the initial position. Please, how should I set up GRUB so it'll load the OSes correctly from the SSD? Should I fire up my Debian, fiddle with GRUB's config and reinstall it again to the same place (on the SSD)?
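    One common way to do this is from the rescue/live environment: mount the copy of / that lives on the SSD, chroot into it, and reinstall GRUB so the generated grub.cfg points at the SSD's partitions rather than the HDD's. A sketch, assuming the SSD is /dev/sda and its Debian root copy is /dev/sda2 (substitute the real device names from fdisk -l or blkid):

        # from the live/rescue shell
        mount /dev/sda2 /mnt
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt
        grub-install /dev/sda    # write GRUB to the MBR of the SSD
        update-grub              # regenerate grub.cfg; os-prober (if installed) picks up Windows
        exit

    Because a GParted copy keeps the filesystem UUIDs of the originals, it is also worth checking /etc/fstab and the generated grub.cfg to be sure they reference the SSD copies and not the old HDD partitions; duplicate UUIDs are exactly what makes GRUB "find" the HDD instead.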

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to build all of my webservers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers building at a time, while other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost effective, and it most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of this. I'm not talking about a simple ssh loop, clusterssh, or even mcollective; I'd prefer something with a nice simple interface that lets me specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and have it build them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?

    Read the article

  • Connecting to same public IP from different locations yields different results

    - by DHall
    Since yesterday I've been unable to access one of my favorite time-wasting sites, boston.com. It starts to load but then gets redirected to pagesinxt or something like that. After some investigation, I've narrowed it down to an issue with cache.boston.com, but only from my work location. I found the IP (216.38.160.107), but even that doesn't work correctly from here at work. When I do

        telnet 216.38.160.107 80
        GET http://cache.boston.com/universal/css/hp_bgcom.css

    from another location, I get a nice long CSS file, as expected. From here, I get an error (trimmed for size):

        HTTP/1.1 400 Bad Request
        Your request could not be processed. Request could not be handled
        This could be caused by a misconfiguration, or possibly a malformed request.
        For assistance, contact your network support team.

    Is there any way I can troubleshoot this further on my end? Tracert doesn't tell me anything too useful:

        Tracing route to vwrpx1.ttn.xpc-mii.net [216.38.160.107] over a maximum of 30 hops:
          1     *        *        *     Request timed out.

    Since it's not really work-related, I don't really want to bring it up to our network team unless I know what's going on, or there's some risk to the network (e.g. malware or something).
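    One way to narrow it down from the work side is to make the same request with curl and an explicit Host header, which makes proxied vs. direct behavior easier to compare (the wording of that 400 page reads more like an intercepting proxy than the origin server). A sketch, with the URL path taken from the question:

        # ask the IP directly, telling it which virtual host we want
        curl -v -H "Host: cache.boston.com" http://216.38.160.107/universal/css/hp_bgcom.css -o NUL

        # compare with the normal name-based request
        curl -v http://cache.boston.com/universal/css/hp_bgcom.css -o NUL

    If the verbose output on the failing path shows Via, X-Cache or similar proxy headers that the working location doesn't, the difference is almost certainly a transparent proxy or filtering appliance on the work network rather than anything at boston.com.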

    Read the article

  • Http header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to the server. This is an intranet system which uses only IE, and the behavior is defined in the IE cache settings. We also have Windows authentication (NTLM) in IIS 7. I have two questions, please.

    Question #1) When the browser makes a request for a CSS file (leave the 401 response for now; that is how NTLM works), it requests it with an If-Modified-Since header. Why does it add this header? How can I configure that? Why doesn't it use the settings from IE and try to download the file each time, as I showed in the first picture?

    Question #2) The response (after the NTLM negotiation) was a 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: it actually tells the browser to load from its cache. But I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
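    For what it's worth, the conditional GET is expected behavior: the "check on every visit" IE setting makes the browser revalidate its cached copy rather than discard it, so it sends If-Modified-Since, and a 304 from the server simply means "your cached copy is still valid". If the goal really is a full download on every request, the server has to stop the file from being cached at all. A hedged IIS 7 sketch (site-level web.config; this disables caching for all static content, so apply it narrowly if at all):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- sends Cache-Control: no-cache so clients neither reuse nor revalidate -->
              <clientCache cacheControlMode="DisableCache" />
            </staticContent>
          </system.webServer>
        </configuration>

    In most cases the revalidation behavior is the one you actually want: a 304 costs one small round trip instead of re-downloading the CSS every time.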

    Read the article

  • How to embed/hardcode SRT subtitles into mp4 videos with VLC?

    - by Jens Bannmann
    I'm looking for a way to "burn in" or render/embed/hardcode subtitles (from an SRT file) into an MP4 video with VLC. But no matter what options I use, it never works properly. I get a file that plays video way too fast (audio is normal), or one that plays normally but actually does not have embedded subtitles. Also, with some options (like the ones below) it does not play in QuickTime, only in VLC. So the main question is: how can I make this work in VLC? Secondary questions are:

    - How do I decide which options I should set?
    - Which settings are best if I want to leave the file bitrate etc. the same as much as possible and only embed subtitles? It seems I cannot leave the fields empty or Video/Audio unchecked, so I guess I would first need to figure out the original audio and video bitrates.
    - What do the "Scale" and "Channels" options mean?

    ... none of which are answered within the VLC documentation. For example, this is one set of options I used in the "Advanced Open File…" dialog:

        Advanced Open File…
          myFileName.mp4
          [ ] Treat as a pipe rather than as a file
          [x] Load subtitles file: mySubtitleFileName.srt
          [ ] Play another media synchronously
          [x] Streaming/Saving
          Streaming and Transcoding Options
            [ ] Display the stream locally
            (o) File [outputFileName.mp4]   [ ] Dump raw input
            Encapsulation Method: (MPEG 4)
            Transcoding options
              [x] Video (mp4v)  Bitrate (kb/s) [256]  Scale [1]
              [x] Audio (mp3 )  Bitrate (kb/s) [128]  Channels [1]
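    A hedged alternative is to drive the same transcode from the command line, where the subtitle burn-in option (soverlay) is explicit; the GUI sometimes builds a sout chain without it, which would explain getting a file with no visible subtitles. A sketch, with the file names and bitrates as placeholders:

        vlc myFileName.mp4 --sub-file=mySubtitleFileName.srt \
            --sout="#transcode{vcodec=mp4v,vb=1024,acodec=mp3,ab=128,soverlay}:standard{access=file,mux=mp4,dst=outputFileName.mp4}" \
            vlc://quit

    soverlay tells the transcode module to render the loaded subtitle track onto the video frames; without it the subtitles stay a separate track, which an MP4 mux often drops. The vb/ab values should roughly match the source bitrates if the goal is to keep quality and file size similar, which also answers the "what do I put in the bitrate fields" question for the GUI dialog.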

    Read the article

  • Disk usage on IIS, PHP5, performance problems.

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem that I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the speed of loading a Joomla site. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I've tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM. Everything normal: 2 GB of RAM free, 3%-8% CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the PhysicalDisk counters quite high (above 10), especially the read queues. I've used Process Explorer to see which of those processes has the higher deltas, but everything seemed normal. As the problem is specially related to PHP pages, I've checked specific IIS counters (current connections, number of CGI requests and number of ISAPI requests):

        CGI requests:    3 to 7
        ISAPI requests:  5 to 9
        Connections:     90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know if you have a specific methodology for facing this kind of problem. Thanks a lot, as always.

    Read the article

  • Can I format a usb stick on windows xp, in HFS+ format, and make the usb stick mac os x bootable?

    - by user717236
    My Intel Mac OS X computer is corrupt and I feel, at this point, I need to perform a fresh install of the OS. It consistently and automatically logs out right after I log in. I tried logging in as root. I tried safe boot and it wouldn't load. Anyway, the point is I want to put the Mac OS X installer on a USB thumb drive and have it boot up on the Intel Mac OS X computer. Unfortunately, the computer is inaccessible, as I mentioned above. So, I have a Windows XP machine that I'm using to attempt to create a bootable USB thumb drive that's compatible with Mac OS X. I have tried TransMac, MacDrive, and paradox for windows, all of which proved unable to format the USB stick in HFS+. How do I know this? Well, even though TransMac reports that it's been formatted to HFS+, Computer Management in Windows says otherwise. I even put the installer on the USB drive after TransMac reportedly formatted it properly, and the Mac OS X computer didn't even recognize that a USB thumb drive was inserted (via pressing the Option key at boot-up time). I'm not sure what the problem is or how to actually format the drive. Can anybody offer any help? I would appreciate it. Thank you.

    Read the article

  • Improving performance by using an additional static file server

    - by Max
    Hello there, I'm planning a large website that includes many static assets (JS, CSS, images and thumbnails) in the generated pages. The website will use TYPO3 as CMS (it is a customer requirement). I guess I could seriously improve performance / page load times by using a two-server setup: one server where the main application (PHP) runs, and another where the static files sit, served by a trimmed-down version of Apache or something like lighttpd. Including e.g. JS or CSS files from the file server is of course no big deal: just use an absolute URL like http://static.example.com/js/main.js and be done with it. But: the website will have pages with MANY thumbnails of e.g. product images on them. So I see two problems when the main application tries to create a thumbnail of some image:

    - The original image, like products/some.jpg, is uploaded to the static file server and is therefore not on the same server as the PHP application which tries to create the thumbnail.
    - TYPO3 writes created thumbnails to a temp directory which is expected to be on the same server. Therefore, hundreds of thumbnails will be written to and served from that temp directory, which is on the same server as the main application; the static file server is in that case basically useless, since all thumbnails will be requested from the main application's server.

    So, my question is: how can I overcome these shortcomings? Is it possible to "symlink" some directories to another server? So that, for example, if PHP tries to open the original product image for thumbnail creation with imagecreate("products/some.jpg"), the products folder actually "points" to the products folder on the static image server? I know something like this can be done with .htaccess, but is it possible on the file system level?
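    At the file system level this is normally done with a network mount rather than a symlink: export the directory from the static file server and mount it on the application server, so both machines see the same products/ tree. A sketch using NFS, with host names and paths as placeholders:

        # on the static file server, /etc/exports:
        /var/www/static/products   app.example.com(ro,sync,no_subtree_check)

        # on the application server, one-off:
        mount -t nfs static.example.com:/var/www/static/products /var/www/site/products

        # or persistently in /etc/fstab:
        static.example.com:/var/www/static/products  /var/www/site/products  nfs  ro,defaults  0  0

    The same idea works with sshfs or a CIFS mount if NFS isn't available. For the thumbnail temp directory the mount usually goes the other direction (the application server exports, the static server mounts), so files written by TYPO3 are immediately servable from the static host.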

    Read the article

  • FastCGI on lighttpd no data received

    - by Michael Sh
    I have a simple FastCGI script:

        import com.fastcgi.FCGIInterface;

        public class TinyFCGI {
            public static void main(String args[]) {
                int count = 0;
                while (new FCGIInterface().FCGIaccept() >= 0) {
                    count++;
                    System.out.println("Content-type: text/html\n\n");
                    System.out.println("<html>");
                    System.out.println("<head><TITLE>FastCGI-Hello Java stdio</TITLE></head>");
                    System.out.println("<body>");
                    System.out.println("<H3>FastCGI Hello Java stdio</H3>");
                    System.out.println("request number " + count + " running on host "
                            + System.getProperty("SERVER_NAME"));
                    System.out.println("</body>");
                    System.out.println("</html>");
                }
            }
        }

    It is set up in lighttpd as:

        server.modules += ( "mod_fastcgi" )
        fastcgi.debug = 1
        fastcgi.server = ( "/cgi" =>
            ( "fastcgi" =>
                ( "port" => 8888,
                  "host" => "127.0.0.1",
                  "bin-path" => "/var/www/tiny.fcgi",
                  "min-procs" => 1,
                  "max-procs" => 1,
                  "check-local" => "disable"
                )
            )
        )

    In the log:

        2012-11-24 04:35:04: (mod_fastcgi.c.1367) --- fastcgi spawning local proc: /var/www/tiny.fcgi port: 54321 socket max-procs: 1
        2012-11-24 04:35:04: (mod_fastcgi.c.1391) --- fastcgi spawning port: 54321 socket current: 0 / 1
        2012-11-24 04:35:39: (mod_fastcgi.c.3061) got proc: pid: 0 socket: tcp:127.0.0.1:54321 load: 1

    The problem is that there is no data being sent from the server to the browser. Am I missing something here? (The class wrapper and import shown above are reconstructed; the original post gave only the main method body.)

    Read the article

  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to transfer webserver-log-like files periodically from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each and I get about 1 file per minute. I can accept about 5 minutes of lag between the files being written on Windows and them being available on the Linux machines. I am a bit confused between the various options here, as I am quite inexperienced in this kind of design:

    1. Write a service in C#.NET which will periodically archive, compress and send the files over to the Linux machines. These files are pretty compressible: WinRAR can convert 32 MB of them into a 1.2 MB archive, so that should solve the network transfer speed issue. But then how exactly do I transfer files to Linux? I could mount a Linux drive on the Windows server using Samba, or I could create an FTP server, or send the file serialized as a POST request. Which one would be good? Also, I have to minimize the load on the Windows server.

    2. Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?). I can then write the compressing and copying part on the Linux side itself.

    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and what other points should I be worried about? Thanks, Hari
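    On the Linux side, "the mount command" and "Samba" end up being largely the same thing: mount -t cifs speaks the SMB/CIFS protocol, it just does so in the kernel instead of through smbclient. A sketch of the two building blocks, with host names, credentials and paths as placeholders:

        # option 2: mount the Windows log share read-only (kernel CIFS client, i.e. the Samba protocol)
        mount -t cifs //winserver.example.com/weblogs /mnt/winlogs \
            -o username=loguser,password=secret,ro

        # alternative: have the Windows side push (or the Linux side pull) with rsync over SSH,
        # which compresses, resumes partial transfers and retries far more gracefully than a
        # hand-rolled POST upload
        rsync -avz --partial --timeout=120 loguser@winserver.example.com:archives/ /data/weblogs/incoming/

    Keeping a share mounted across an unstable intercontinental link tends to be fragile, so the rsync variant (using something like cwRsync or DeltaCopy on the Windows side, or the C# service dropping archives into a pickup folder) is usually the safer fit for this lag budget.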

    Read the article

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read the more complicated it becomes; some people say it's good enough, some say it isn't. No clear answers from what I've read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following:

    - We don't really need load balancing; what we need is a failover solution.
    - We are running a website based on LAMP on a VPS.
    - We need to make sure that the web server, MySQL and email are always accessible, if not 99% of the time.

    Basically here's my idea and my questions about it:

    Web server: We need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good; if not, what's the best option? And how exactly do you implement it? How do you make sure the files you upload/delete on Server A are also on Server B?

    MySQL: I've only read a brief intro to MySQL replication, and I assume that I can replicate Server A to Server B and vice versa on the fly, right? So in case Server A fails and Server B is now running, it will continue to work and replicate back to Server A when it becomes available. So in essence Server B is now the primary server, and will later fail over to Server A should a failure happen again.

    Email: If we are going to use DNS failover, relying on webmail or on emails stored on the server is probably not a good idea, right? Since some emails might be on Server A while some might be on Server B? I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place.

    Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
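    The MySQL part of this is usually set up as master-master replication, so either VPS can take writes while the other is down and they re-sync when both are back. A minimal my.cnf sketch under that assumption; the server IDs and offsets are the conventional two-node values and the log names are placeholders:

        # Server A (my.cnf, [mysqld] section)
        server-id                = 1
        log_bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # Server B (my.cnf, [mysqld] section)
        server-id                = 2
        log_bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 2

    Each server is then pointed at the other with CHANGE MASTER TO ... followed by START SLAVE. The auto_increment settings keep the two sides from generating colliding primary keys while both are accepting writes, which is the main gotcha with this layout.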

    Read the article

  • Dell XPS 1530 DVD-RW firmware problem

    - by josecortesp
    Hello everyone. I have had this XPS laptop for about a year and a half. About 2 months ago, with the warranty expired, I tried to run the Optiarc slot-load DVD-RW firmware update, and it said everything was okay. Then I restarted and the problem started: now, every single time I turn on my computer, it gets stuck at the BIOS POST until the drive sounds "ready", and then the computer starts normally. Exactly the same thing happens when it comes back from sleep. I'm pretty sure it's not a software issue, because I've tried Vista Home Premium 32-bit, Ubuntu (from 8.10+) 32 & 64, and W7 64. I've already tried running the firmware installer like a million times, in case it was a failed install, with no luck. I've also googled to see if someone else has the same problem, and again, no. The drive performs pretty okay once the system is on, but waiting for the drive to be ready every time is really annoying. The firmware I updated was this, and the drive is: K937C Assembly, DVD+/-RW, 8, SLOT, 1530, Sony NEC Optiarc Inc. I'll appreciate any help you can give me.

    Read the article

  • Hard Drive that was used before not detected or accessible in Windows 7

    - by Anders
    Hello SU: My PC crashed for some unknown reason, and I am still working out what caused that. However, I pulled my main (Windows) drive from my computer and hooked it up to my roommate's machine, and I was able to pull the data I needed off of it (i.e. the drive is good). Then I hooked his drives back up as they were; I had to turn off his machine and unplug his secondary drive to hook mine up. After booting his machine, there is no second drive available in Windows Explorer. I opened Device Manager to see if for some reason its drive letter got un-assigned, but there is nothing listed in there except his primary hard drive, his optical drive and one other optical drive, which I believe is the virtual drive Daemon Tools made. The drive shows up in the BIOS; however, after I restarted his machine again it sits at the "Entering setup....." screen at the load window. The only thing I can think of that may have messed with stuff is that I used this tutorial to create a bootable XP install on a USB drive to install XP on my machine (I am 99% certain that the optical drive in my PC is broken), and maybe it used the other hard drive's letter for the USB drive for some reason, which doesn't make much sense since it recognized it as a different drive letter before I started the process. It is possible that it used the secondary hard drive's letter for its work, but once again I am uncertain. Where should I go from here? He is bound to wake up within the next several hours and will probably flip a lid if I cannot get some sort of handle on this. Any and all help is greatly appreciated. PS: Anyone who helps me get this situated has a beer or two on me, as long as you are in the greater metro Detroit area, or don't mind traveling a bit!

    Read the article

  • Serve a specific set of error pages for different subdirectories

    - by navitronic
    I am currently trying to set up two different sets of error documents for separate folders within a website. I have two folders within the root of the site:

        demo/
        live/

    Any requests that return 404s or 403s within the demo folder need to load one set of pages for the Apache ErrorDocuments, e.g.

        ErrorDocument 404 /statuses/demo-404.html
        ErrorDocument 403 /statuses/demo-403.html

    and live needs to go to similarly named files:

        ErrorDocument 404 /statuses/live-404.html
        ErrorDocument 403 /statuses/live-403.html

    So far I have tried placing an .htaccess file in both directories with the ErrorDocument directives pointing to the specific files. The 404 works fine and references the correct page. However, the 403s do not work and revert to the server default when trying to access forbidden folders within the demo directory; the logs show the following:

        [Wed Jun 16 04:47:44 2010] [crit] [client 115.64.131.144] (13)Permission denied: /home/abstract/public_html/demo/xxx/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is this correct? Would Apache revert to the default because it is trying to look for the .htaccess in a folder it doesn't have permission in? Why wouldn't it work its way back through the folder tree? Can I make it do this?
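    One hedged workaround, if the main server config is editable: define the per-folder error documents in <Directory> blocks in the vhost rather than in .htaccess. The (13)Permission denied above happens while Apache is trying to read the per-directory file, but directives that live in the server config are merged regardless, so the custom 403 page should still be served. A sketch, with the document root taken from the log line:

        <Directory /home/abstract/public_html/demo>
            ErrorDocument 404 /statuses/demo-404.html
            ErrorDocument 403 /statuses/demo-403.html
        </Directory>

        <Directory /home/abstract/public_html/live>
            ErrorDocument 404 /statuses/live-404.html
            ErrorDocument 403 /statuses/live-403.html
        </Directory>

    If only .htaccess is available, the alternative is to keep the forbidden subfolders readable by the Apache user and deny access with "Deny from all" instead of file system permissions, so Apache can still parse the .htaccess that carries the ErrorDocument directives.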

    Read the article

  • IIS 7 and ASP.NET State Service Configuration

    - by Shawn
    We have two web servers, load balanced, and we wanted to get away from sticky sessions for obvious reasons. Our attempted approach is to use the ASP.NET State Service on one of the boxes to store the session state for both. I realize that it's best to have a server dedicated to storing sessions, but we don't have the resources for that. I've followed these instructions to no avail. The session still isn't being shared between the two servers, and I'm not receiving any errors. I have the same machine key on both servers, and I've set the application ID to a unique value that matches between the two servers. Any suggestions on how I can troubleshoot this issue?

    Update: I turned on the session state service on my local machine and pointed both servers at my local machine's IP address, and it worked as expected: the session was shared between both servers. This leads me to believe that the problem might be that I'm not using a standalone server for my state service. Perhaps the problem is that I am using the IP address 127.0.0.1 on one server and a different IP address on the other server. Unfortunately, when I try to use the network IP address instead of localhost, the connection doesn't seem to work from the host server. Any insight into whether my suspicions are correct would be appreciated.
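    For what it's worth, the symptom in the update (works via the loopback address, refuses the same box's network IP) matches the State Service's default of rejecting non-local connections. A sketch of the two pieces involved; the IP and port are placeholders:

        <!-- web.config on BOTH servers: point at the box running the state service -->
        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=192.168.1.10:42424"
                        timeout="20" />
        </system.web>

    and, on the server that hosts the service, the registry values that permit remote clients (restart the "ASP.NET State Service" afterwards):

        HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters
            AllowRemoteConnection (DWORD) = 1
            Port                  (DWORD) = 42424    (the default)

    Both servers also still need identical machineKey validation/decryption keys and an aligned application path or application ID, which the question already covers.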

    Read the article

  • Find slow network nodes between two data centers

    - by 2called-chaos
    I've got a problem syncing a large amount of data between two data centers. Both machines have a gigabit connection and are not fully occupied, but the fastest I am able to get is somewhere between 6 and 10 Mbit, which is not acceptable! Yesterday I ran some traceroutes which indicated huge load on a LEVEL3 router, but the problem has existed for weeks now and the high response time is gone (20 ms instead of 300 ms). How can I trace this to find the actual slow node? I've thought about a traceroute with bigger packets, but will this work? In addition, this problem might not be related to one of our servers, as there are much higher transmission rates to other servers and clients. Actually, office -> server is faster than server <-> server! Any idea is appreciated ;)

    Update: We actually use rsync over ssh to copy the files. As encryption tends to add bottlenecks, I tried a plain HTTP request, but unfortunately it is just as slow. We have an SLA with one of the data centers. They said they already tried to change the routing, because they say the problem is related to a cheap network that the traffic gets routed through. It is true that it routes through a "cheapnet", but only in the other direction: our direction goes through LEVEL3 and the other way goes through Lambdanet (which they say is not a good network). If I got it right (I'm intermediate at networking), they simulated a longer path to force routing through LEVEL3, and they announce LEVEL3 in the AS path. I basically want to know whether they're right or just trying to shirk their responsibility. The thing is that the problem exists in both directions (while on different routes), so I think it is the responsibility of our hoster. And honestly, I don't believe that there is a DC-to-DC connection which can only handle 600 kB/s - 1.5 MB/s for weeks! The question is how to detect WHERE this bottleneck is.
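    Two tools that usually pin this down better than one-off traceroutes: mtr, which keeps per-hop loss and latency statistics over many cycles, and iperf, which measures raw TCP throughput independent of rsync/ssh/HTTP. A sketch, with host names as placeholders; run it in both directions, since the routes differ:

        # from DC1 towards DC2: 300 probe cycles, wide report with per-hop loss%
        mtr --report --report-wide --report-cycles 300 dc2.example.com

        # raw TCP throughput: server side in DC2 ...
        iperf -s

        # ... and client side in DC1, 4 parallel streams for 60 seconds
        iperf -c dc2.example.com -P 4 -t 60

    If a single stream is slow but several parallel streams together fill the pipe, the limit is more likely latency combined with a small TCP window (or per-flow policing on the "cheapnet" path) than a saturated router, and that is useful evidence to bring to the hoster's SLA discussion.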

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions at http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2. I adapted the instructions for the appclient script on Linux by adding the lines

        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH

    to the appclient shell script. This however is not working. On running the application client (using GlassFish's Java Web Start), I get the error:

        WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList"

    Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) There Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place." I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • Setup: Eclipse in Ubuntu with Apache2 and Subversion

    - by Ricalsin
    Trying to set up Eclipse. I am running Ubuntu 10.10 (Maverick), Apache 2.2.16, Subversion 1.6.12. The Eclipse Help > About > Installed Software dialog says:

        Eclipse Platform 3.5.2
        Subclipse 1.0.0
        Version Control with Subversion 1.1.1

    The Subclipse wiki I followed is here. I have installed the libsvn-java package as discussed and added the line -Djava.library.path=/usr/lib/jni to the eclipse.ini file. I checked the Eclipse Help > About > Configuration settings and both of these lines are listed:

        eclipse.vmargs=-Djava.library.path=/usr/lib/jni
        java.library.path=/usr/lib/jni

    I checked that the files are in those directories. Still, when I open Preferences > Team > SVN, an error dialog shows:

        Failed to load JavaHL Library.
        These are the errors that were encountered:
        no libsvnjavahl.1 in java.library.path
        Incompatible JavaHL library loaded 1.3.x or later required

    I followed the "Testing JavaHL libraries" troubleshooting section at the bottom of the wiki: I downloaded the tarball and ran it in a folder on my desktop with no problems. Then I followed the instructions and placed that file INSIDE the path (/usr/lib/jni/testJavaHL) and ran it from there. There are 50 tests performed, and each one of them came back with this same error (posting only one for brevity):

        50) testCommitRevprops(org.tigris.subversion.javahl.BasicTests)java.io.FileNotFoundException: /usr/lib/jni/testJavaHL/local_tmp/greek_files/iota (No such file or directory)
            at java.io.FileOutputStream.open(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:209)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:160)
            at org.tigris.subversion.javahl.WC.materialize(WC.java:70)
            at org.tigris.subversion.javahl.SVNTests.buildGreekFiles(SVNTests.java:303)
            at org.tigris.subversion.javahl.SVNTests.setUp(SVNTests.java:222)
            at org.tigris.subversion.javahl.RunTests.main(RunTests.java:111)
        FAILURES!!!
        Tests run: 50,  Failures: 0,  Errors: 50

    Any ideas as to how/why the "local_tmp/greek_files/iota" path is appended to the directory? I assume that's my problem. I'm also having a problem with the new repository location: the directory location of my svn repository is one level above the home directory, which is prepended to whatever I place in the dialog box, resulting in this error:

        svn: '/home/ricalsin/file:/home/svn' does not exist

    Thank you for any help.

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from a 500 ms average to a 15 s average. I have run every monitoring tool I know:

    - iostat says my disks are 0.03% utilised
    - mysqltuner.pl says I am OK (output below)
    - top says my processor is 99% idle, with load average: 0.20, 0.31, 0.33
    - memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
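    Since the only [!!] lines are the query cache prunes and the table cache hit rate, the matching my.cnf adjustments (per mysqltuner's own "variables to adjust") would look roughly like this; the exact values are assumptions to be raised gradually, and MySQL needs a restart afterwards:

        # /etc/mysql/my.cnf, [mysqld] section
        query_cache_size = 64M     # was 16M; reduces the half-million prunes per day
        table_cache      = 1024    # was 256; raise gradually and watch the open file limit

    That said, a database this lightly loaded rarely explains a 500 ms to 15 s jump on its own. Enabling the slow query log (slow_query_log = 1, long_query_time = 1) and profiling the new application release (e.g. Tomcat thread dumps taken during a slow request) are more likely to point at the real regression.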

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/, at least for the Cloudera Distribution of Hadoop, CDH). Can we define multiple JobTrackers in mapred-site.xml? Something like:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt01.mydomain.not:8021</value>
          </property>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt02.mydomain.not:8021</value>
          </property>
          ...
        </configuration>

    Or is there some other allowed syntax for this? What are the implications of doing this? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers coordinate their scheduling across the TT nodes based only on the gossip information from the TTs, or would they need to talk to one another? Is this documented anywhere?

    Read the article
