Search Results

Search found 10033 results on 402 pages for 'execution speed'.

Page 297/402

  • apache2 + mod_fastcgi + suexec + php5.2 = unstable on high load...

    - by redguy..pl
    I am hosting several (~30) different sites on one server with apache2 + fastcgi + suexec + php5. The sites have different loads and different script execution times (some process a request for 5-7 seconds, some in under 1 second). Sometimes, when a single site receives a very high load (all PHP instances of that site are created and in use), the whole Apache server hangs. Apache (worker MPM) creates new processes up to the upper limit. It looks like it starts to queue ALL new requests for EVERY site, not only the one that has the high load and quickly hits its process limits. A restart of Apache solves the problem. My config:

        FastCgiConfig -singleThreshold 1 -multiThreshold 10 -listen-queue-depth 30 -maxProcesses 80 -maxClassProcesses 12 -idle-timeout 30 -pass-header HTTP_AUTHORIZATION -pass-header If-Modified-Since -pass-header If-None-Match

    (Earlier I had the default -listen-queue-depth of 100, but it didn't change anything.) Any suggestions? Another question: how is this listen queue implemented? Is it one queue for the whole of Apache, or a separate queue for every defined PHP application (suexec site)? I would like to achieve the following: when one site receives high load and its queue is full, the server bounces further requests, but only for that one site; the other sites should keep working properly.
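
    A hedged idea for the per-site isolation asked about above: mod_fastcgi manages dynamic applications collectively under FastCgiConfig, but a single busy site can be declared as a static application with its own process count and its own (short) listen queue, so that its overload is bounced locally. A minimal sketch for that site's vhost; the path and numbers are assumptions:

        # pin the busy site to a fixed pool with a small queue of its own
        FastCgiServer /var/www/busysite/cgi-bin/php.fcgi -processes 12 -listen-queue-depth 10 -idle-timeout 30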

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of file access times to analyse a build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file   # the access time is updated

        # waiting a few minutes...
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file   # the access time has not been updated :(

    I suppose the file is cached by Linux in free memory, and only this copy is accessed on subsequent reads, for speed reasons. A solution would be to discard the buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems to only flush the write buffers, not the read cache. Maybe it is due to some custom kernel configuration in my distro (Fedora 9)? Or am I missing something here? Is there a way to force this access-time refresh? Note also that I do not want to simulate writes on my entire file tree: since I am using a makefile-based build system, that would cause the entire project to be rebuilt.
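
    A hedged aside that may explain the symptom rather than the cache: if the filesystem is mounted with the relatime option, atime is only updated when it is older than mtime/ctime, which produces exactly this "updates once, then sticks" behaviour no matter what is dropped from the page cache. A quick check and workaround (mount option names depend on the kernel version, so treat this as a sketch):

        mount | grep ' / '              # look for 'relatime' in the mount options
        mount -o remount,norelatime /
        # and to really drop clean read pages from the cache, sync first:
        sync
        echo 3 > /proc/sys/vm/drop_caches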

  • Is it possible to detect nearby Wi-Fi enabled devices, not necessarily on the same network? [closed]

    - by Sky
    First question on StackExchange ever; I hope I got the right board. I'm trying to create a device (either from a standard AP or some other unconventional means) that will be able to detect nearby Wi-Fi enabled devices. For example, if a cellular phone (an iPhone, for instance) were carried into the secured area, its MAC address would be logged. A cellular phone is a good example because it's the most common threat that should be detected. Some important points:
    - The detection can be either active or passive; it doesn't matter.
    - The detected device might be connected to a different network, or might not be connected to anything at all. I assume most cellular phones actively probe when not connected, but I'm not sure.
    - It is important not only to identify the breach, but also to identify the device (MAC address).
    - Conventional hardware is only optional.
    - Detection range should be at least 6 meters (20 feet).
    - Handling one device at a time is good enough.
    - Speed of detection is important; under 5 seconds is ideal.
    So my question is: is this even possible? If so, what can I use to make this a reality? Thank you for reading!
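
    For reference, a minimal sketch of the passive approach on Linux, assuming a wireless card whose driver supports monitor mode (the interface names are assumptions). Phones that are not associated with any network typically send 802.11 probe requests, and those frames carry the sender's MAC address:

        iw dev wlan0 interface add mon0 type monitor
        ip link set mon0 up
        # log management frames of subtype probe-request, including MAC headers (-e)
        tcpdump -i mon0 -e -s 256 type mgt subtype probe-req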

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server whose website is loading very slowly, but unfortunately my client has only provided me with FTP access. That means I can upload PHP scripts, but can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk-speed performance through PHP on a generic server, what is the best way to do this? There are already tools like https://github.com/raphaelm/php-benchmark or https://github.com/InfinitySoft/php-benchmark, but I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment, it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written up a script to do this already. I know I have, but that was before GitHub, when there wasn't yet a handy place to post scraps of code like this. Originally posted at http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php, but it was recommended that I re-post it here.
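
    One low-tech sketch that needs nothing beyond FTP: upload a trivial static page and a small PHP page that runs a representative query, then time both from outside with curl to separate network, PHP and database latency. The hostname and page names here are assumptions:

        curl -o /dev/null -s -w 'connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://example.com/static.html
        curl -o /dev/null -s -w 'connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://example.com/dbtest.php

    If the static page is fast and the database-backed page is slow, the problem is in PHP/MySQL rather than in the network or the web server.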

  • Basic multicast network performance problems

    - by davedavedave
    I've been using mpong from 29West's mtools package to get some basic idea of multicast latency across various Cisco switches: a 1Gb 2960G, a 10Gb 4900M and a 10Gb Nexus N5548P. The 1Gb switch is just for comparison. I have the following results for ~400 runs of mpong on each switch (sending 65536 "ping"-like messages to a receiver, which then sends them back, all over multicast). Numbers are latencies measured in microseconds:

        Switch          Average     StdDev     Min        Max
        2960 (1Gb)      109.68463   0.092816   109.4328   109.9464
        4900M (10Gb)    705.52359   1.607976   703.7693   722.1514
        NX 5548 (10Gb)   58.563774  0.328242    57.77603   59.32207

    The result for the 4900M is very surprising. I've tried unicast ping, and I see the 4900 has only ~10us higher latency than the N5548P (average 73us vs 64us). iperf (with no attempt to tune it) shows both 10Gb switches give me 9.4Gbps line speed. The two machines are connected to the same switch, and we're not doing any multicast routing. The OS is RHEL 6; the 10Gb NICs are HP 10GbE PCI-E G2 dual-port NICs (I believe they are rebranded Mellanox cards). The 4900 switch is used in a project with tight access control, so I'm waiting for approval before I can access it and check the config. The other two I have full access to configure. I've looked at the Cisco document [1] detailing differences between NX-OS and IOS w.r.t. multicast, so I've got some ideas to try out, but this isn't an area where I have much expertise. Does anyone have any idea what I should be looking at once I get access to the switch?

    [1] http://docwiki.cisco.com/wiki/Cisco_NX-OS/IOS_Multicast_Comparison
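
    A hedged starting point for when access to the 4900M is granted: on Catalyst IOS these read-only commands show whether IGMP snooping knows about the test groups or the frames are being flooded, which is a common source of odd multicast latency numbers:

        show ip igmp snooping
        show ip igmp snooping groups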

  • Two large, linked Excel files take 30 minutes to save, except in VMware environment

    - by Gerald L
    I support some tax consultants who love to use Excel when they should probably be using Access. Anyway, they have created two Excel files, A and B. File B has cells linked to file A. File A is 27 MB and file B is 16 MB. One worksheet has roughly 1 million rows, and another worksheet does a whole bunch of SUMIFs over those 1 million rows. Not the best idea, but whatever. Both Excel files open and recalculate within a reasonable amount of time (1-2 minutes). For files that large, this is acceptable. Here is the problem: once you change a cell and save file B, it takes a solid 30 minutes to save, with the processors going at full speed. I've tried this on 6 different machines, all running Windows XP SP3 with Office 2007 SP2 and all patches. The specs vary from one machine with 512 MB of RAM to a machine with 4 GB of RAM and quad processors. Same result every time. Here is the clincher: if I do the same save operation in a VMware virtual machine, the file gets saved in 1 minute. I've tried this with my ESX servers at the office, Fusion on my Mac at home, and VMware Workstation at the office. It does not matter how much RAM the virtual machine has; it saves in about 1 minute every time. Does anybody have any idea why this is happening and how to fix it?

  • How can I debug user mode driver failures in Windows 8

    - by Tom
    I have a 32 GB SD card. Whenever I insert this card into my newly upgraded Windows 8 laptop, the OS stops responding normally: Metro apps won't work, the system may or may not log in, and desktop apps may or may not be able to do things. When I remove the card and restart, all is fine; as soon as I put the card back in, the system starts misbehaving again. I've run Windows Update, so I have the latest drivers from Microsoft. This does not occur with the 8 GB cards I have; unfortunately I only have one 32 GB card, so I can't test with others. From examining the system event log, I've determined this is happening due to a user-mode driver failure. How can I best debug this issue from here? How can I figure out which driver this is related to? Will there be a Dr. Watson crash dump somewhere? Details:

        Provider: Microsoft-Windows-DriverFrameworks-UserMode {2E35AAEB-857F-4BEB-A418-2E6C0E54D988}
        EventID: 10110 (Version 1, Level 1, Task 64, Opcode 0, Keywords 0x2000000000000000)
        TimeCreated: 2012-10-29T00:51:57.532718300Z
        EventRecordID: 40417
        Execution: ProcessID 1056, ThreadID 3796
        Channel: System
        Computer: thebrain
        Security: UserID S-1-5-18
        UserData / UMDFHostProblem: lifetime {811E3DC4-FBC6-420B-ABCC-AD7505A36F3B}, Problem code 3, detectedBy 2, ExitCode 3, Operation code 259, Message 72448, Status 4294967295

    Edit 1: I tried using DebugView from Sysinternals (http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx), but the output was not especially helpful. Then I tried connecting WinDbg (http://msdn.microsoft.com/en-US/windows/hardware/hh852363, instructions at http://msdn.microsoft.com/en-US/library/windows/hardware/ff554716(v=vs.85).aspx) to WUDFHost.exe (the process that seems to host user-mode drivers) to see if it could catch the error. That didn't help much either: it didn't catch any exceptions as I'd hoped (which would at least have pointed me to the cause of the crash). Here's the stack of one of the threads:
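
    One hedged next step: the UMDF framework also writes verbose traces to a dedicated operational log channel that is disabled by default; enabling it before reinserting the card may name the driver package at fault. A sketch, from an elevated command prompt (the channel name matches the provider logged in the event above):

        wevtutil sl Microsoft-Windows-DriverFrameworks-UserMode/Operational /e:true
        rem reproduce the failure, then dump the newest entries as text:
        wevtutil qe Microsoft-Windows-DriverFrameworks-UserMode/Operational /c:20 /rd:true /f:text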

  • Overclock Failed

    - by John
    This morning when I tried to turn on my computer, the monitor would not display anything. The computer was on, but there was no UI on the monitor and no beep. I turned the computer off and back on again. This time, the computer was on for about 2 seconds, automatically turned itself off for about 2 seconds, and then automatically turned itself back on. This happened twice without the monitor displaying any UI. On the third try, the computer did the same thing, but when it turned itself back on, the monitor displayed this:

        American Megatrends
        Asus P8P67 LE ACPI BIOS Revision 1011
        CPU: Intel Core i5 2500K CPU @ 3.30GHz, Speed: 3300MHz
        Total Memory: 4096 MB (DDR3-1333)
        USB Devices Total: 0 Drive, 1 Keyboard, 1 Mouse, 2 Hubs
        Detected ATA/ATAPI Devices:
        SATA6G_2: Hitachi HDS821010CLA 332
        SATA3G_1: Corsair Force 3 SSD
        SATA3G_2: HL-DT-ST BD-RE UH12CS28
        Overclocking Failed! Please enter setup to reconfigure your system.
        Press F1 to Run Setup.

    After I pressed F1, it went into the Asus EFI BIOS utility. I couldn't do anything in there, so I just exited. After that, the computer turned itself off and on again, and I was able to use the PC as usual. This has never happened to this computer before, and the day before nothing seemed wrong: just regular use and some gaming.

  • Why is the link between my switch and my router always negotiating half-duplex mode?

    - by Massimo
    I have a Cisco 2950 switch which has one of its ports connected to an Internet router provided by my ISP; I have no access to the router configuration, but I manage the switch. If I leave all switch ports with their default setup (auto-negotiation of speed and duplex mode), this link always connects at 100 Mbit/s, but in half-duplex mode. I've tried replacing the cable, and also moving the link to another switch port: the result is always the same. A different device connected to the same port (or to any switch port, really) shows no problem at all. It could be guessed that someone configured the router to only connect in half-duplex mode... BUT, here's the catch: if I manually force the switch port to full-duplex mode (duplex full in the interface configuration), the link goes up, stays up and is completely stable. So:
    - The connection is not forced to half-duplex mode by the router; otherwise it would not connect at all when I force the switch end to full-duplex.
    - There is no actual link problem; otherwise the full-duplex connection would not come up, or would at least show some errors.
    But if I leave the port free to auto-negotiate, it always connects in half-duplex mode. Why?
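
    A hedged reading of these symptoms: this is the classic signature of the far end being hard-coded to 100/full. A device with a forced speed/duplex setting does not participate in auto-negotiation, so the auto-negotiating end senses the speed electrically but, per the standard, falls back to half-duplex; forcing the switch port to match then yields a clean link, exactly as observed. The usual workaround on the 2950, with the interface number as an assumption:

        interface FastEthernet0/1
         speed 100
         duplex full
        ! verify afterwards, from exec mode:
        show interfaces FastEthernet0/1 status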

  • Embedding a WMV file on the web via URL in a PowerPoint presentation

    - by Dave
    I've got a situation where I want to distribute a PowerPoint presentation to several people, and I want to be able to embed several large videos in it by linking to a URL, for the following specific reasons:
    - The videos are highly confidential, and I would like to be able to delete them at some later date, while still letting recipients see them in the presentation while they are online.
    - I want to send the presentation via email (so it should be small), and put the links on a server with a faster upload speed.
    - Maybe I'd like to change the video at some point without changing the presentation.
    One option that addresses the first point is to hook up a webcam and let them watch a video stream from the office, but our upload rate is too slow for this to be viable. I've tried embedding a video and giving PowerPoint the URL. It seems to work initially, because the first frame appears in my slideshow; however, when I play the slideshow, nothing happens. I looked at the network traffic on my computer, and nothing was being downloaded from the remote server. Any suggestions on how to make this work, or at least how to satisfy the criteria listed above, would be great!

  • 32 cores (all physical) at 2.2 GHz, or 12 cores (6 physical) at 3.0 GHz?

    - by Tejaswi Rana
    I am working on a multithreaded application (a Forex trading app built in C#) and had the client upgrade from a 12-core 3.0 GHz (Intel) machine to a 32-core 2.2 GHz (AMD) machine. The PassMark benchmark results were significantly higher for multicore integer, floating-point and other calculations, while for single-core calculations it was a bit slower than comparable machines (ones with configs similar to the 12-core one). It also comes with 64 GB of RAM (4 times as much as the other machine) and a much faster SSD. So, after configuring and running the application on that machine, not only did it not perform as well, it was significantly slower: we're talking 30 seconds to 1 minute slower on an app that usually completes processing within 5-20 seconds. The application uses TPL's MaxDegreeOfParallelism, which I've tried setting to the number of cores and also to half of that. I've also tried running single-threaded, and without setting any limit on parallel threading. While the hardware may have some issues, I am wondering if the single-core processing speed is the problem. I can overclock to 3.0 GHz, but is that even a good idea?

    Server info (AMD): http://www.passmark.com/forum/showthread.php?4013-AMD-Dual-6272-performance-is-60-lower-than-benchmarks (it seems that benchmark was wrong to start with, officially). The other machine is an Intel i7 3930K. The OS is the same on both: Windows 7 Professional 64-bit.

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an OpenVPN server with one network interface, and VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently; the lines are in tc batch syntax):

        qdisc del dev eth0 root
        qdisc add dev eth0 root handle 1: htb default 12
        class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
        # vpn
        class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
        # local net
        class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
        # other
        class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
        filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
        filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
        qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
        qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
        qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited its rate), but with OpenVPN I can't seem to get more than about 100 kbit/s. The Internet connection speed is about 3 Mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
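
    A hedged note on that final question: the u32 "match ip sport" selector compares bytes at the transport-header offset, so it matches UDP source ports just as it does TCP ones; to make the intent explicit, the filter can additionally match the UDP protocol number. In the same batch syntax, with the class ids from above:

        # match only UDP (protocol 17) traffic with source port 1194
        filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip protocol 17 0xff match ip sport 1194 0xffff flowid 1:10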

  • Rsync between Mac OS X Server and Linux CentOS works manually, but not when run from cron

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it and off it goes, runs and completes. Knowing that works, I set out to fully automate it via cron. First I create an SSH authorized key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server, place it in the rsync user's .ssh folder, and rename it to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I then make sure that the authorized_keys file is chmod 600 and the .ssh folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    I give the shell file execute permissions and then add the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    I check my crontab log file after the above should have run, and I get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should, but when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time through the Mac keychain system rather than by typing into the console window. With the keychain I can set it to save the password so that it doesn't ask again, but I'm sure this saved password isn't picked up when run from cron. This is where I assume rsync in cron is failing: it needs a password to connect, but I thought the whole idea of making the SSH keys was to avoid using a password. Have I missed a step or done something wrong here? Thanks, Scott
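
    A hedged observation about the symptoms above: the password entered and confirmed during ssh-keygen becomes a passphrase on the private key, and a passphrase-protected key prompts (interactively, or via the keychain) every time it is used; cron has no way to supply it. A minimal sketch of the usual fix, reusing the paths from the post:

        # a passphrase-less key never prompts; -N "" sets an empty passphrase
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
        # verify a fully non-interactive login, the way cron would run it:
        ssh -i /Users/admin/Documents/Backup/rsync-key -o BatchMode=yes philosophy@example.com true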

  • VMware Workstation reboots 32-bit host when starting 64-bit guest

    - by Powerman
    I'm trying to start a 64-bit guest (Mac OS X and Windows 7) on a 32-bit host (Hardened Gentoo Linux, kernel 2.6.28-hardened-r9) using VMware Workstation (6.5.3.185404 and 7.0.1.227600). If VT-x is disabled in the BIOS, VMware refuses to start the 64-bit guest (as expected). If VT-x is enabled in the BIOS, VMware starts the guest without complaining, but then, in about a second (I suppose as soon as the guest tries to switch to 64-bit mode), my host reboots. Actually, it's more like a reset: the normal reboot procedure is skipped and the BIOS POST starts immediately. My hardware is a Core 2 Duo 6600 on an ASUS P5B-Deluxe with the latest stable BIOS, 1101. I power-cycled the system, then enabled Vanderpool in the BIOS. My CPU doesn't support Trusted Execution Technology, and there is no way to disable it in the BIOS. I've rebooted several times since then, sometimes with a power cycle, and made sure Vanderpool is enabled in the BIOS. I've also run the VMware-guest64check-5.5.0-18463 tool, and it reports "This host is capable of running a 64-bit guest operating system under this VMware product." About a year ago I tried disabling hardened in the kernel to make sure this isn't caused by PaX/GrSecurity, but that didn't help. I have not checked 32-bit guests with VT-x enabled yet, but without VT-x they work fine. ASUS provides "beta" BIOS updates, but according to their descriptions these updates don't fix this issue, so I'm not sure it's a good idea to try them. My best guess now is a motherboard/BIOS bug. Any ideas?

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (especially those with giant profiles) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would let data be redundant across multiple servers and would speed up access to the files under heavy load, while letting us dynamically grow storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe my understanding of it is wrong, though. Does anyone else use it, or care to share how feasible this is?
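
    For reference, a minimal sketch of the kind of setup described above, with made-up hostnames and brick paths; the "replica 2" volume is what provides the redundancy, and capacity grows by adding further bricks:

        gluster peer probe server2
        gluster volume create homes replica 2 transport tcp server1:/export/homes server2:/export/homes
        gluster volume start homes
        # clients (or a Samba gateway re-exporting to CIFS) then mount it:
        mount -t glusterfs server1:/homes /mnt/homes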

  • httpd.conf / EasyApache / WHM / CentOS

    - by jessie
    First let me explain how I got into this situation. I run a streaming video site; videos are about 100-250 MB in size, and at any given time there are about 500 people on the site. So I guess that would make them static content. Recently my site started getting really slow, and the only way to fix it temporarily was to restart Apache. There was no change in traffic that could have caused this, and my site is not being attacked. My hosting company recommended implementing mpm_mod and suPHP, which they did using EasyApache in WHM. After that, everything was working fine, but a little slow. I researched around, and to my understanding mpm will do that while being more stable. I was told that installing FastCGI would speed things up just enough, but that made everything worse: the site is slow and times out. I used WHM to remove FastCGI, but it's still the same; it seems like nothing I do now changes anything. I even did a rollback on the httpd.conf file, but that didn't work. I'm not sure how to fix this, and my hosting network guy won't be able to touch the problem until Tuesday. I have root access.

  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests used speedtest.net):
    - Standard SSH tunnel proxy: 0.8 Mbit/s download and 0.1-0.2 Mbit/s upload.
    - dante-server proxy: 1.3 Mbit/s download and 0.4-0.5 Mbit/s upload.
    I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know the SSH tunnel has to do encryption and so on, which is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS itself I have seen download speeds of 25 MB/s (that's about 200 Mbit/s) and upload speeds of at least 5 MB/s (I haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes, but I need to find a good tutorial, as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
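
    A hedged way to separate proxy overhead from link quality (hostnames are placeholders): benchmark the raw path with iperf, then the same transfer through each proxy, and try a cheaper SSH cipher to see how much of the cost is encryption:

        # raw TCP throughput to the VPS (run "iperf -s" on the VPS first)
        iperf -c vps.example.com
        # SSH SOCKS tunnel with compression and a lighter cipher, for comparison
        ssh -D 1080 -C -c aes128-ctr user@vps.example.com

    If dante (which does no encryption) still falls far short of the raw iperf number, the bottleneck is in the proxy or its TCP settings rather than in the VPS link.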

  • Unable to connect to CopSSH when running Windows service, works when running sshd directly

    - by Joe Enos
    I've been using CopSSH (which uses OpenSSH and Cygwin, so I don't know which of the three is the problem) as my SSH server at home on Windows 7 Ultimate 32-bit. I have used it for about a year with no real problems, other than it sometimes taking 2 or 3 connection attempts to get through; it has always worked within a few attempts. A few days ago, it just stopped working. The Windows service is still running, and I've rebooted, restarted the service, etc. with no change. On the client (using PuTTY on Windows), I get the message "Software caused connection abort". On the server, my event viewer registers:

        fatal: Write failed: Socket operation on non-socket

    I finally got it working, but only by executing sshd.exe directly from the command line on the server, with no special flags or options, just straight execution; when I connect remotely, it goes through. I do have firewall and anti-virus software which appears to be configured properly, and the fact that things work when running sshd.exe directly also indicates that the firewall is fine. I thought the service and the executable did exactly the same thing, but apparently there's some difference. Does anyone have any ideas on where I should look for the problem? If I can't find anything, I suppose I can write a Windows service or scheduled task that fires off sshd.exe directly and ensures it stays running, but that's a last resort, since it's just wrapping something that should already work.
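
    A hedged debugging step: run a second sshd instance in the foreground with debug output on a spare port (the port number is arbitrary), then compare its behaviour with what the service does; since the "non-socket" error appears only in the service context, the difference (service account, inherited handles, Cygwin mounts) is likely the culprit:

        # on the server, from the CopSSH bin directory: verbose, single-connection debug mode
        sshd -d -d -d -p 2222
        # from the client, with matching verbosity
        ssh -vvv -p 2222 user@server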

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750-1500 GB. CPU speed and amount of memory are secondary; the server will host large amounts of media files via HTTP and FTP, and the basic task is to help people exchange media files. In Germany, there are some good offers, like the "Root Server EQ6" at www.hetzner.de. That company provides high-quality support, and their plans are very cost-effective: the plan mentioned above costs about $90 per month and includes two 1500 GB SATA-II HDDs (software RAID 1). In the US, I found (amongst others) Go Daddy and Rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1000 GB hard drives for about $180 per month, already twice as much as Hetzner; however, I found some blog and forum entries complaining about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very upscale: their dedicated servers are customizable and start at $419, thus about 4.5 times as much as Hetzner. Can anybody recommend a solution or plan comparable to Hetzner's? Or are prices for dedicated servers in the US in general much higher than in Germany? Regards, Martin

  • Are applications optimized for XenApp?

    - by damitamit
    Our IT dept are about to deploy virtualized apps using Citrix XenApp. One of these apps will be Dynamics AX 4.0 SP2, an ERP client (which I develop on). They have supposedly reached a roadblock because an external 'Dynamics AX consultant' has told our IT dept that Dynamics AX 4 will not work optimally on Citrix and will run very slowly, because it is not optimized for Citrix. We have it running in a test environment now and it seems OK. They have been told that the only 'solution' is to upgrade to Dynamics AX 2009, where this problem has supposedly been fixed (not a small task for my team!). When I heard this, I was quite surprised. From my brief knowledge of Citrix, I thought it would be application-independent. How does Citrix app virtualization work, such that one particular app would work better than others on Citrix? Wouldn't the speed of the virtualized app just depend on the resources and network connection the Citrix server has? FYI, Dynamics AX is a 3-tier client/server system, so the client will be accessing an AOS application server, which then accesses the database. Please enlighten me :)

  • Why is this APC installation failing so badly?

    - by Matt
    I have multiple instances of APC running on my server with similar configurations (albeit with different cache sizes). However, one of the instances is performing extremely poorly (100% cache fragmentation, high miss rate), and I have no idea why. The runtime settings I'm using are as follows (pretty much out of the box):

        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.shm_segments 1
        apc.shm_size 10M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1

    APC is version 3.1.6; PHP is 5.3.3-1ubuntu9.5. I've tried restarting Apache multiple times, so this isn't a freak occurrence. The instance with problems is simply running WordPress with a few plugins installed. All other instances (~4) on the server are running perfectly fine with almost 100% hit rates and 0% fragmentation; for example, one instance is holding a website built using the Symfony framework. Any help would be much appreciated; I haven't had much experience with APC and was hoping for it to be an out-of-the-box speed boost ;).
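
    A hedged guess based on the numbers above: a 10M shared-memory segment is small for WordPress plus plugins, and with apc.ttl 0 a full cache cannot expire entries gradually, a combination that commonly shows up as exactly this full-fragmentation, high-miss pattern. A sketch of the php.ini changes that would test the theory (the values are assumptions to be tuned):

        apc.shm_size = 64M
        apc.ttl = 7200
        apc.user_ttl = 7200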

  • how to design pound -> varnish -> jboss for ha + loadbalancing

    - by andreash
    Hello, I'm planning a new infrastructure for our web application. We have two JBoss AS 5 servers running in a cluster; session state will be replicated via JBoss Cache. In front of that, there should be some cache to speed up delivery of static elements. However, most of the traffic to our app will be via HTTPS. So far, I had been thinking of two Varnish caches in front of the JBoss servers, each configured to load-balance to the two JBoss instances via round-robin. Since Varnish doesn't handle HTTPS, there would need to be two Pound proxies in front of the Varnishes, dealing with the HTTPS. The two Pounds would be made highly available with Heartbeat/Linux-HA. The traffic to www.example.com would then go through our firewall, from there to the virtual IP of the Pounds, from there to the Varnishes, and from there to the JBoss servers.
    Question 1: Does this make sense? Or is it overly complicated, and the same goal can be reached with simpler methods?
    Question 2: If my layout is fine, how do I configure the Pound -> Varnish step? Should I (a) make the Varnish service highly available through Heartbeat/Linux-HA as well and direct traffic from Pound to the virtual IP of the Varnishes, or should I rather (b) configure two independent Varnishes and use load balancing in Pound to address the different Varnishes?
    Thanks a lot for your insight! Andreas.
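
    For reference, a minimal sketch of the Varnish half of the design, i.e. the round-robin balancing toward the two JBoss instances described above; Varnish 2.x VCL syntax, with the backend addresses as assumptions:

        backend jboss1 { .host = "10.0.0.11"; .port = "8080"; }
        backend jboss2 { .host = "10.0.0.12"; .port = "8080"; }

        director jbosscluster round-robin {
            { .backend = jboss1; }
            { .backend = jboss2; }
        }

        sub vcl_recv {
            set req.backend = jbosscluster;
        }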

  • JBoss 5 on AIX 5.3

    - by jess
    I am a complete newbie to AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. The configuration and system settings are:

    AIX system configuration:
    - OS level: 5.3.9.0 (oslevel -g)
    - Physical memory: 24 GB (svmon -G)
    - Page space: 4 GB (lsps -s)
    - Processors: 3 cores, PowerPC_POWER6, clock speed 4704 MHz (prtconf | grep Processor)
    - Java version: JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

    JBoss configuration:
    - JBoss 5.1 / JBoss ESB 4.11
    - HornetQ messaging with consumer flow control
    - Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior: JBoss freezes without any error logs, and the server log stops without any further trace. We are also unable to get a thread dump (kill -3); none is generated at that point, although kill -3 works under normal circumstances. The only option available to us was to restart the JBoss server, and it seems all the messages that were in the queues during the freeze are processed after restarting. We tried tweaking some HornetQ settings in JBoss, thinking the issue was there (see "HornetQ stuck by default"), but we haven't had any luck and haven't been able to isolate the issue at all. We are looking at tools like nmon for monitoring this, but have no clue whether that is good enough. Please provide some pointers for investigating this issue. Thanks
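
    A hedged starting point on the monitoring side: nmon can record system metrics to a file continuously, so the freeze window can be examined after the fact, and the IBM JVM can be told where to write its javacore files, which makes the failed kill -3 dumps easier to rule in or out. The flags are standard nmon/IBM JDK options; the intervals are assumptions:

        # one snapshot every 30 s, 2880 snapshots (one day), written to a file
        nmon -f -s 30 -c 2880
        # direct javacore files from kill -3 to a known, writable location
        export IBM_JAVACOREDIR=/tmp/javacores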

  • Windows 7 network performance tuning for LAN

    - by Hubert Kario
    I want to tune the Windows 7 TCP stack for speed in a LAN environment. A bit of background info: I've got a Citrix XenServer set up with Windows 2008 R2, Windows 7 and Debian Lenny (with the Citrix kernel); the Windows machines have XenServer Tools installed, and the iperf server process runs on a different host, also Debian Lenny. The servers are otherwise idle, and tests were repeated a few times to confirm the results. While testing with iperf, 2008 R2 can achieve around 600-700 Mbit/s with no tuning whatsoever, but I can't find any guide or set of parameters that will make Windows 7 achieve anything over 150 Mbit/s without changing the TCP window size via iperf's -w parameter. I tried setting netsh autotuning to disabled, experimental, normal and highlyrestricted: no change. Changing congestionprovider doesn't do anything, and neither do rss and chimney. Setting all the available settings to the same values as on the Windows 2008 R2 host doesn't help. To summarize:

        Windows 2008 R2, default settings:      600-700 Mbit/s
        Debian, default settings:               600 Mbit/s
        Windows 7, default settings:            120 Mbit/s
        Windows 7, default, iperf -w 65536:     400-500 Mbit/s

    While I blame the missing 400 Mbit/s of performance on the crappy Realtek NIC in the XenServer host (I can do ~980 Mbit/s from my laptop to the iperf server), that doesn't explain why Windows 7 can't achieve good performance without manually tuning the window size at the application level. So, how do I tune Windows 7?
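
    For completeness, a sketch of how the knobs mentioned above are inspected and set on Windows 7, from an elevated prompt; the values shown are the ones being experimented with in the post, not a recommendation:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=normal
        netsh interface tcp set global congestionprovider=ctcp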

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine target, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro with a direct Ethernet cable, and an external hard drive (a laptop drive) is connected to the Raspberry Pi via USB. However, backups are pretty slow: Activity Monitor claims the network is transferring a very steady 5 Mb/s, where my Time Capsule manages up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi itself reports (via top) that its CPU is only half-used, split roughly equally between afpd, usb-storage and jbd2/sda1-8; so the Pi's processing power does not seem to be the problem here. To me, this looks like some kind of bottleneck that maxes out at 5 Mb/s, potentially making my backups run below their potential speed. To the best of my knowledge, the culprit might be the AFP daemon, the USB bus or the external hard drive. So, my question is: how can I identify the true culprit, and what can I do about it?
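
    A hedged way to test each leg in isolation (paths and hostnames are assumptions). Note also that on the original Raspberry Pi the Ethernet port itself hangs off the single USB 2.0 bus, so disk and network traffic share that bus and its bandwidth:

        # raw disk write speed on the Pi, bypassing AFP (fdatasync makes dd wait for the disk)
        dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=256 conv=fdatasync
        # raw network speed: run "iperf -s" on the Pi, then from the Mac:
        iperf -c raspberrypi.local

    Whichever measurement lands near 5 Mb/s marks the bottleneck; if both come in well above it, the overhead is in afpd itself.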
