Search Results

Search found 6598 results on 264 pages for 'opcode cache'.

Page 170/264

  • So confused by these CPU specs, can someone please help me out? Thanks!

    - by Kevin
    Intel® Core™ i7-640M (2.8~3.46GHz, 35W) w/4MB Cache - 2 Cores, 4 Threads - 2.5 GT/s. I'm buying a new laptop, which I have not done in 6 years, so I am not familiar with any of these CPU specs. This was the highest Intel option for this laptop, so I assume it is reasonably fast, but I'd like to learn what these specs mean. Any help would be greatly appreciated; I am not really a computer guy but would love to understand what I am buying. Thanks!

    Read the article

  • IIS doesn't respond to 127.0.0.1 (external IP works fine)

    - by Jordan
    I have an AWS web server - call it box.company.com. It's running IIS and if I visit http://box.company.com in a web browser (from any machine, including box.company.com), it responds correctly with our site. However, if I visit localhost/ or 127.0.0.1/ when I'm logged into box.company.com, I get a "couldn't connect to host" message. The hosts file has only one entry - the standard "127.0.0.1 localhost" line. Pinging 127.0.0.1 works fine. Pinging localhost correctly resolves to 127.0.0.1 and works fine. I've tried restarting IIS and restarting the DNS Cache. I had this problem once before, and restarting the server fixed it, but I'd like to know what's going on in case this happens again in the future.
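
    Note (editor's sketch, assuming IIS 7+ on HTTP.sys): a "couldn't connect to host" on loopback while the external IP works is consistent with the HTTP.sys IP listen list having been restricted to the external address. Two quick checks from a command prompt on box.company.com:

        :: is anything listening on 127.0.0.1:80 or 0.0.0.0:80?
        netstat -an | findstr :80
        :: an empty IP listen list means "listen on all addresses";
        :: a non-empty list that omits 127.0.0.1 excludes loopback
        netsh http show iplisten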

    Read the article

  • Optimizing MySQL

    - by Thoman
    Hello. My website runs on a dedicated server: Intel(R) Xeon(R) CPU E5620 (8 cores), 12GB RAM, CentOS 32-bit with DirectAdmin, 80GB SAS disk, PHP-CGI. The server hosts a single website running WordPress 2.9.2 (plus a cache plugin). The database is only 600MB with about 100 users online, but the site runs very slowly. Please help me with my my.cnf config file:

        [mysqld]
        user=mysql
        key_buffer=128M
        set-variable = max_connections=1000
        socket = /var/lib/mysql/mysql.sock
        key_buffer =32M
        table_cache = 1024
        open_files_limit = 16344
        join_buffer_size = 8M
        read_buffer_size = 8M
        sort_buffer_size = 8M
        tmp_table_size=512M
        read_rnd_buffer_size=8M
        max_heap_table_size=256M
        #myisam_sort_buffer_size=256M
        thread_cache_size=8
        thread_cache=32
        query_cache_type=1
        query_cache_limit=1024M
        query_cache_size=1024M
        thread_concurrency = 16
        wait_timeout = 10
        connect_timeout = 10
        interactive_timeout = 10
        long_query_time=1
        log-slow-queries = /var/log/mysqlslowqueries.log
        max_allowed_packet=32M
        skip-innodb

        [myisamchk]
        key_buffer = 64M
        sort_buffer = 64M
        read_buffer = 16M
        write_buffer = 16M

        [isamchk]
        key_buffer=64M
        sort_buffer=64M
        read_buffer=16M
        write_buffer=16M

    And Apache
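
    Note (editor's sketch): query_cache_size=1024M is far above the usual recommendation, so before anything else it is worth verifying the query cache is helping rather than thrashing. Assuming MySQL 5.0/5.1 with the query cache enabled:

        mysql> SHOW STATUS LIKE 'Qcache%';
        -- compare Qcache_hits with Qcache_inserts for the hit rate;
        -- a growing Qcache_lowmem_prunes means entries are being evicted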

    Read the article

  • Nginx issue with two web nodes

    - by HTF
    I'm running a WordPress website with Nginx and Memcached, with simple DNS round-robin balancing: A records pointing to both web servers. I've noticed the following entries in the access logs of both web servers:

        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000
        192.168.1.10 example.com - [07/Jun/2012:22:43:58 +0100] "-" 400 0 "-" "-" - 0.000

    I've configured the W3 Total Cache plugin on each WordPress installation to point at the loopback address (127.0.0.1:11211). Is this happening because each web server is trying to access content that is cached on the other one? Should I instead list the IPs of both web servers in the W3 plugin on each site (192.168.1.:11211, 192.168.1.2:11211)? I'm not sure whether this is related to Memcached at all or to some configuration issue on the server itself. Regards
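
    Note (editor's sketch, not from the thread): if the goal is a single shared object cache across both nodes, each install can be pointed at both memcached instances; W3 Total Cache's memcached server field accepts a comma-separated host:port list (worth verifying for your plugin version; the addresses below are illustrative):

        192.168.1.1:11211,192.168.1.2:11211

    Bear in mind memcached does no replication: listing both servers hashes keys across the pair, so both nodes see the same cache.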

    Read the article

  • How does the performance of pure Nginx compare to cpNginx?

    - by jb510
    There is now a cPanel plugin, cpNginx, that makes it fairly easy to set up Nginx as a reverse proxy on a cPanel/Apache server. I've been simultaneously interested in setting up my first unmanaged VPS and my first Nginx server, and as a masochist figured why not combine the two. I'm wondering, however, if it's worth setting up a pure Nginx server versus trying out cpNginx on Apache. My goal is solely to host WordPress sites, and while what I've read raves about Nginx's exceptional ability to serve static files, at least as a reverse proxy, I am unclear whether there is a substantial benefit to running pure Nginx with eAccelerator over cpNginx on Apache for dynamic sites. Regardless, I'll be running W3TC on all sites to cache content, but am still interested in whether there are big CPU reductions running PHP scripts under pure Nginx rather than cpNginx.
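
    Note (editor's sketch of the arrangement cpNginx automates, with hypothetical paths and assuming Apache moved to port 8080): Nginx answers on port 80, serves static files itself, and proxies everything else to Apache:

        server {
            listen 80;
            server_name example.com;

            # static files served directly by nginx
            location ~* \.(css|js|png|jpe?g|gif|ico)$ {
                root /home/user/public_html;
                expires 30d;
            }

            # everything else (PHP) goes to Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    A "pure" Nginx setup replaces the proxy_pass with a fastcgi_pass to PHP-FPM; the CPU saving comes mainly from dropping Apache's per-request overhead, not from PHP itself, so an opcode cache such as eAccelerator helps equally in both setups.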

    Read the article

  • Do I really need to reboot for AD changes to be applied?

    - by stimms
    Every time I request a permission change, the IT group at my company instructs me to wait 20 minutes and reboot the computer. I cannot believe that in this day and age you still need to reboot the computer to clear whatever cache stores the permissions locally; it feels like something out of the NT 4 days. Do you actually still need to reboot? Is a logout/login sufficient? And does it really take a long time (20 minutes) for the changes to propagate through the AD tree?
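
    Note (editor's sketch, assuming the change is a group-membership change for a domain user): group memberships are stamped into the access token when it is built at logon, so logging off and back on is normally sufficient; a reboot is only needed when the computer account's own memberships change. Kerberos tickets and Group Policy can be refreshed without either:

        :: discard the current user's cached Kerberos tickets (Windows 7 and later)
        klist purge
        :: re-apply Group Policy without waiting for the refresh interval
        gpupdate /force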

    Read the article

  • Java Applets fail to launch in Windows 7 64bit

    - by Steve
    Hi, can you help? I'm having great trouble launching Java applets under 64-bit Windows 7. I have the 32-bit version of Java installed (as recommended, for use with 32-bit browsers). When I click on a Java applet to open it, the Java logo shows with a circular progress bar, but the process goes no further, just the progress bar going round and round, with no error messages. The computer is a few months old and Java has never worked properly on it. I've tried both IE9 and Chrome and followed the suggested fixes: uninstall then reinstall, check certain browser settings (all were in order), check the Java control panel settings (again, all was in order), delete the internet cache, and try 32-bit and 64-bit Java side by side. All to no avail, and I'm baffled. Is there anything else to try, or do I have to accept a computer without Java (very inconvenient!)? Thanks

    Read the article

  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test many times and never hit this error before; I have been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                         total       used       free     shared    buffers     cached
        Mem:           3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:         147548    3715220
        Swap:          3905528          0    3905528
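
    Note (editor's): the free snapshot above shows 3.7GB free, which suggests it was taken after the OOM killer had already fired and the connections were torn down; at --rate=20000 each open connection holds kernel and user memory, so usage can spike very quickly. Watching memory during the run makes the spike visible:

        vmstat 1     # one-second samples of free memory, swap and paging while httperf runs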

    Read the article

  • Is a memory upgrade a viable option to fix performance issues? [closed]

    - by ratchet freak
    I'm currently seeing my PC getting bogged down by Firefox 11.0 alone, with only one hundred tabs open, resulting in memory use of over 530MB, a VM size of over 800MB, and an insane number of page faults (easily reaching 100 million over the course of the day). The page-fault delta during normal operation easily reaches 7k, with peaks to 15k and sometimes over 20k. This leads to a real deterioration in response time when switching, opening and closing tabs, opening menus, typing, etc. My question is: am I right in assuming that plugging in more RAM (either adding 2x1GB or replacing the existing RAM with 2x2GB or 4x1GB) will solve this problem? My specs: Windows XP Home Edition SP3 (32-bit), Intel Core Duo 2.4 GHz, 2x512MB RAM 800MHz DDR2 (dual channel), 4MB unified cache, 320GB HDD, Intel G33 (X3100) onboard graphics (no graphics card, but a PCI Express x16 slot is available)

    Read the article

  • Boot loop continues [closed]

    - by user1894750
    I am facing a boot loop. The underlying Linux layer works flawlessly, but Zygote and system_server do not come up. I can still use ADB and logcat. ps shows that both the Zygote and system_server processes are there, but the boot animation runs forever. I have: 1. wiped the data and cache partitions; 2. restored the system partition. There is no permissions problem. I think there is a problem with Zygote and system_server. My device: Karbonn A9+, dual-core Snapdragon 1.2 GHz, 386MB RAM, OS: ICS 4.0.4. Any suggestions?
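
    Note (editor's sketch): since ADB works, capturing error-level log output during the hang is the natural next step; system_server usually logs the reason it stalls:

        adb logcat *:E                                  # only error-level messages
        adb logcat | grep -E "Zygote|SystemServer"      # follow the two processes in question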

    Read the article

  • How many XMLHttpRequests are too many for a PC to handle?

    - by Uri
    I'm running MediaWiki under Apache on a regular PC running Vista (I don't know the exact specs, but I'm guessing at least a Core 2 Duo at 2 GHz, with a broadband connection of at least 500 kb/s, probably 1 Mb/s). I want to use the MediaWiki API to send a lot of requests to this server. Most of the time the requests will be sent over the LAN (but sometimes over the internet). I'm talking thousands of requests every few seconds in the worst case. (A lot of these requests may repeat themselves, so I guess some sort of cache would help.) Will the server handle this, or do I need a stronger/dedicated computer? (I'm not looking for a specific yes/no, I just want to get an idea of what computer configuration will support how many requests per second.) Thanks
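
    Note (editor's sketch): the cheapest way to answer this is to measure rather than estimate. ApacheBench ships with Apache; the URI below is a hypothetical MediaWiki API query:

        ab -n 10000 -c 100 "http://localhost/w/api.php?action=query&meta=siteinfo&format=json"
        # -n = total requests, -c = concurrency; read "Requests per second" in the output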

    Read the article

  • Inconsistent slow DB connect times

    - by mcryan
    I have an app that shows significantly increased load times every 5 minutes. We use New Relic, and I've been able to identify that every slow request is due to our DB connect functions. I've looked through our frontend and DB servers and ensured there are no cronjobs running every 5 minutes, yet the increase comes every 5 minutes without fail. I feel like I've tried everything I can to isolate this, but I'm not having any luck: it is not coming from a cronjob, there is no cache expiring every 5 minutes, etc. What else could I check, and does anyone know of anything that could cause this sort of behavior? For what it's worth, our stack includes Apache, PHP, MySQL, Varnish and Memcache. SQL and memcache run on dedicated servers, and we have a number of frontend servers behind a load balancer, each running Apache and Varnish. [Graph: spike every 5 minutes] [Graph: it's always db connect, in red, taking up all of the time]
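
    Note (editor's sketch; host and credentials are placeholders): a 5-minute cycle is also the classic signature of a 300-second DNS TTL expiring, so it is worth timing bare connections both by hostname and by IP from a frontend server and seeing which one shows the spikes:

        while true; do
          (time mysql -h db.example.com -u app -pSECRET -e 'SELECT 1' >/dev/null) 2>&1 | grep real
          sleep 1
        done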

    Read the article

  • Squid always gives TCP_MISS as a reverse proxy

    - by JaakL
    I installed the latest squid3 in front of Apache as a reverse proxy. The problem is that it always gives TCP_MISS; in fact I have not yet found a single TCP_HIT message in the log file, even though most of the content is static. The relevant config values for cache_dir and refresh_pattern are the defaults, and the directory /var/spool/squid3 exists and has some files/folders. I have 100+GB of free storage, but reconfigure gives the warning "WARNING cache_mem is larger than total disk cache space!", which does not make any sense to me. I have googled a lot and seen others with similar problems, but none of the fixes has helped.
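
    Note (editor's sketch, assuming the stock Debian/Ubuntu squid3 package): that warning is a hint that no cache_dir is active, so squid is caching in memory only; hits additionally require the backend to send cacheable headers (Cache-Control/Expires). Two lines in squid.conf to start from:

        cache_dir ufs /var/spool/squid3 10000 16 256
        refresh_pattern -i \.(gif|png|jpe?g|ico|css|js)$ 1440 50% 10080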

    Read the article

  • Storage sizing for virtual machines

    - by njo
    I am currently doing research to determine the consolidation ratio my company could expect if we started using a virtualization platform. I keep running into a dead end when researching how to translate observed performance (weeks of perfmon data) into HDD array requirements for a virtualization server. I am familiar with the concept of IOPS, but it seems an overly simplistic measurement that fails to take into account cache, write combining, etc. Is there a seminal work on storage array performance analysis that I'm missing? This seems like an area where hearsay and "black magic" have taken over from cold, hard fact.

    Read the article

  • Auto-detect proxy settings for the network

    - by user42891
    Firefox's network settings (Tools > Options > Advanced > Network > Settings) include an option to auto-detect proxy settings for the network; how do I enable this? Currently the proxy is configured manually, and it is possible for users to bypass it and use the internet directly. We use a variety of browsers (Firefox, IE, Chrome, Safari, Opera) on Windows XP, Windows 2003 and Windows Vista machines. How do I set this up so that end users cannot manipulate their browser settings to bypass security? I have configured a Squid cache proxy server for this purpose.
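
    Note (editor's sketch): "Auto-detect proxy settings" is WPAD. Browsers look for DHCP option 252 and/or a DNS host named wpad in the local domain, then fetch http://wpad.<domain>/wpad.dat, a PAC file served with MIME type application/x-ns-proxy-autoconfig. A minimal wpad.dat pointing everything at the Squid box (hostname illustrative):

        function FindProxyForURL(url, host) {
            // no DIRECT fallback, so traffic fails closed if the proxy is down
            return "PROXY squid.example.com:3128";
        }

    Auto-detection alone does not stop users from switching back to a direct connection; that usually requires blocking outbound ports 80/443 at the firewall for everything except the proxy itself.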

    Read the article

  • Uploading files greater than 1MB = connection resets

    - by Legit
    I'm using nginx on the frontend as a "proxy cache" and Apache on the backend. I've set my PHP settings to the following:

        error_log = /var/www/site1/php_error.log
        error_reporting = 22527
        file_uploads = On
        log_errors = On
        max_execution_time = 0
        max_file_uploads = 20
        max_input_time = -1
        memory_limit = 512M
        post_max_size = 0
        upload_max_filesize = 1000M

    What's the problem? Uploading files smaller than 1MB succeeds, but for anything bigger Google Chrome outputs: Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I checked for the PHP error log file, but it doesn't exist in the directory. I also checked /var/log/httpd/error_log, but there were no upload-related problems. I don't know what else might have caused this, so I'm reaching out for your helping hand. Thanks!
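
    Note (editor's): the 1MB threshold matches nginx's default client_max_body_size of 1m, which is enforced by the nginx proxy before the request ever reaches Apache or PHP; exceeding it aborts the request, which Chrome can surface as a connection reset. A sketch of the fix, sized to match upload_max_filesize above:

        # http, server or location block of nginx.conf; then: nginx -s reload
        client_max_body_size 1000m;

    It is also safer to set post_max_size explicitly larger than upload_max_filesize rather than relying on 0.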

    Read the article

  • Can I rent exclusive time on a powerful server running linux? [closed]

    - by Mark Borgerding
    My company is involved in a proposal that requires speed estimates of our software on a server with the latest & greatest processors. This is not the first time we've been in this situation. The servers themselves are too expensive to buy a new one every time, so we end up extrapolating from what we have. There are so many variables: processor generation & speed, memory speed, memory channels, cache configurations; it makes extrapolation difficult and error-prone. Is there a business that rents time on the newest servers? At least part of the time we'd need exclusive access to an otherwise quiescent system either via ssh shell access or unattended batch jobs. I am not looking for general cloud computing services. I don't need much time on the server, but it needs to be exclusive. And the server needs to be pretty cutting edge for a solid basis of estimate.

    Read the article

  • VMware Player Unity with Ubuntu 11.10 guest on W7 host - empty applications start menu

    - by lexalizer
    I am trying to run Ubuntu 11.10 as a guest on a Windows 7 host. When I enter Unity mode, the menu for the guest OS next to the Windows Start menu is empty. I have searched the web for a fix, but there doesn't seem to be anything that works. I have restarted the guest OS several times, and I am running VMware Player as an admin, but the guest start menu in Unity mode is still empty. I have tried clearing the VM cache, and I have installed all the Ubuntu updates and the VMware Tools. Has anyone had this problem?

    Read the article

  • What free video player for Windows, if any, lets me set a custom buffer for streaming over Wi-Fi?

    - by user268883
    I currently use Daum PotPlayer, but I have problems streaming video to my laptop unless it is plugged into the network. An easy fix (so I thought) would be to find a player where I could adjust the cache/buffer so that, say, 5+ minutes is read ahead to the local hard drive and then played from there. I cannot work out how to do this in any player; I have VLC, MPC-HC and PotPlayer installed. All I want to do is increase the file buffer so that X amount of the network file is copied to the local drive and then played, so that all stuttering stops. How do I do this?
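
    Note (editor's sketch, assuming a recent VLC): VLC exposes its buffers as command-line options measured in milliseconds, though it buffers into RAM rather than onto disk (none of the players listed spool to the local drive). A 5-minute buffer would look like this, with a hypothetical SMB path:

        vlc --file-caching=300000 --network-caching=300000 "\\nas\videos\movie.mkv"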

    Read the article

  • Has anyone seen an HTTP 500 error when HTTPS traffic going through a Pound proxy is forwarded to an HTTP page?

    - by scientastic
    We have Varnish as our load balancer and reverse proxy cache for normal HTTP traffic. For HTTPS traffic, we use Pound to unwrap the SSL and forward to Varnish, which then forwards to the back-end servers. This is used for our checkout process, to encrypt credit card info in transit. However, on the last stage of checkout, users always get an HTTP 500 (Internal Server) error. By every test I've tried, it doesn't seem to be due to our back-end app server. Does anyone know how that transition works (the transition back from HTTPS to HTTP, and the interaction between Pound and Varnish) and why it might cause 500 errors?
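
    Note (editor's sketch; ports and paths are illustrative, and the AddHeader directive assumes Pound 2.6+): a common cause of breakage in this chain is the application not knowing the original request was HTTPS, since Pound hands Varnish plain HTTP. The usual approach is to tag the request on the way through and have the app trust that header:

        # pound.cfg
        ListenHTTPS
            Address 0.0.0.0
            Port    443
            Cert    "/etc/pound/site.pem"
            AddHeader "X-Forwarded-Proto: https"
            Service
                BackEnd
                    Address 127.0.0.1
                    Port    6081        # varnish default listen port
                End
            End
        End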

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I've got a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I've heard that splitting the images across different sub-domains helps parallel downloads, but what is the actual implementation? While it is easy to split by type into sub-domains like static, image, etc., should I make, say, 10 sub-domains (image1, image2, ...) to load 100 images? Or is there a cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure whether that is possible.) I would be grateful for any further comments. Thanks a lot!
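
    Note (editor's sketch; hostnames are illustrative): browsers open only a handful of concurrent connections per hostname (around 6 in this era), so 2-4 shards capture most of the benefit and 10 mostly adds DNS lookups. With S3's virtual-hosted style, each shard is a bucket whose name matches the hostname:

        ; DNS zone sketch
        img0.example.com.  CNAME  img0.example.com.s3.amazonaws.com.
        img1.example.com.  CNAME  img1.example.com.s3.amazonaws.com.

    Assign each image to a shard deterministically (for example, a hash of the filename modulo the shard count) so a given image always loads from the same hostname and stays cacheable.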

    Read the article

  • Missing Desktop Icon Labels in Windows 7

    - by Buzzedword
    Hey guys, a big problem that has been bothering me to no end: my desktop icons have been stripped of their labels, and nothing seems to recover them. To be clear, when I attempt to rename an icon the text shows up, and when I view my desktop in an Explorer window all text is preserved. A system restore to a stock state does not recover them. No changes had been made to the computer (no installs or downloads) for two weeks prior to this error. I rebuilt the icon cache; still no response. Does anybody know what could be causing this problem? Screenshot below. Thanks in advance. OS: Windows 7 Enterprise 64-bit. Profile: local, non-roaming. Image: http://img688.imageshack.us/img688/5152/capturemr.png

    Read the article

  • How do I enable safe asynchronous writes with NFS?

    - by Joe Swanson
    The NFSv3 documentation talks a lot about the concept of "safe asynchronous writes" (last bullet of A1): http://nfs.sourceforge.net/#section_a This is NOT referring to the sync/async option in the server's exports file (the async option in the exports file is NOT safe). As I understand it, a safe asynchronous write is a hybrid of the sync/async export options: it allows the server to reply before flushing to stable storage, but the client will not remove the write request from its cache until it has received confirmation that the data has been committed to stable storage (and it also detects if the server loses power or reboots). I believe this option is set on the client side, but I have not come across any documentation that shows how to do this. Any ideas?
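
    Note (editor's): in NFSv3 this is not a mount option but the protocol's normal UNSTABLE WRITE + COMMIT sequence, which Linux clients use automatically for buffered writes; the server's write verifier is what lets the client detect a reboot and resend. Whether it is happening can be read off the client-side RPC counters:

        # client RPC statistics; a nonzero "commit" count means unstable
        # WRITEs are being flushed with COMMIT
        nfsstat -c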

    Read the article

  • nginx serves broken characters

    - by Andrew123321
    I have nginx 1.2.0-1 on Debian 6.0.5. I start with an empty file (let's say test.css). I add "A" and a newline: works fine. "B" and a newline: works fine... "D": works fine. I add "E" and I get three broken characters ("???"). When the file has more content, nginx will usually "cache" it and then output the cached file minus some characters, plus some of these broken characters. Do you have any idea how to deal with this? Thank you
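
    Note (editor's sketch): correct bytes up to a point and garbage past it, right after small appends, is the classic symptom of sendfile serving a stale file size, commonly seen on virtualized or shared filesystems (e.g. VirtualBox/VMware shared folders). A quick test on the default Debian config:

        # /etc/nginx/nginx.conf, http block; then: service nginx reload
        sendfile off;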

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: 1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's. 2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured to use it. 3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
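
    Note (editor's sketch for CentOS/Ubuntu; the hdparm flags apply to IDE/SATA drives and may be ignored by some controllers): standard tools cover all three checks:

        blockdev --getbsz /dev/sda    # block size the kernel uses for I/O to the device
        hdparm -d /dev/sda            # whether DMA is enabled on an IDE disk (-d1 enables)
        hdparm -W /dev/sda            # on-drive write-cache state; -W0 disables it so
                                      # "written" really means on the platter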

    Read the article
