Search Results

Search found 448 results on 18 pages for 'memcached'.

Page 8/18

  • Consistent hashing with memcache

    - by Industrial
    Hi everyone, I am setting up a new web app that will, on the client side, use a pool of multiple memcached servers for reliability and performance. Would it be wise for us to utilize something like Flexihash to better control how data is distributed (and replicated) between the memcache servers? Reference: http://github.com/pda/flexihash Thanks!

    Read the article
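
    For the question above, a minimal sketch of client-side key distribution with Flexihash (the server names are placeholders, and note that Flexihash only maps each key to a server; it does not replicate data between servers by itself):

        <?php
        // Minimal sketch, assuming Flexihash is installed and loadable.
        require_once 'flexihash.php';   // or the Composer autoloader; newer
                                        // releases namespace it as Flexihash\Flexihash

        $hash = new Flexihash();
        $hash->addTargets(array(
            'memcache1.example.com:11211',
            'memcache2.example.com:11211',
            'memcache3.example.com:11211',
        ));

        // Every client that shares this target list maps a key to the same
        // server, and removing one target only remaps the keys that lived on it.
        $target = $hash->lookup('user:42:profile');
        list($host, $port) = explode(':', $target);

        $memcache = new Memcache();
        $memcache->connect($host, (int) $port);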

  • Issues Installing PHP Memcache module

    - by smith
    I am trying to install memcache on my VPS. When I type $ pecl install memcache I get this error:

        checking whether the C compiler works... configure: error: cannot run C compiled programs.
        If you meant to cross compile, use `--host'. See `config.log' for more details.
        ERROR: `/root/tmp/pear/memcache/configure --enable-memcache-session=yes' failed

    Any ideas what the issue could be?

    Read the article

  • How much more memcache memory do I need to get a 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running that has a 90% hit ratio. How can I estimate how much more memory it needs to get to a 95% hit ratio? Edit: This question was blocked, but I do not think it is impossible to answer. After all, anyone who has used a caching system HAS answered this question, most likely with trial & error & luck. I can look at my usage patterns. I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?

    Read the article
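
    For the hit-ratio question above, a minimal measurement sketch (host and port are assumptions): poll the server's own counters before and after each memory change, so the estimate is based on observed hit ratio, memory use and evictions rather than guesswork.

        <?php
        // Minimal sketch: read memcached's counters via the PHP memcache extension.
        $m = new Memcache();
        $m->connect('127.0.0.1', 11211);
        $s = $m->getStats();

        $hits   = $s['get_hits'];
        $misses = $s['get_misses'];
        printf("hit ratio: %.2f%%\n", 100 * $hits / ($hits + $misses));
        printf("memory:    %d of %d MB used\n",
               $s['bytes'] / 1048576, $s['limit_maxbytes'] / 1048576);
        printf("evictions: %d\n", $s['evictions']);

        // Rough rule of thumb: if evictions keep climbing while the hit ratio
        // stalls, the working set does not fit and more memory is worth testing;
        // if evictions are already zero, extra memory alone will not raise it.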

  • Debian tuning for increasing read/write buffer.

    - by Claudiu
    Is there a way to modify Debian settings so that more memory can be used for disk read/write caching? I am already using RAID 0 but that's not enough for multiple users, and the disk is almost constantly struggling. Torrents use the disk very heavily and rTorrent doesn't have cache settings.

    Read the article

  • Memcache + FastCGI + PHP + Apache 2.2 on Windows 7 creating problems

    - by Ahmad
    Hi, I am trying to run memcache and FastCGI with Apache 2.2 + PHP on a Windows 7 machine. If I don't use memcache, everything works fine; the moment I disable extension=php_memcache.dll in php.ini, everything returns to normal. Once I start Apache, the Apache logs say:

        [Wed Jan 12 18:19:23 2011] [notice] Apache/2.2.17 (Win32) mod_fcgid/2.3.6 configured -- resuming normal operations
        [Wed Jan 12 18:19:23 2011] [notice] Server built: Oct 18 2010 01:58:12
        [Wed Jan 12 18:19:23 2011] [notice] Parent: Created child process 412
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Child process is running
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Acquired the start mutex.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting 64 worker threads.
        [Wed Jan 12 18:19:23 2011] [notice] Child 412: Starting thread to listen on port 80.

    After accessing the page (the page just has echo phpinfo()), I get this error in error.log:

        [Wed Jan 12 18:20:54 2011] [warn] [client 127.0.0.1] (OS 109)The pipe has been ended. : mod_fcgid: get overlap result error
        [Wed Jan 12 18:20:54 2011] [error] [client 127.0.0.1] Premature end of script headers: index.php

    I have php_memcache.dll in my ext directory and httpd.conf is like this:

        LoadModule fcgid_module modules/mod_fcgid.so
        FcgidInitialEnv PHPRC "c:/php"
        FcgidInitialEnv PATH "c:/php;C:/WINDOWS/system32;C:/WINDOWS;C:/WINDOWS/System32/Wbem;"
        FcgidInitialEnv SystemRoot "C:/Windows"
        FcgidInitialEnv SystemDrive "C:"
        FcgidInitialEnv TEMP "C:/WINDOWS/Temp"
        FcgidInitialEnv TMP "C:/WINDOWS/Temp"
        FcgidInitialEnv windir "C:/WINDOWS"
        FcgidIOTimeout 64
        FcgidConnectTimeout 32
        FcgidMaxRequestsPerProcess 500
        <Files ~ "\.php$">
          AddHandler fcgid-script .php
          FcgidWrapper "c:/php/php-cgi.exe" .php
        </Files>

    So the problem has to be related to memcache, because if I disable it, FastCGI seems to work fine. Any possible reasons for this? The memcache service is running; I can check it through Control Panel > Services.

    Read the article
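
    For the Windows question above, a minimal isolation sketch: running the same test through php-cgi.exe from a command prompt, outside Apache/mod_fcgid, shows whether the crash comes from loading php_memcache.dll itself or only from the FastCGI environment (host and port are assumptions).

        <?php
        // Minimal sketch: run as  c:\php\php-cgi.exe test.php  from a console.
        var_dump(extension_loaded('memcache'));      // did the DLL load at all?

        $m = new Memcache();
        var_dump(@$m->connect('127.0.0.1', 11211));  // can it reach the service?
        var_dump(@$m->getVersion());                 // simple round trip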

  • 4 Magento requests per second = 210 Mbit memcache bandwidth?

    - by Karsten
    After searching Server Fault for similar questions without success, here are my numbers for one Magento instance running on multiple servers: after Varnish, about 4 requests per second hit the webservers. The Magento cache is configured to use one separate memcache server, where I'm measuring about 210 Mbit/s of bandwidth usage. Compared to other projects, Magento and non-Magento, this number seems way off (as in extremely high). I'd like to get some data to compare to, or even better, any idea of what exactly causes this, how to find it, and how to improve the situation.

    Read the article
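
    A minimal sketch for putting a number on the traffic described above (the cache host and the sampling window are assumptions): sample the server's cumulative byte counters twice and compute the rate, which also splits the bandwidth into data sent to clients (cache reads) versus data received from clients (writes). If nearly all of the traffic is server-to-client, a few very large cache entries read on every request are the likely cause.

        <?php
        // Minimal sketch: sample memcached's byte counters over a fixed window.
        $m = new Memcache();
        $m->connect('cache-host.example.com', 11211);

        $a = $m->getStats();
        sleep(10);
        $b = $m->getStats();

        // bytes_written = bytes the server sent to clients (gets),
        // bytes_read    = bytes the server received from clients (sets, keys).
        $outMbit = ($b['bytes_written'] - $a['bytes_written']) * 8 / 10 / 1e6;
        $inMbit  = ($b['bytes_read']    - $a['bytes_read'])    * 8 / 10 / 1e6;
        printf("to clients: %.1f Mbit/s, from clients: %.1f Mbit/s\n", $outMbit, $inMbit);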

  • Unable to load memcache.so extension?

    - by billyduc
    I built PHP from source with this configure command:

        './configure' '--prefix=/usr/local/php-5.2.8' '--with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d' '--with-apxs2=/usr/local/httpd/bin/apxs' '--with-mysql=/usr/local/mysql/' '--with-zlib'

    I installed the PHP memcache extension:

        wget http://pecl.php.net/get/memcache
        tar -zxvf memcache-2.2.5.tgz
        cd memcache-2.2.5
        phpize
        ./configure --enable-memcache
        make
        make install

    I added extension=memcache.so to my /usr/local/lib/php.ini, restarted Apache and ran php -m, but PHP doesn't seem to load the memcache extension. I followed the solution from this site http://www.howtoforge.com/forums/showthread.php?t=26554 and added the full path:

        extension=/usr/local/lib/php/extensions/no-debug-non-zts-20060613/memcache.so

    I restarted Apache again, but it still didn't load the memcache extension! I have googled around but keep finding the same issue. How can I load this extension?

    Read the article
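
    A minimal diagnostic sketch for the case above: the usual culprits are PHP reading a different php.ini (or extension_dir) than the one edited, or a module built by a phpize from another PHP installation. This just prints what the running interpreter actually sees; serve it through Apache as well as the CLI, since the two often read different ini files.

        <?php
        // Minimal sketch: show which ini files and extension directory are in use.
        echo 'loaded ini:      ', php_ini_loaded_file(), PHP_EOL;
        echo 'scanned inis:    ', php_ini_scanned_files(), PHP_EOL;
        echo 'extension_dir:   ', ini_get('extension_dir'), PHP_EOL;
        echo 'memcache loaded: ', extension_loaded('memcache') ? 'yes' : 'no', PHP_EOL;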

  • What causes memcache to delete keys?

    - by Arkaaito
    Our memcache install recently started removing keys, and we're not sure why. Large groups of keys vanish at the same time. Memcache reports that evictions are low to non-existent, and our app has no way to clear memcache (it can only delete specific keys). Even keys of which the app has no knowledge get deleted, so we're pretty convinced they're getting expired. However, our memcache configuration hasn't been touched in some time. Has anyone debugged an issue like this before, and if so, are there any steps you'd recommend we take? How flexible is memcache's expiration policy - is it possible that we're suddenly running into a criterion based on (say) write frequency to a key?

    Read the article
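
    For the key-deletion question above, a minimal sketch (host and port are assumptions) that pulls the counters which usually separate the suspects: a daemon restart or flush_all (uptime drops and everything vanishes), evictions under memory pressure, or per-key expiry times. Note that expiry is set per key by the client when the key is written, not by server configuration, and memcached has no deletion policy based on write frequency.

        <?php
        // Minimal sketch: inspect the server-side counters behind disappearing keys.
        $m = new Memcache();
        $m->connect('127.0.0.1', 11211);

        $stats = $m->getStats();
        printf("uptime:    %d s\n", $stats['uptime']);      // low = recent restart
        printf("evictions: %d\n", $stats['evictions']);     // items dropped for space
        printf("memory:    %d / %d bytes\n", $stats['bytes'], $stats['limit_maxbytes']);

        // Per-slab statistics: pressure in a single slab class can push items out
        // even when overall memory use looks moderate.
        print_r($m->getExtendedStats('slabs'));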

  • Apache Server with memcache, varnish and php slow request times

    - by coolestdude1
    My issue is that these servers are taking rather long per request, about 2 seconds on average just to serve files. When we had just one server doing everything it was noticeably faster, even with the same web app (Drupal 6 and Drupal 7). I want to get this number down to a reasonable level, so I need some help getting to the bottom of why the request times are so slow. This can cause the web app to hang on POST or PUT and generally leads to a bad user experience on my sites. PS: I am more of a server newbie, so this has confounded me for quite some time. The domains: collabornation.net and nptrainingworks.com (they run off the same two webservers using vhost configs). The gear: two Rackspace 4 GB servers running CentOS 6.2 Final. They have a mounted file system (Gluster) that is used to keep files the same on both machines, and they are behind a Rackspace load balancer running round robin. MySQL is accessed via php-pdo and php-mysql; MySQL runs on another instance, which also runs memcache and hosts phpMyAdmin. Apache version 2.2.15-15.el6.centos.1 (httpd.x86_64), Varnish version 3.0.2-1.el5 (varnish.x86_64), PHP version 5.3.14-1.el6.remi (php.x86_64). Configs linked below: Apache conf, vhost conf, Varnish backends, Varnish defaults, Varnish ACL, PHP INI. Again, I need some help; much appreciated!

    Read the article

  • How to optimally configure memcache running on 16 cores 144G ram server?

    - by Ivko Maksimovic
    Memcache is the only important app running on the server.
    The server has 16 cores and 144G RAM; memcache is given 135G and runs with 32 threads.
    Gigabit network; a test shows at least 300 Mbit/s available on the network port.
    About 600 connections and 3000 requests per second.
    Say that memcache (memory) usage is at 50% - it's definitely not full.
    As we increase the number of requests to the server, requests slow down (from 8ms to 100ms per request) but server load remains 0.00. We suspect this can be solved by adjusting the configuration, but we don't understand many of the configuration parameters (besides, maybe, the number of threads). Any ideas?

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM. How to optimize to have enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy in front of Apache, plus Memcache and MySQL, and 4G of RAM. Lately my visitor numbers have not increased, but the server always becomes overloaded at certain times of day (9 AM - 3 PM). RAM in use climbs second by second until it is full, and at that moment the server gets overloaded; I have to kill the Apache and MySQL services and reboot to free memory, and then the cycle repeats. Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config; can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Read the article

  • How can I set up an nginx cache strategy that first tries Amazon S3, then memcache, and falls back on a miss?

    - by Tim
    I have a large site with a lot of pages that almost never change. Right now I am using two memcache servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that hardly ever change, I want to upload them to Amazon S3 and shut down one memcache server. Here is my conf:

        location ~ /longterm/(.*){
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached
        }
        location @fallback_memcache {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 @fallback;
        }
        location @fallback {
            try_files $uri $uri/index.html
        }

    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an S3 miss and a memcache hit it works, but on an S3 miss followed by a memcache miss, resolving the last fallback fails. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?

    Read the article

  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load all of them. I heard that splitting the images across different sub-domains would help parallelize the downloads, but what is the actual implementation of that? While it is easy to split across sub-domains like static, image, etc., should I make, say, 10 sub-domains (image1, image2, ...) to load 100 images? Or is there some cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if it is possible.) I would be grateful for any further comments. Thanks a lot!

    Read the article
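
    A minimal sketch of the domain-sharding idea asked about above (the hostnames are hypothetical CNAMEs that would all point at the same S3 bucket): hash each image path to one of a small, fixed set of hostnames, so browsers open parallel connections per hostname while a given image always gets the same URL and stays cacheable. A handful of shards (2-4) is usually enough; 10 sub-domains for 100 images costs more DNS lookups than it saves.

        <?php
        // Minimal sketch: deterministic domain sharding for image URLs.
        function shardedImageUrl($path, $shards = 4)
        {
            $n = abs(crc32($path)) % $shards;   // same path -> same hostname, always
            return sprintf('http://img%d.example.com/%s', $n + 1, ltrim($path, '/'));
        }

        echo shardedImageUrl('gallery/photo-001.jpg'), PHP_EOL;  // e.g. img3.example.com/...
        echo shardedImageUrl('gallery/photo-002.jpg'), PHP_EOL;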

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulties getting memcached and cache-money set up, I thought I had it working: it cached the one query on my login page fine. Then I log in, go to my message index page, and get this error:

        indices delegated to @cache_config.indices, but @cache_config is nil: Slug(id: integer, name: string, sluggable_id: integer, sequence: integer, sluggable_type: string, scope: string, created_at: datetime)

    Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss as to where to even begin. Any suggestions?

    Read the article

  • Memcache localhost connection oddity

    - by MatW
    When I try to connect to memcache using this code:

        $memcache = new Memcache;
        $memcache->connect('localhost', 11211) or die ("Could not connect");

    the call dies with the "Could not connect" error, but if I use localhost's IP:

        $memcache = new Memcache;
        $memcache->connect('127.0.0.1', 11211) or die ("Could not connect");

    it works! So what's my problem? Well, this new computer is the only development environment I've set up that's been sensitive to that difference. I'm not about to go changing the settings in any code for what seems to be a computer-specific issue, but I can't figure out what could be causing this behaviour... Any ideas? I'm running XP, memcached 1.2.4, and WampServer 2. I've checked the hosts file and it does have an entry for localhost.

    Read the article
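
    A minimal diagnostic sketch for the question above: a common cause is the resolver (or hosts file) mapping localhost to an address the daemon is not listening on, so checking what the name actually resolves to and doing a raw TCP connect outside the memcache extension narrows it down.

        <?php
        // Minimal sketch: compare name resolution and raw TCP reachability.
        echo "gethostbyname('localhost') => ", gethostbyname('localhost'), PHP_EOL;

        foreach (array('localhost', '127.0.0.1') as $host) {
            $fp = @fsockopen($host, 11211, $errno, $errstr, 1);
            printf("%s:11211 -> %s\n", $host, $fp ? 'connected' : "failed ($errno $errstr)");
            if ($fp) {
                fclose($fp);
            }
        }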

  • Memcache failover and consistent hashing

    - by Industrial
    Hi everyone, I am trying to work out a good way to handle offline/down memcached servers in my current web application, which is built with PHP. I just found this link, which shows an approach that I think does what I want: http://cmunezero.com/2008/08/11/consistent-memcache-hashing-and-failover-with-php/ Anyhow, I get confused when I start working with it and reading the PHP documentation about failover with memcache. Why are offline memcache servers added to the $realInstance server pool together with the online servers? Reading the memcache documentation confuses me even more: http://www.php.net/manual/en/memcache.addserver.php

        status: Controls if the server should be flagged as online. Setting this parameter to FALSE and retry_interval to -1 allows a failed server to be kept in the pool so as not to affect the key distribution algorithm. Requests for this server will then failover or fail immediately depending on the memcache.allow_failover setting. Default to TRUE, meaning the server should be considered online.

    Thanks,

    Read the article
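
    A minimal sketch of the pattern asked about above (server addresses are placeholders): every configured server is added to the pool, with a known-dead one flagged offline, so the key distribution algorithm sees the same server list on every client and only the keys that hashed to the dead box fail over. That is what the quoted manual entry for status and retry_interval is describing.

        <?php
        // Minimal sketch: consistent hashing plus failover with the memcache extension.
        ini_set('memcache.hash_strategy', 'consistent');
        ini_set('memcache.allow_failover', '1');

        $servers = array(
            array('host' => '10.0.0.1', 'port' => 11211, 'online' => true),
            array('host' => '10.0.0.2', 'port' => 11211, 'online' => false), // known dead
            array('host' => '10.0.0.3', 'port' => 11211, 'online' => true),
        );

        $pool = new Memcache();
        foreach ($servers as $s) {
            // addServer(host, port, persistent, weight, timeout, retry_interval, status)
            // status=false with retry_interval=-1 keeps a dead server in the pool
            // without sending it live traffic, so key distribution is unchanged.
            $pool->addServer($s['host'], $s['port'], true, 1, 1,
                             $s['online'] ? 15 : -1, $s['online']);
        }

        $pool->set('some:key', 'value', 0, 300);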

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) a poor fit. I'm wondering whether there is a product out there that matches the following characteristics:

        all data put into it is persistent
        all reads are delivered out of memory
        data is universally available
        data lives where it is most needed
        data is versioned (nice to have)
        updates are transactional (I'd like ACID characteristics)
        data is potentially replicated, but always in sync
        works on Windows
        is based on or has bindings for .NET
        is really fast
        is really robust
        is redundant
        is scalable

    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • Should I convert different API calls to a single format before caching?

    - by bluedaniel
    The problem is that I am recreating the social media widgets/icons on http://about.me/bluedaniel (that's me). There can be up to 6 or 7 different API calls on the page, and obviously I am caching them, at the moment with Memcached. The question is: as the responses arrive in various formats and sizes (fb-json, linkedin-xml, wordpress-rss, etc.), should I uniformly format/convert them before storing them in the cache? At present I have recreated the HTML widget and then stored that, but I worry about saving huge blocks of HTML in the cache, as it doesn't seem that smart.

    Read the article
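
    A minimal sketch of the normalisation approach being weighed above (the field names, cache key and TTL are assumptions): convert each feed into one small, uniform array before it goes into the cache, and render the HTML widget from that at request time, so the cache holds compact data instead of large HTML blocks.

        <?php
        // Minimal sketch: normalise heterogeneous API responses before caching.
        // Field names below are illustrative, not the providers' real schemas.
        function normaliseFacebook($json)
        {
            $d = json_decode($json, true);
            return array('source' => 'facebook', 'title' => $d['name'], 'url' => $d['link']);
        }

        function normaliseWordpress($rss)
        {
            $item = simplexml_load_string($rss)->channel->item[0];
            return array('source' => 'wordpress',
                         'title'  => (string) $item->title,
                         'url'    => (string) $item->link);
        }

        $cache = new Memcache();
        $cache->connect('127.0.0.1', 11211);

        // The extension serialises arrays automatically; cache for 10 minutes.
        // $cache->set('widget:facebook', normaliseFacebook($facebookJson), 0, 600);
        $widget = $cache->get('widget:facebook');   // false on a miss -> re-fetch the API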

  • Help regarding NoSQL databases like Hadoop, HBase, etc.

    - by user560370
    I am new to distributed NoSQL databases like Hadoop, Cassandra, etc. I have a few questions for which I seek expert advice:

        1. Can you list the problems/challenges one will generally face when shifting from a conventional database like MySQL to these large cluster-based databases?
        2. What are the difficulties, if any, when one needs to adapt to a newer version of these open source projects?
        3. Can you list the things that are generally stored/kept in memcached for fast rendering of a page?
        4. How can I understand the source code of open-source projects so that I can build on it and maybe give back to the community?

    The above questions may sound idiotic and basic, but please treat this as a request for experts to answer them in detail and to the best of their abilities.

    Read the article

  • Caching Models in rails

    - by jules
    I have a Rails application with a model that is a kind of repository. The records stored in the DB for that model are (almost) never changed, but are read all the time. Also, there are not a lot of them. I would like to store these records in a cache, in a generic way. I would like to do something like acts_as_cached, but here are the issues I have:

        I cannot find decent documentation for acts_as_cached (nor can I find its repository).
        I don't want to use memcached, but something simpler (a static variable, or something like that).

    Do you have any idea what gems I could use to do that? Thanks

    Read the article

  • An efficient way to store view counts for objects?

    - by Nick Brooks
    I maintain an application where users are able to store images and then share them. The system is powered by MongoDB at the back end. Most of the image depiction pages are cached as flat HTML files, but I can run some code just before loading the file. I've decided to implement a view count for the system, and I am wondering what the best place to store it is. It should be like Memcached, but it should persist the view counts every hour or so, so that even if our server has to be restarted we won't lose them. What is the best solution for that (preferably with a PHP extension as a client)?

    Read the article
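
    A minimal sketch of one common answer to the question above (key names, the hourly job and the Mongo collection are assumptions): count views with memcached's atomic increment at request time, and have an hourly cron job move the accumulated deltas into MongoDB, so a restart loses at most the last hour of counts.

        <?php
        // Minimal sketch: memcached as a fast counter, MongoDB as the durable store.
        $cache = new Memcache();
        $cache->connect('127.0.0.1', 11211);

        function countView(Memcache $cache, $imageId)
        {
            $key = "views:$imageId";
            // increment() fails if the key does not exist yet, so seed it first;
            // add() is atomic, so concurrent first views do not race.
            if ($cache->increment($key, 1) === false) {
                $cache->add($key, 0, 0, 0);
                $cache->increment($key, 1);
            }
        }

        // Hourly cron job (legacy Mongo PHP driver assumed): flush deltas to MongoDB.
        function flushViews(Memcache $cache, MongoCollection $images, array $imageIds)
        {
            foreach ($imageIds as $id) {
                $delta = (int) $cache->get("views:$id");
                if ($delta > 0) {
                    $images->update(array('_id' => $id),
                                    array('$inc' => array('views' => $delta)));
                    // Subtract what was persisted; views counted meanwhile are kept.
                    $cache->decrement("views:$id", $delta);
                }
            }
        }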

  • Minimizing calls to database in rails

    - by ming yeow
    Hi guys, I am familiar with memcached and eager loading, but neither seems to solve the problem I am facing. My main performance lag comes from hundreds of data retrieval calls to the database. The tricky thing is that I do not know which set of users I need to retrieve until after several steps of computation. I can refactor my code, but I was wondering how you experts handle this situation? I think it should be a fairly common one:

        def newsfeed
          - find out which users I need
          - retrieve those users via DB
          - find out which events happened for these users
          - for each of those events
            - retrieve new set of users
            - find out which groups are relevant
            - for each of those groups
              - retrieve new set of users
          - etc, etc
        end

    Read the article

  • Blank graph for some munin plugins

    - by jack
    I have a munin-master and munin-node installed on the same server (Ubuntu 9.10 server). Most pre-installed plugins work well, but the following plugins show blank graphs: Memcached bytes used, Memcached connections, Memcached cache hits and misses, Memcached cached items, Memcached requests, Memcached network traffic, and MySQL Queries Cache Size. I ran the following 3 scripts in a terminal and the results were OK:

        /etc/munin/plugins/memcached_bytes
        /etc/munin/plugins/memcached_counters
        /etc/munin/plugins/memcached_rates

    But when I tried the commands below after "telnet localhost 4949":

        fetch memcached_bytes # Unknown service
        fetch memcached_bytes_ # timeout pid 28009 - killing...done

    Does anyone know the reason?

    Read the article
