Search Results

Search found 13956 results on 559 pages for 'python memcached'.


  • Memcache won't flush or clear memory

    - by pedalpete
    I've been trying to clear my memcache, as I'm noticing its storage taking up almost 30% of server memory when using ps -aux. So I ran the following PHP code:

        $memcache = new Memcache;
        $memcache->connect("localhost", 11211);
        $memcache->flush();
        print_r($memcache->getStats());

    This results in the output of:

        (
            [pid] => 4936
            [uptime] => 27318915
            [time] => 1255318611
            [version] => 1.2.2
            [pointer_size] => 64
            [rusage_user] => 9.659531
            [rusage_system] => 49.770433
            [curr_items] => 57864
            [total_items] => 128246
            [bytes] => 1931734247
            [curr_connections] => 1
            [total_connections] => 128488
            [connection_structures] => 17
            [cmd_get] => 170288
            [cmd_set] => 128246
            [get_hits] => 45464
            [get_misses] => 124824
            [evictions] => 1009
            [bytes_read] => 5607431213
            [bytes_written] => 1806543589
            [limit_maxbytes] => 2147483648
            [threads] => 1
        )

    This should be fairly basic, but clearly I'm missing something.

    Read the article

  • How to install MySQLdb package? (ImportError: No module named setuptools)

    - by Verrtex
    Hi all, I am trying to install the MySQLdb package. I found the source code here. I did the following:

        gunzip MySQL-python-1.2.3c1.tar.gz
        tar xvf MySQL-python-1.2.3c1.tar
        cd MySQL-python-1.2.3c1
        python setup.py build

    As a result I got the following:

        Traceback (most recent call last):
          File "setup.py", line 5, in ?
            from setuptools import setup, Extension
        ImportError: No module named setuptools

    Does anybody know how to solve this problem? By the way, if I get past that step, I will then need to do the following:

        sudo python setup.py install

    And I have no system-administrator rights. Do I still have a chance to install MySQLdb? Thank you.
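
    The traceback points at the usual setuptools import at the top of setup.py. A common workaround is sketched below; whether MySQL-python's own setup.py works with plain distutils is an assumption, not something the question confirms:

        # Hypothetical fallback at the top of a setup.py: use setuptools when it
        # is available, otherwise fall back to the distutils equivalents.
        try:
            from setuptools import setup, Extension
        except ImportError:
            from distutils.core import setup, Extension

    For the no-root part of the question, distutils also accepts an install prefix under the user's home directory (e.g. python setup.py install --home=<dir>), with PYTHONPATH then pointing at the resulting lib directory.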

    Read the article

  • Is ruby ||= intelligent?

    - by brad
    I have a question regarding the ||= statement in Ruby, and it's of particular interest to me because I'm using it to write to memcache. What I'm wondering is: does ||= check the receiver first to see if it's set before calling the setter, or is it literally an alias for

        x = x || y

    This wouldn't really matter in the case of a normal variable, but using something like

        CACHE[:some_key] ||= "Some String"

    could possibly do a memcache write, which is more expensive than a simple variable set. Oddly enough, I couldn't find anything about ||= in the Ruby API docs, so I haven't been able to answer this myself. Of course I know that

        CACHE[:some_key] = "Some String" if CACHE[:some_key].nil?

    would achieve this; I'm just looking for the most terse syntax.

    Read the article

  • Denormalization of large text?

    - by tesmar
    If I have large articles that need to be stored in a database, each associated with many tables, would a NoSQL option help? Should I copy the 1000-character articles over multiple "buckets", duplicating them each time they are related to a bucket, or should I use a normalized MySQL DB with lots of Memcache?

    Read the article

  • How to calculate real-time stats?

    - by Diego Jancic
    I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate some stats like "log-ins in the past hour". The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html The simplest approach would be a select like this:

        select count(distinct user_id) from logs where date>='20120601 1200' and date <='20120601 1300'

    (of course other conditions could apply for the stats, like log-ins per country). This would be really slow, especially with millions (or even thousands) of rows, and I want to run it every time a page is displayed. How would you summarize the data? What should go into the (mem)cache? EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would only tell me the total number of logins, not the "logins in the last hour". Hope it's clearer now.
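
    As an illustration of the "keep the cache up to date at login time" idea, here is a minimal sketch using the python-memcached client; the key names and the add()-based dedupe are assumptions, not an established schema:

        # Sketch: maintain a "distinct logins in the current hour" counter at
        # login time instead of scanning the logs table on every page view.
        import time
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def record_login(user_id):
            hour = time.strftime("%Y%m%d%H")              # e.g. "2012060112"
            seen_key = "seen:%s:%s" % (hour, user_id)     # one key per user per hour
            count_key = "logins:%s" % hour                # distinct-login counter for the hour
            # add() succeeds only if the key is absent, so each user is
            # counted at most once per hour
            if mc.add(seen_key, 1, time=2 * 3600):
                if mc.incr(count_key) is None:            # counter does not exist yet
                    mc.add(count_key, 1, time=2 * 3600)

        def logins_this_hour():
            return int(mc.get("logins:%s" % time.strftime("%Y%m%d%H")) or 0)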

    Read the article

  • Memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

        class myCache
        {
            private static $memcache = null;
            private static $initialized = false;

            public static function init()
            {
                if (self::$initialized) return;
                self::$memcache = new Memcache();
                if (self::configure()) //connects to daemon
                {
                    self::store('foo', 'bar');
                }
                else
                    throw ConnectionError('I barfed');
            }

            public static function store($key, $data, $flag=MEMCACHE_COMPRESSED, $timeout=86400)
            {
                if (self::$memcache->get($key) !== false)
                    return self::$memcache->replace($key, $data, $flag, $timeout);
                return self::$memcache->set($key, $data, $flag, $timeout);
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

        //in my index.php file, I use the class like this
        require_once('myCache.php');
        myCache::init();
        echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the start-up data (for our purposes, the key 'foo' with value 'bar') in memcache itself. Storing the status flag in memcache solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data from memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.

    Read the article

  • How to get repository for core-plot

    - by Omar
    I am not able to get the repository for core-plot. What I am doing is typing this in the terminal:

        hg clone https://core-plot.googlecode.com/hg/ core-plot

    and this is what I get:

        Traceback (most recent call last):
          File "/usr/local/bin/hg", line 25, in <module>
            mercurial.util.set_binary(fp)
          File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 75, in __getattribute__
            self._load()
          File "/Library/Python/2.5/site-packages/mercurial/demandimport.py", line 47, in _load
            mod = _origimport(head, globals, locals)
          File "/Library/Python/2.5/site-packages/mercurial/util.py", line 93, in <module>
            _encoding = locale.getlocale()[1]
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 460, in getlocale
            return _parse_localename(localename)
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/locale.py", line 373, in _parse_localename
            raise ValueError, 'unknown locale: %s' % localename
        ValueError: unknown locale: UTF-8

    I can't seem to get it to clone. Please give me guidance on how to get the repository.

    Read the article

  • Mongrel_rails can't find memcache-client

    - by tonisep
    We started to use memcache-client in our Rails app and it works just fine with "script/server", but "mongrel_rails start" fails with an error. In environment.rb we declare "memcache-client" with version "1.8.1". gem list shows that the gem is installed: memcache-client (1.8.1). If run with "script/server" everything works, but with "mongrel_rails start" it fails with the error: no such file to load -- memcache-client. Any advice on what could be wrong here? Is there something different in the way mongrel_rails loads gems compared to script/server, or is my setup just broken?

    Read the article

  • Observer not clearing cache in Rails 2.3.2 - please help.

    - by Jason
    Hi, we are using Rails 2.3.2, Ruby 1.8 and memcache. In my Posts controller I have:

        cache_sweeper Company::Caching::Sweepers::PostSweeper, :only => [:save_post]

    I have created the following module:

        module Company
          module Caching
            module Sweepers
              class PostSweeper < ActionController::Caching::Sweeper
                observe Post

                def after_save(post)
                  Rails.cache.delete("post_" + post.permalink)
                end
              end
            end
          end
        end

    but when the save_post method is invoked, the cache is never deleted. Just hoping someone can see what I am doing wrong here. Thanks.

    Read the article

  • AppEngine: how do cursors work?

    - by victor a.k.a. python for ever
    Hello, I have the following code:

        def get(self):
            date = datetime.date.today()
            loc_query = Location.all()
            last_cursor = memcache.get('location_cursor')
            if last_cursor:
                loc_query.with_cursor(last_cursor)
            loc_result = loc_query.fetch(1)
            for loc in loc_result:
                self.record(loc, date)
                taskqueue.add(
                    url='/task/query/simplegeo',
                    params={'date': date, 'locid': loc.key().id()}
                )
            if len(loc_result):
                memcache.add('location_cursor', loc_query.cursor())
                taskqueue.add(url='/task/count/', method='GET')
            else:
                memcache.add('location_cursor', None)

    I don't know what I'm doing wrong, but I keep getting the same cursor, which is not the effect I wanted. Why isn't the cursor moving?
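
    For reference, a minimal sketch of the store-and-resume cursor pattern this code is aiming for (function and model names are illustrative). One relevant App Engine detail: memcache.add only writes a key that does not already exist, while memcache.set overwrites it.

        # Illustrative sketch: resume a datastore query from a cursor kept in
        # memcache, then store the advanced cursor back with set() so the next
        # request continues where this one stopped.
        from google.appengine.api import memcache

        def process_next_batch(batch_size=1):
            query = Location.all()
            cursor = memcache.get('location_cursor')
            if cursor:
                query.with_cursor(cursor)
            batch = query.fetch(batch_size)
            for entity in batch:
                handle(entity)  # placeholder for the per-entity work
            if batch:
                memcache.set('location_cursor', query.cursor())  # overwrite the stored cursor
            return batch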

    Read the article

  • Windows equivalent to this Makefile

    - by Sridhar Ratnakumar
    The advantage of writing a Makefile is that "make" is generally assumed to be present on the various Unices (Linux and Mac primarily). Now I have the following Makefile:

        PYTHON := python

        all: e installdeps

        e:
        	virtualenv --distribute --python=${PYTHON} e

        installdeps:
        	e/bin/python setup.py develop

        clean:
        	rm -rf e

    As you can see, this Makefile uses simple targets and variable substitution. Can this be achieved on Windows? By that I mean without having to install external tools (like cygwin make); perhaps a make.cmd? Typing "make installdeps", for instance, should work both on Unix and Windows.
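
    One way to read the "no external tools" constraint is to drive the same targets from a small Python script instead of make. A sketch only: the file and target names mirror the Makefile above, and it assumes virtualenv is on PATH.

        # make.py - hypothetical cross-platform stand-in for the Makefile targets;
        # run as "python make.py installdeps" (defaults to "all").
        import os
        import shutil
        import subprocess
        import sys

        PYTHON = sys.executable  # plays the role of the PYTHON variable

        def e():
            if not os.path.isdir("e"):  # only build the virtualenv once
                subprocess.check_call(["virtualenv", "--distribute", "--python=" + PYTHON, "e"])

        def installdeps():
            e()
            env_python = os.path.join("e", "Scripts" if os.name == "nt" else "bin", "python")
            subprocess.check_call([env_python, "setup.py", "develop"])

        def clean():
            shutil.rmtree("e", ignore_errors=True)

        def all():
            e()
            installdeps()

        if __name__ == "__main__":
            targets = {"all": all, "e": e, "installdeps": installdeps, "clean": clean}
            for name in (sys.argv[1:] or ["all"]):
                targets[name]()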

    Read the article

  • Committed JDO writes do not apply on local GAE HRD, or possibly reused transaction

    - by eeeeaaii
    I'm using JDO 2.3 on App Engine. I was using the Master/Slave datastore for local testing and recently switched over to the HRD datastore for local testing, and parts of my app are breaking (which is to be expected). One part that's breaking is where it sends a lot of writes quickly: because of the 1-second limit, it fails with a concurrent modification exception. Okay, that's also to be expected, so I have the browser retry the writes later when they fail (maybe not the best hack, but I'm just trying to get it working quickly). But a weird thing is happening. Some of the writes that should be succeeding (the ones that DON'T get the concurrent modification exception) are also failing, even though the commit phase completes and the request returns my success code. I can see from the log that the retried requests are working okay, but these other requests that seem to have committed on the first try are, I guess, never "applied". From what I read about the Apply phase, writing again to that same entity should force the apply... but it doesn't. Code follows. Some things to note:

    - I am attempting to use automatic JDO caching, where JDO uses memcache under the covers. This doesn't actually work unless you wrap everything in a transaction.
    - All the requests do is read a string out of an entity, modify part of the string, and save that string back to the entity. If these requests weren't in transactions, you'd of course have the "dirty read" problem. But with transactions, isolation is supposed to be at the "serializable" level, so I don't see what's happening here.
    - The entity being modified is a root entity (not in a group).
    - I have cross-group transactions enabled.

    Another weird thing is happening. If the concurrent modification thing happens, and I subsequently edit more than 5 more entities (the maximum for cross-group transactions), then nothing happens right away, but when I stop and restart the server I get "IllegalArgumentException: operating on too many entity groups in a single transaction". Could it be that the PMF is returning the same PersistenceManager every time, or that the PM is reusing the same transaction every time? I don't see how I could get the above error otherwise; the code inside the transaction just edits one root entity, and I can't think of any other way GAE would give me the "too many entity groups" error.
    The relevant code (this is a simplified version):

        PersistenceManager pm = PMF.getManager();
        Transaction tx = pm.currentTransaction();
        String responsetext = "";
        try {
            tx.begin();
            // I have extra calls to "makePersistent" because I found that relying
            // on pm.close didn't always write the objects to cache, maybe that
            // was only a DataNucleus 1.x issue though
            Key userkey = obtainUserKeyFromCookie();
            User u = pm.getObjectById(User.class, userkey);
            pm.makePersistent(u); // to make sure it gets cached for next time
            Key mapkey = obtainMapKeyFromQueryString();
            // this is NOT a java.util.Map, just FYI
            Map currentmap = pm.getObjectById(Map.class, mapkey);
            Text mapData = currentmap.getMapData(); // mapData is JSON stored in the entity
            Text newMapData = parseModifyAndReturn(mapData); // transform the map
            currentmap.setMapData(newMapData); // mutate the Map object
            pm.makePersistent(currentmap); // make sure to persist so there is a cache hit
            tx.commit();
            responsetext = "OK";
        } catch (JDOCanRetryException jdoe) {
            // log jdoe
            responsetext = "RETRY";
        } catch (Exception e) {
            // log e
            responsetext = "ERROR";
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
        resp.getWriter().println(responsetext);

    EDIT: I have verified that it fails after exactly five transactions. Here's what I do: I create a Foo (root entity), do a bunch of concurrent operations on that Foo, and some fail and get retried, and some commit but don't apply (as described above). Then I start creating more Foos and do a few operations on those new Foos. If I only create four Foos, stopping and restarting App Engine does NOT give me the IllegalArgumentException. However, if I create five Foos (the limit for cross-group transactions), then when I stop and restart App Engine I do get the exception. So it seems that these new Foos are somehow counting toward the limit of 5 entities per transaction, even though they are supposed to be handled by separate transactions. It's as if a transaction is still open and is being reused by the servlet when it handles the requests for the 2nd through 5th Foos.

    EDIT2: It looks like the IllegalArgument thing is independent of the other bug. In other words, it always happens when I create five Foos, even if I don't get the concurrent modification exception. I don't know whether it's a symptom of the same problem or unrelated.

    EDIT3: I found out what was causing the (unrelated) IllegalArgumentException; it was a dumb mistake on my part. The other issue is still happening.

    EDIT4: Added pseudocode for the datastore access.

    EDIT5: I am pretty sure I know why this is happening, but I will still award the bounty to anyone who can confirm it. Basically, I think the problem is that transactions are not really implemented in the local version of the datastore. References:

        https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/gVMS1dFSpcU
        https://groups.google.com/forum/?fromgroups=#!topic/google-appengine-java/deGasFdIO-M
        https://groups.google.com/forum/?hl=en&fromgroups=#!msg/google-appengine-java/4YuNb6TVD6I/gSttMmHYwo0J

    Because transactions are not implemented, rollback is essentially a no-op. Therefore I get a dirty read when two transactions try to modify the record at the same time. In other words, A reads the data and B reads the data at the same time; A attempts to modify the data, and B attempts to modify a different part of the data; A writes to the datastore, then B writes, obliterating A's changes.
Then B is "rolled back" by app engine, but since rollbacks are a no-op when running on the local datastore, B's changes stay, and A's do not. Meanwhile, since B is the thread that threw the exception, the client retries B, but does not retry A (since A was supposedly the transaction that succeeded).

    Read the article

  • Memcache vs MySQL in memory

    - by TimK
    I have a database that won't grow much in size. Its current size is about 1 GB. Achieving the fastest performance is desired. Question: when should I use Memcache versus simply using MySQL InnoDB's ability to keep all my content in RAM (innodb_buffer_pool_size)?

    Read the article

  • How does py2exe actually -and simply explained- work? :)

    - by sandra
    Hi folks, I have a C++ app that calls another Python one (bundled into an exe with py2exe), so I have two apps. I was wondering: what if my C++ app did what py2exe does, i.e. embedded the Python app in the C++ one? This way I won't depend on py2exe and its configuration nightmares (yes, it has some). Hence my questions: How does py2exe work (so I can do its job with my C++ app)? What about just embedding the whole Python app in the C++ one? I read the Python docs about embedding and did an example (a very simple one that just does PyRun_SimpleString), but what about a whole Python app with tons of modules? (zipimport maybe?) I'd love to hear how you'd do that. Thanks a lot! :)
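
    On the zipimport idea: a minimal sketch of what the Python side would have to do once the whole app is bundled into a single archive (archive and module names are made up):

        # Illustrative only: with the app's modules bundled into app.zip, the
        # embedded interpreter just needs the archive on sys.path before the
        # first import; zipimport then loads modules out of the archive.
        import sys

        sys.path.insert(0, "app.zip")
        import mainmodule           # loaded from inside app.zip
        mainmodule.main()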

    Read the article

  • How to localize static content in database with Django

    - by man with python
    My app has tables for languages and countries (actually django-countries at the moment, but open for suggestions). The tables are populated when I initialize the database and remain static after that. What would be the ideal localization mechanism for the contents of these tables, so that I can show the country and language names to users in their chosen site language? I'm aware of projects like django-multilingual and transdb, but IMO they are more suitable for dynamic content, i.e. stuff that's supposed to be modified. Please enlighten me!
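
    One pattern that fits static code tables like these is to keep only the ISO codes in the database and translate the display names through Django's normal i18n machinery at render time. A sketch with made-up names, not a statement of how django-countries itself does it:

        # Hypothetical sketch: display names live in the translation catalog,
        # not in extra database columns, so the static tables never need
        # per-language rows.
        from django.utils.translation import ugettext_lazy as _

        COUNTRY_NAMES = {
            'FI': _('Finland'),
            'SE': _('Sweden'),
        }

        def country_display(code):
            # falls back to the raw code if a name is missing from the mapping
            return COUNTRY_NAMES.get(code, code)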

    Read the article

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM: how to optimize so there is enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy in front of Apache, plus Memcache and MySQL, and 4G of RAM. Lately the number of visitors to my site hasn't increased, but the server always gets overloaded during certain hours (9AM - 3PM). RAM in use grows second by second until it is full, and at that moment the server becomes overloaded. I have to kill all the Apache and MySQL services and reboot to get free memory, and then it fills up again - a terrible circle. Here is my RAM in use at the moment: 160 (nginx), 220 (apache), 512 (memcache), 924 (mysql). Here are the process counts: 4 (nginx), 14 (apache), 5 (memcache), 20 (mysql). And here is my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    Read the article

  • Database triggers / referential integrity and in-memory caching

    - by Ran Biron
    Do you see database triggers / referential integrity rules being used in a way that changes actual data in the database (changing row w in table x causes a change in row y in table z)? If yes, how does this tie in with the increasing popularity of in-memory caching (memcache and friends)? After all, these actions occur inside the database, but the caching system must be aware of them in order to reflect the correct state (or at least invalidate the possibly changed state). I find it hard to believe that callbacks are implemented for such cases. Does anyone have real-world experience with such a setup, or real-world experience with considering such a setup and abandoning it? (Which way did you go? If caching, how do you enforce integrity?)

    Read the article

  • Why is Routes.rb not loading the IPs from cache?

    - by Christian Fazzini
    I am testing this locally. My IP is 127.0.0.1 and the ip_permissions table is empty. When I browse the site, everything works as expected. Now I want to simulate browsing the site with a banned IP, so I add the IP to the ip_permissions table via:

        IpPermission.create!(:ip => '127.0.0.1', :note => 'foobar', :category => 'blacklist')

    In the Rails console I clear the cache via Rails.cache.clear. I browse the site and I don't get sent to pages#blacklist. If I restart the server and browse the site, then I do get sent to pages#blacklist. Why do I need to restart the server every time the ip_permissions table is updated? Shouldn't it fetch it based on the cache? The routes look like:

        class BlacklistConstraint
          def initialize
            @blacklist = IpPermission.blacklist
          end

          def matches?(request)
            @blacklist.map { |b| b.ip }.include? request.remote_ip
          end
        end

        Foobar::Application.routes.draw do
          match '/(*path)' => 'pages#blacklist', :constraints => BlacklistConstraint.new
          ....
        end

    My model looks like:

        class IpPermission < ActiveRecord::Base
          validates_presence_of :ip, :note, :category
          validates_uniqueness_of :ip, :scope => [:category]
          validates :category, :inclusion => { :in => ['whitelist', 'blacklist'] }

          def self.whitelist
            Rails.cache.fetch('whitelist', :expires_in => 1.month) { self.where(:category => 'whitelist').all }
          end

          def self.blacklist
            Rails.cache.fetch('blacklist', :expires_in => 1.month) { self.where(:category => 'blacklist').all }
          end
        end

    Read the article

  • What will be the setup process for website development?

    - by Vijay Shanker
    Hi, I want to create a simple site for my personal use, built only with Python-based technologies, and I would like an expert opinion on this topic. What should I use as the platform? I did a search of the available options and found Django, Grok, web2py and many more. Which one should a novice user choose? If I choose to use only basic Python scripts, what options do I have to work with? This link on the Python site, http://wiki.python.org/moin/WebBrowserProgramming, confused me more instead of satisfying my curiosity about the topic. Please give me some pointers to accurate and easy-to-understand reading material. I already have an idea of how to develop Java-based web applications using either spring-webmvc or struts. Can I relate the Java process to the Python process for web development?
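
    As an illustration of the "only basic Python scripts" option, here is a minimal sketch of a bare WSGI application served with the standard library's wsgiref module (no framework involved; the port is arbitrary):

        # Minimal WSGI "hello world" using only the standard library.
        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'Hello from a plain Python WSGI app\n']

        if __name__ == '__main__':
            make_server('', 8000, app).serve_forever()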

    Read the article

  • Can I cache a ManyToOne hibernate object without it being lazy loaded?

    - by Andrew
        @ManyToOne
        @JoinColumn(name = "play_template_id", table = "team_play_mapping")
        public Play getPlay() {
            return play;
        }

        public void setPlay(Play play) {
            this.play = play;
        }

    By default, this is eager loading. Can I get it so that it will read the play object from a cache without making it lazy loading? Am I correct that eager loading will force it to do a join query and hence no caching?

    Read the article

  • Potential issue when using memcache session save handler in PHP

    - by Sean
    I have two load balanced web servers, and I'm using the memcache session save handler with the save path pointing to two memcache servers. It's configured with session redundancy set to two (the number of memcache servers). So PHP is writing session data to both memcache servers, and when I take one of the servers down, everything seems to work fine since the session data has been written to both memcache servers. The problem seems to happen when I use the app for a while with only the one memcache server up, and then bring the other one back up. My theory is that the memcache server comes back up, and PHP then starts asking it for session data which isn't there since it was written to the other server while this one was down. Is there any merit to this theory? Should PHP be asking both servers for the session data and maybe there's some other problem?

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and I'm after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows that belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see the data of that report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted or the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - I have to define table templates every time I want to store a new set of grouped data.
    - I have to write a program which loops through the correlated data and fills these new tables.
    - It will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It has helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks, Alan
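
    For the "result sets in memcache" option being weighed above, a small sketch using the python-memcached client; the key name, TTL, and builder function are illustrative, not part of the existing system:

        # Cache a report's result set under a descriptive key with an explicit
        # TTL, and fall back to the expensive grouped query on a miss.
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def cached_report(key, build_report, ttl=600):
            data = mc.get(key)
            if data is None:
                data = build_report()        # run the expensive grouped query
                mc.set(key, data, time=ttl)  # explicit control over lifetime
            return data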

    Read the article

  • NoSQL replacement for memcache

    - by Juan Antonio Gomez Moriano
    We have a situation in which the values we store in memcache are bigger than 1MB. It is not possible to make such values smaller, and even if there were a way, we need to persist them to disk. One solution would be to recompile the memcache server to allow, say, 2MB values, but this is neither clean nor a complete solution (again, we need to persist the values). The good news is that we can predict quite accurately how many key/value pairs we are going to have, and we can also predict the total size we will need. A key feature for us is the speed of memcache. So the question is: is there any NoSQL replacement for memcache which will allow us to have values larger than 1MB AND store them on disk without loss of speed? In the past I have used Tokyo Tyrant/Cabinet but it seems to be deprecated now. Any ideas?

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an Ajax tic tac toe game in PHP/MySQL. The premise of the game is to be able to share a URL like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested doing all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from that information. My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes. (Dedicated CPU: 1.2GHz, RAM: 752MB) Each call to reload.php performs 3 select and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the Ajax calls are made? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, 2, 3, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article
