Search Results

Search found 1108 results on 45 pages for 'stats'.


  • Balancing Player vs. Monsters: Level-Up Curves

    - by ashes999
    I've written a fair number of games that have RPG-like "levelling up," where the player gains experience for killing monsters/enemies and eventually reaches a new level, where their stats increase. How do you find a balance between player growth, monster strength, and difficulty? The extreme ends of this spectrum are: the player levels up really fast and blows away monsters without much effort; or monsters are incredibly strong and, even at low levels, are very difficult to beat. I've also tried a strange situation of making enemies relative to players, i.e. an enemy will always be at 50% or 100% or 150% of player stats (thus requiring the player to use other techniques instead of brute strength to succeed). But where's the balance, and how do you find it? Edit: For example, I am expecting to hear things like: balance high instead of balance low (200 HP and 20 str is easier to balance than 20 HP and 2 str); look at the easiest vs. hardest monsters, and see what you have in terms of a range.
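
    For illustration, here is a minimal sketch (in Python) of the "enemy stats as a fixed percentage of player stats" idea mentioned above. The stat names and numbers are made up for the example, not taken from any particular game:

        def scaled_enemy_stats(player_stats, ratio):
            """Derive an enemy's stats as a fraction/multiple of the player's,
            e.g. ratio = 0.5, 1.0 or 1.5 as described in the question."""
            return {name: round(value * ratio) for name, value in player_stats.items()}

        # hypothetical example
        player = {"hp": 200, "strength": 20, "defense": 12}
        elite_enemy = scaled_enemy_stats(player, 1.5)   # {'hp': 300, 'strength': 30, 'defense': 18}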

    Read the article

  • Are there statistics or time series of open bugs in Ubuntu?

    - by aroque
    I would like to know how the number of bugs in Ubuntu (open, closed, critical, etc.) has evolved over time. It's a sort of scientific curiosity I have, but it would also give me a feel for how the community has changed over time, how it has coped with the challenges (I think of Unity in particular) and what its status is now. Has anyone collected these data over the years? If yes, are they publicly available? I know this information can be gathered from Launchpad itself, and I actually found a website that had data from mid 2008 to early 2009. I found Ubuntu live stats, which shows live messages related to Ubuntu but does not aggregate bug statistics. Finally, there are some stats in the Ubuntu Weekly Newsletter, but they only show diffs of bugs closed during the last week.

    Read the article

  • Is there a (free) reliable place to get statistics from sites, more reliable than Alexa, Quantcast, Compete?

    - by S.gfx
    I mean, it seems there's no way. I am just asking in case someone knows of a recent new site that is more accurate. I am aware of the inaccuracies of Alexa, Compete and Quantcast, and/or the limited range of sites they get their stats from. I also know about websitegrader, which is perhaps a little more accurate (although I'm not sure that's the data I am after), and I have read that the SEOmoz tools are reliable. I am still, though, looking for a free solution, a 'reliable' Alexa: not a place that depends on a toolbar installation, is easy to trick, has stats that are way too far off, or covers only a very limited range of sites. I am almost sure there's nothing new, but I wanted to be sure.

    Read the article

  • Sudden drop of pageviews/visit and increase of bounce rate in Analytics

    - by Tebb
    Google Analytics stats:

        04 June 2012
        Visits: 4,423
        Unique visitors: 3,558
        Pageviews: 77,352
        Pageviews / visit: 17.49
        Visit length: 00:06:26
        Bounce rate: 1.09%

        05 June 2012
        Visits: 4,652
        Unique visitors: 3,825
        Pageviews: 45,087
        Pageviews / visit: 9.69
        Visit length: 00:06:45
        Bounce rate: 19.60%

    From one day to the next the bounce rate went from 1% to 19%, the pageviews dropped by half, and so did the pageviews/visit. The only thing I changed on the site (if I remember correctly) was an advertisement that used a JavaScript. Could this be the reason? And, if it is, how can I know which of the two sets of stats is the real one?

    Read the article

  • 301 redirect from a country specific domain

    - by Raj
    I originally started using a .do domain extension for my site, but later realized that this country specific domain would prevent us from appearing in search results for places outside of the Dominican Republic. We started using a .co domain extension and redirected all requests to the new domain using an HTTP 301. The "Crawl Stats" in Google Webmaster Tools shows me that the .co domain is being crawled, but the "Index Status" shows the number of pages indexed at 0. The "Crawl Stats" for the .do domain says that it's being crawled and the "Index Status" shows a number greater than 0. I also set a "Change Of Address" in Google Webmaster Tools to have the .do domain point to the new .co domain. We're still not appearing in search results at all even for very specific strings where I would expect to find us. Am I doing something wrong?
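
    For what it's worth, a quick way to sanity-check the redirect itself is to request a page on the old domain and inspect the raw response. The snippet below is only a sketch: it assumes the third-party Python "requests" package, and the domain names are placeholders rather than the actual sites:

        import requests

        resp = requests.get("http://www.example.do/some-page", allow_redirects=False)
        print(resp.status_code)               # expected: 301
        print(resp.headers.get("Location"))   # expected: the matching URL on the .co domain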

    Read the article

  • Renaming site, moving domain after 3 yrs. Should I reconsider?

    - by user6162
    After I recently announced it, one of my readers with a background in website management sent me an email saying that I should reconsider moving my domain from [keyword]news.org to [same keyword]lab.com. It is a content-heavy news site around products that I hope to eventually build into a more comprehensive authority for the [keyword] industry I'm covering, with B2B services, merchandise, etc. The current domain is a bit generic and I think the new one will be more marketable. Relevant stats:

        320k-350k pageviews/month
        Google brings in 39% and Yahoo/Bing 4% of traffic
        10-11% of monthly search strings contain " news"

        SEOmoz Open Site Explorer stats:
        Page authority: 65/100
        Domain authority: 58/100
        Linking root domains: 226
        Total links: 35,700

    I am familiar with what I need to do as far as 301 redirects, etc. I was assuming that I'd be OK after following Google's recommended procedures, but now I am not so sure.

    Read the article

  • Is there a problem if I don't set up Google Analytics on my website?

    - by sagar
    Is there a problem if I don't set up Google Analytics on my website? I have read a lot of articles describing the use of Google Analytics, but I have not added it to my site. In cPanel I can check things like stats, unique visitors, etc., and Google Analytics shows almost the same data. But if I want a sponsor for my site, is it necessary to have Google Analytics, or can the cPanel stats be shown to the sponsor? Thanks for reading! I hope I get an answer!

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running in a way I think I want. However, it is not load balancing the web requests it receives. All requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below - if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, I have HAProxy running on the same host as the first server in the Apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly. The haproxy?stats page correctly reports servers that are up and down; I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs on several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server on my local device, but when I combine it with a cron job my ssh times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server. The backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in the terminal, like this:

        rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the ssh connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in the terminal, but not when used by a cron job. Can anybody explain this?

    Read the article

  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    Why hasn't anything changed in the --stats output, given that only the permissions and the timestamps need to be updated and the full files don't need to be copied?

    Read the article

  • HAproxy with MySQL Master-Master Replication incredibly slow

    - by Yayap
    I have two MySQL servers in multi-master mode, with an HAProxy machine for simple load balancing/redundancy. When I am connected to one of the servers directly and try to update about 100,000 entries, it is completed, including replication, in about half a minute. When connecting through the proxy it usually takes over three whole minutes. Is it normal to have that type of latency? Is something amiss with my proxy configuration (included below)? This is getting really frustrating, as I assumed the proxy would do some sort of load balancing, or at least have little to no overhead.

        #---------------------------------------------------------------------
        # Example configuration for a possible web application. See the
        # full configuration options online.
        #
        #   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
        #
        #---------------------------------------------------------------------

        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log         127.0.0.1 local2
            # chroot    /var/lib/haproxy
            # pidfile   /var/run/haproxy.pid
            maxconn     4096
            user        haproxy
            group       haproxy
            daemon
            #debug
            #quiet

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode        tcp
            log         global
            #option     tcplog
            option      dontlognull
            option      tcp-smart-accept
            option      tcp-smart-connect
            #option     http-server-close
            #option     forwardfor except 127.0.0.0/8
            #option     redispatch
            retries     3
            #timeout    http-request 10s
            #timeout    queue 1m
            timeout     connect 400
            timeout     client 500
            timeout     server 300
            #timeout    http-keep-alive 10s
            #timeout    check 10s
            maxconn     2000

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option tcpka
            option httpchk
            server db01 192.168.15.118:3306 weight 1 inter 1s rise 1 fall 1
            server db02 192.168.15.119:3306 weight 1 inter 1s rise 1 fall 1

    Read the article

  • Monitoring memcached with plink

    - by kojiro
    I need a telnet client that can take commands from a file or stdin so I can do some quick-and-dirty automatic monitoring of memcached. I thought plink would be good for this, but it seems to be doing something beyond what I need: If I telnet into localhost 11211 and write stats, I get the memcached stats, like so:

        $ telnet localhost 11211
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        stats
        STAT pid 25099
        STAT uptime 91182
        STAT time 1349191864
        STAT version 1.4.5
        STAT pointer_size 64
        STAT rusage_user 3.570000
        STAT rusage_system 2.740000
        STAT curr_connections 5
        STAT total_connections 23
        STAT connection_structures 11
        STAT cmd_get 0
        STAT cmd_set 0
        STAT cmd_flush 0
        STAT get_hits 0
        STAT get_misses 0
        STAT delete_misses 0
        STAT delete_hits 0
        STAT incr_misses 0
        STAT incr_hits 0
        STAT decr_misses 0
        STAT decr_hits 0
        STAT cas_misses 0
        STAT cas_hits 0
        STAT cas_badval 0
        STAT auth_cmds 0
        STAT auth_errors 0
        STAT bytes_read 82184
        STAT bytes_written 7210
        STAT limit_maxbytes 67108864
        STAT accepting_conns 1
        STAT listen_disabled_num 0
        STAT threads 4
        STAT conn_yields 0
        STAT bytes 0
        STAT curr_items 0
        STAT total_items 0
        STAT evictions 0
        STAT reclaimed 0
        END

    But with plink, I get an odd error. I'm using this command:

        watch -n 30 plink -v -telnet -P 11211 127.0.0.1 <<< $'\nstats'

    The first time through I get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        ERROR
        STAT pid 25099
        STAT uptime 91245
        STAT time 1349191927
        STAT version 1.4.5
        …
        END

    But when watch repeats the command I just get:

        Looking up host "127.0.0.1"
        Connecting to 127.0.0.1 port 11211
        client: WILL NAWS
        client: WILL TSPEED
        client: WILL TTYPE
        client: WILL NEW_ENVIRON
        client: DO ECHO
        client: WILL SGA
        client: DO SGA
        Failed to connect to 127.0.0.1: Connection reset by peer
        Connection reset by peer
        FATAL ERROR: Connection reset by peer

    What is plink doing here that is different from normal telnet? How should I be going about this? (I'm not married to plink, but I need a way to continuously send simple telnet commands to memcached without writing a full-fledged perl script.)
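
    As an aside, one lightweight alternative to telnet/plink for this kind of polling is a few lines of raw-socket code. The sketch below (Python 3; host and port taken from the question) is not what plink does, just another way to issue the same stats command and read the reply up to the trailing END:

        import socket

        def memcached_stats(host="127.0.0.1", port=11211):
            # open a plain TCP connection, send the text-protocol "stats" command
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall(b"stats\r\n")
                data = b""
                while not data.endswith(b"END\r\n"):
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    data += chunk
            return data.decode()

        if __name__ == "__main__":
            print(memcached_stats())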

    Read the article

  • Web Safe Area (optimal resolution) for web app design

    - by M.A.X
    I'm in the process of designing a new web app and I'm wondering for what 'web safe area' should I optimize the app layout and design. I did some investigation and thinking on my own but wanted to share this to see what the general opinion is. Here is what I found: Optimal Display Resolution: w3schools web stats seems to be the most referenced source (however they state that these are results from their site and is biased towards tech savvy users) http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services) StatCounter Global Stats Display Resolution (Stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month collected from across the StatCounter network of more than 3 million websites) NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm, they get data from browsers of site visitors to their on-demand network of live stats customers. The data is compiled from approximately 160 million visitors per month) Display Resolution Summary: There is a bit of variation between the above sources but in general as of Jan 2011 looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these with 15-20% of total web, depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N.American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those number. My recommendation would be to optimize around the 1280x768 constraint (*note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution). Browser + OS Constraints: To further add to the constraints we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7): Win7 has the biggest taskbar footprint at a height of 40px (XP's and Vista's is 30px) The default IE8 view uses up 25px at the bottom of the screen with the status bar and a further 120px at the top of the screen with the windows title bar and the browser UI (assuming the default 'favorites' toolbar is present, it would instead be 91px without the favorites toolbar). Assuming no scrollbar, we also loose a total of 4px horizontally for the window outline. This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583 Is this a correct line of thinking? I tried to Google some design best practices but most still talk about designing around 1024x768 which seems to be quickly disappearing. Any help on this would be greatly appreciated! Thanks.
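
    To make the arithmetic above easier to re-check, here is the same calculation written out in Python. The pixel figures are simply the assumptions already stated in the text (Win7 taskbar, IE8 title bar and UI with the favorites toolbar, status bar, no scrollbar):

        screen_w, screen_h = 1280, 768
        taskbar_h          = 40    # Windows 7 taskbar
        ie_top_chrome_h    = 120   # window title bar + browser UI with favorites toolbar
        ie_status_bar_h    = 25
        window_border_w    = 4     # window outline, assuming no scrollbar

        safe_w = screen_w - window_border_w                                 # 1276
        safe_h = screen_h - taskbar_h - ie_top_chrome_h - ie_status_bar_h   # 583
        print(safe_w, safe_h)  # -> 1276 583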

    Read the article

  • How to calculate cointegrations of two lists?

    - by Damiano
    Hello everybody! Thank you in advance for your help! I have two lists with some stock prices, for example:

        a = [10.23, 11.65, 12.36, 12.96]
        b = [5.23, 6.10, 8.3, 4.98]

    I can calculate the correlation of these two lists with:

        import scipy.stats
        scipy.stats.pearsonr(a, b)[0]

    But I haven't found a method to calculate the cointegration of two lists. Could you give me some advice? Thank you very much!
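
    For what it's worth, a minimal sketch of one common approach (an Engle-Granger cointegration test) using the third-party statsmodels package, which is an assumption on my part since scipy itself has no such function. Synthetic series are used for illustration because the four-element toy lists above are far too short for the test to be meaningful:

        import numpy as np
        import statsmodels.tsa.stattools as ts

        # synthetic example: b follows a random walk, a is b plus stationary noise,
        # so the two series are cointegrated by construction
        rng = np.random.default_rng(0)
        b = np.cumsum(rng.normal(size=500))
        a = b + rng.normal(scale=0.5, size=500)

        t_stat, p_value, crit_values = ts.coint(a, b)
        print(p_value)   # a small p-value suggests the two series are cointegrated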

    Read the article

  • Python 3.2: How to pass a dictionary into str.format()

    - by Robert Dailey
    I've been reading the Python 3.2 docs about string formatting but it hasn't really helped me with this particular problem. Here is what I'm trying to do:

        stats = { 'copied': 5, 'skipped': 14 }
        print( 'Copied: {copied}, Skipped: {skipped}'.format( stats ) )

    The above code will not work because the format() call is not reading the dictionary values and using those in place of my format placeholders. How can I modify my code to work with my dictionary?
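
    For reference, a short sketch of the two usual ways to make format() read a mapping (both are standard Python; str.format_map is new in 3.2):

        stats = { 'copied': 5, 'skipped': 14 }

        # unpack the dict into keyword arguments...
        print('Copied: {copied}, Skipped: {skipped}'.format(**stats))

        # ...or hand the mapping over directly (Python 3.2+)
        print('Copied: {copied}, Skipped: {skipped}'.format_map(stats))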

    Read the article

  • How to change cpufreq settings in Kubuntu

    - by Mr Woody
    I have been using Kubuntu, and I would like to change the cpufreq settings. My understanding is that there is no applet for that, and I would have to do it with a script. So I run a command like this:

        sudo cpufreq-set -g userspace -c 0 -d 800Mhz -u 1200Mhz

    and when I type cpufreq-info, I get:

        cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
        Report errors and bugs to [email protected], please.
        analyzing CPU 0:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1
          CPUs which need to have their frequency coordinated by software: 0
          maximum transition latency: 10.0 us.
          hardware limits: 800 MHz - 2.50 GHz
          available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 800 MHz and 1.20 GHz. The governor "userspace" may decide which speed to use within this range.
          current CPU frequency is 1.20 GHz.
          cpufreq stats: 2.50 GHz:70.06%, 2.50 GHz:0.97%, 2.00 GHz:4.85%, 1.60 GHz:0.35%, 1.20 GHz:2.89%, 800 MHz:20.88% (193873)
        analyzing CPU 1:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1
          CPUs which need to have their frequency coordinated by software: 1
          maximum transition latency: 10.0 us.
          hardware limits: 800 MHz - 2.50 GHz
          available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 2.00 GHz and 2.00 GHz. The governor "performance" may decide which speed to use within this range.
          current CPU frequency is 2.00 GHz.
          cpufreq stats: 2.50 GHz:83.43%, 2.50 GHz:1.03%, 2.00 GHz:4.28%, 1.60 GHz:0.01%, 1.20 GHz:1.74%, 800 MHz:9.50% (3208)

    which shows that everything worked well (on CPU 0). The problem is that if I run cpufreq-info again after a few minutes, I get:

        cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
        Report errors and bugs to [email protected], please.
        analyzing CPU 0:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1
          CPUs which need to have their frequency coordinated by software: 0
          maximum transition latency: 10.0 us.
          hardware limits: 800 MHz - 2.50 GHz
          available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range.
          current CPU frequency is 800 MHz.
          cpufreq stats: 2.50 GHz:69.73%, 2.50 GHz:0.97%, 2.00 GHz:4.83%, 1.60 GHz:0.35%, 1.20 GHz:2.92%, 800 MHz:21.20% (193880)
        analyzing CPU 1:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1
          CPUs which need to have their frequency coordinated by software: 1
          maximum transition latency: 10.0 us.
          hardware limits: 800 MHz - 2.50 GHz
          available frequency steps: 2.50 GHz, 2.50 GHz, 2.00 GHz, 1.60 GHz, 1.20 GHz, 800 MHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 800 MHz and 800 MHz. The governor "performance" may decide which speed to use within this range.
          current CPU frequency is 800 MHz.
          cpufreq stats: 2.50 GHz:82.94%, 2.50 GHz:1.03%, 2.00 GHz:4.33%, 1.60 GHz:0.01%, 1.20 GHz:1.73%, 800 MHz:9.96% (3215)

    so it looks like some other process changed the settings. Does anyone know how to fix this? I also tried many different settings, but I get similar behavior.

    Read the article

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser. How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome’s Task Manager using the Page Menu, going to Developer, and selecting Task manager… Or by right clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the “keyboard ninjas”. Sitting idle as shown above here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking then you can easily modify the information that is available by right clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory. Using the about:memory Page to View Memory Usage Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds Link in the lower left corner of the Task Manager Window. Focusing on the four distinct areas you can see the exact version of Chrome that is currently installed on your system… View the Memory & Virtual Memory statistics for Chrome… Note: If you have other browsers running at the same time you can view statistics for them here too. See a list of the Processes currently running… And the Memory & Virtual Memory statistics for those processes. The Difference with the Extensions Disabled Just for fun we decided to disable all of the extension in our test browser… The Task Manager Window is looking rather empty now but the memory consumption has definitely seen an improvement. Comparing Memory Usage for Two Extensions with Similar Functions For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you are wanting to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial”(see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly both were nearly identical in the amount of memory being used. Purging Memory Perhaps you like the idea of being able to “purge” some of that excess memory consumption. With a simple command switch modification to Chrome’s shortcut(s) you can add a Purge Memory Button to the Task Manager Window as shown below.  Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption. Conclusion We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources every little bit definitely helps out. 

    Read the article

  • Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup]

    - by Gopinath
    The ICC World Cup 2011 started with a bang today, and the first match between India and Bangladesh was a cracker. India thrashed Bangladesh by a huge margin, thanks to Sehwag scoring an entertaining 175 runs off 140 balls. At the moment it's very clear that the whole of India is gripped by cricket fever, as are fans across the globe. A couple of days ago we blogged about how to watch live streaming of the ICC cricket world cup online for free, as well as the top 10 websites for keeping track of live scores on your computer. What about tracking live cricket scores on mobile phones? Here is our guide to the top mobile apps available for Symbian (Nokia), Android, iOS and Windows mobiles. By the way, we are covering free apps only in this post. Why waste money when free apps are available?

    SnapTu – Symbian Mobile App: SnapTu is a multi-feature application that lets you track live cricket scores, read the latest news and check stats published on CricInfo. SnapTu has a tie-up with CricInfo, and accessing all of the CricInfo website on your mobile is very easy. Along with live scores, SnapTu also lets you access your Facebook, Twitter and Picasa on your mobile. This is my favourite application for tracking cricket on Symbian mobiles. Download SnapTu for your mobile here.

    Yahoo! Cricket – Symbian & iOS App: Yahoo! Cricket Scores is another dedicated application for catching up with live scores and news on your Nokia mobile and iPhone. This application is developed by Yahoo!, the web giant as well as the official partner of the ICC. Features at a glance include Cricket (a summary page with the latest scores, upcoming matches and details of recent matches), News (sections devoted to the latest news, interviews and photos) and Statistics (the latest team and player stats). Download Yahoo! Cricket for Symbian phones; download Yahoo! Cricket for iOS.

    ESPN CricInfo – Android and iOS App: Is there any site better than CricInfo for catching up with the latest cricket news and live scores? I say no. ESPN CricInfo is the best website available on the web for up-to-the-minute cricket information with in-depth analysis from cricket experts. The live commentary provided by the CricInfo site is as enjoyable as watching live cricket on TV. The CricInfo folks have official applications for Android mobiles and iOS devices, and accessing ball-by-ball updates on these applications is a joy. Download the ESPN CricInfo app: Android version, iPhone version.

    NDTV Cricket – Android, iOS and BlackBerry App: The NDTV Cricket app is developed by NDTV, the most popular English TV news channel in India. This application provides live coverage of international and domestic cricket (Test, ODI & T20) along with the latest news, photos, videos and stats. It is available for iOS devices (iPhone, iPad, iPod Touch), Android mobiles and BlackBerry devices. Download NDTV Cricket for iOS here & here; download NDTV apps for the rest of the OSs.

    ECB Cricket – Symbian, iOS & Android App: If you are a UK citizen then this may be the right application to download for live cricket score updates as well as the latest news about the England Cricket Board. ECB Cricket is an official application of the England Cricket Board. Download ECB Cricket: Android version, iPhone version, Symbian version.

    Are there any better apps that we missed in this list?

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is the appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway... One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (who are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id
            transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent
            transient private Key parent; // fake parent which contains target id
            @Transient
            int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously this hits the concurrent-writes limit of the Datastore. I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID

            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with the shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week - constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting? Thank you very much in advance for your thoughts.
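
    As an aside, the hourly aggregation described above boils down to keeping a running average and a hit count per target. Below is a minimal sketch of that bookkeeping, written in Python purely for brevity; the names are illustrative and not part of the actual project:

        from collections import defaultdict

        class HourShard:
            """Accumulates per-target stats for one hour, as described above."""
            def __init__(self, start_date):
                self.start_date = start_date
                self.stats = defaultdict(lambda: {"avg_percent": 0.0, "hit_count": 0})

            def update(self, hits):
                # hits: mapping of target_id -> percent_hit for one finished survey
                for target_id, percent in hits.items():
                    s = self.stats[target_id]
                    # incremental running average: new_avg = (old_avg * n + x) / (n + 1)
                    s["avg_percent"] = (s["avg_percent"] * s["hit_count"] + percent) / (s["hit_count"] + 1)
                    s["hit_count"] += 1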

    Read the article

  • How can I gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are: after a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (who are identified by IDs). I call them "target IDs". The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey. Some surveys may have thousands of target/reference answerers. So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id
            transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent
            transient private Key parent; // fake parent which contains target id
            @Transient
            int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would give a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering it evenly over 24 hours, it would give us ~1040 writes/second. Obviously this hits the concurrent-writes limit of the Datastore. I decided I'll collect data for one hour and save that; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID

            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with the shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :) It raises the following questions, on which I'd like to hear your opinion: Can I do this without using a backend instance? What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week - constantly running 10 backend instances is too expensive. What do I do with data from clients while a backend instance is dead/restarting?

    Read the article

  • Multiple Base Addresses and Multiple Endpoints in WCF

    - by mnhab
    I'm using two bindings, TCP and HTTP. I want to expose mex data on both bindings. What I want is for the mexHttpBinding to expose only the HTTP services while the mexTcpBinding exposes only the TCP services. Or is it possible to access the stats service only from the HTTP binding and the eventLogging service only from TCP? For example, for TCP I should only have:

        net.tcp://localhost:9001/ABC/mex
        net.tcp://localhost:9001/ABC/eventLogging

    and for HTTP:

        http://localhost:9002/ABC/stats
        http://localhost:9002/ABC/mex

    When I connect with any of the base addresses (using the WCF Test Client) I'm able to access all the services. For example, when I connect with net.tcp://localhost:9001/ABC I'm able to use the services which are offered on the HTTP binding. Why is that so?

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="ABCServiceBehavior" name="ABC.Data.DataServiceWCF">
              <endpoint address="eventLogging" binding="netTcpBinding" contract="ABC.Campaign.IEventLoggingService" />
              <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" />
              <endpoint address="stats" binding="basicHttpBinding" contract="ABC.Data.IStatsService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:9001/ABC" />
                  <add baseAddress="http://localhost:9002/ABC" />
                </baseAddresses>
              </host>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="ABCServiceBehavior">
                <serviceMetadata httpGetEnabled="false" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    Read the article

  • Fitting Gaussian KDE in numpy/scipy in Python

    - by user248237
    I am fitting a Gaussian kernel density estimator to a variable that is the difference of two vectors, called "diff", as follows: gaussian_kde_covfact(diff, smoothing_param), where gaussian_kde_covfact is defined as:

        class gaussian_kde_covfact(stats.gaussian_kde):
            def __init__(self, dataset, covfact = 'scotts'):
                self.covfact = covfact
                scipy.stats.gaussian_kde.__init__(self, dataset)

            def _compute_covariance_(self):
                '''not used'''
                self.inv_cov = np.linalg.inv(self.covariance)
                self._norm_factor = sqrt(np.linalg.det(2*np.pi*self.covariance)) * self.n

            def covariance_factor(self):
                if self.covfact in ['sc', 'scotts']:
                    return self.scotts_factor()
                if self.covfact in ['si', 'silverman']:
                    return self.silverman_factor()
                elif self.covfact:
                    return float(self.covfact)
                else:
                    raise ValueError, \
                        'covariance factor has to be scotts, silverman or a number'

            def reset_covfact(self, covfact):
                self.covfact = covfact
                self.covariance_factor()
                self._compute_covariance()

    This works, but there is an edge case where diff is a vector of all 0s. In that case, I get the error:

        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/stats/kde.py", line 334, in _compute_covariance
            self.inv_cov = linalg.inv(self.covariance)
        File "/srv/pkg/python/python-packages/python26/scipy/scipy-0.7.1/lib/python2.6/site-packages/scipy/linalg/basic.py", line 382, in inv
            if info>0: raise LinAlgError, "singular matrix"
        numpy.linalg.linalg.LinAlgError: singular matrix

    What's a way to get around this? In this case, I'd like it to return a density that's essentially peaked completely at a difference of 0, with no mass everywhere else. Thanks.
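
    One possible workaround, sketched below under the assumption that the degenerate all-zeros case can be approximated by an extremely narrow normal density centred on the constant value (the function name and structure are illustrative, not from the original code):

        import numpy as np
        from scipy import stats

        def fit_density(diff, covfact='scotts'):
            diff = np.asarray(diff, dtype=float)
            if np.ptp(diff) == 0:
                # All values identical: the covariance matrix is singular, so fall
                # back to a very narrow normal centred on the constant value,
                # i.e. an approximate point mass at that difference.
                return stats.norm(loc=diff[0], scale=1e-9).pdf
            return gaussian_kde_covfact(diff, covfact)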

    Read the article

  • JSONP request using jquery $.getJSON not working on well formed JSON.

    - by Antti
    I'm not sure whether it is possible with the URL I am trying. Please see this URL: http://www.heiaheia.com/voimakaksikko/stats.json - it always serves the same padding function, "voimakaksikkoStats". It is well-formed JSON, but I have not been able to load it from the remote server. Does it need some work on the server side, or can it be loaded with JavaScript? I think the problem has got to have something to do with that callback function... jQuery is not a requirement, but it would be nice. This (callback=voimakaksikkoStats) returns nothing (Firebug - Net - Response), and the alert doesn't fire:

        $.getJSON("http://www.heiaheia.com/voimakaksikko/stats.json?callback=voimakaksikkoStats", function(data){
            alert(data);
        })

    but this (callback=?):

        $.getJSON("http://www.heiaheia.com/voimakaksikko/stats.json?callback=?", function(data){
            alert(data);
        })

    returns:

        voimakaksikkoStats({"Top5Sports":[],"Top5Tests":{"8":"No-exercise ennuste","1":"Painoindeksi","2":"Vy\u00f6t\u00e4r\u00f6n ymp\u00e4rys","10":"Cooperin testi","4":"Etunojapunnerrus"},"Top5CitiesByTests":[],"Top5CitiesByExercises":[],"ExercisesLogged":0,"Top5CitiesByUsers":[""],"TestsTaken":22,"RegisteredUsers":1});

    But I cannot access it... In both examples the alert never fires. Can someone help?

    Read the article

  • Alternatives to LINQ to SQL on highly loaded pages

    - by Alex
    To begin with, I LOVE LINQ TO SQL. It's so much easier to use than direct querying. But there's one great problem: it doesn't work well on highly loaded requests. I have some actions in my ASP.NET MVC project that are called hundreds of times every minute. I used to have LINQ to SQL there, but since the amount of requests is gigantic, LINQ TO SQL almost always returned "Row not found or changed" or "X of X updates failed". And it's understandable. For instance, I have to increase some value by one with every request:

        var stat = DB.Stats.First();
        stat.Visits++;
        // ....
        DB.SubmitChanges();

    But while ASP.NET was working on those //... instructions, the stats.Visits value stored in the table got changed. I found a solution: I created a stored procedure

        UPDATE Stats SET Visits=Visits+1

    It works well. Unfortunately, I'm now getting more and more moments like that, and it sucks to create stored procedures for all cases. So my question is, how do I solve this problem? Are there any alternatives that can work here? I hear that Stack Overflow works with LINQ to SQL, and it's more loaded than my site.

    Read the article
