Search Results

Search found 5572 results on 223 pages for 'cpu'.

Page 200/223

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code:

        class Program
        {
            static volatile bool flag1;
            static volatile bool flag2;
            static volatile int val;

            static void Main(string[] args)
            {
                for (int i = 0; i < 10000 * 10000; i++)
                {
                    if (i % 500000 == 0)
                    {
                        Console.WriteLine("{0:#,0}", i);
                    }
                    flag1 = false;
                    flag2 = false;
                    val = 0;
                    Parallel.Invoke(A1, A2);
                    if (val == 0)
                        throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2));
                }
            }

            static void A1()
            {
                flag2 = true;
                if (flag1) val = 1;
            }

            static void A2()
            {
                flag1 = true;
                if (flag2) val = 2;
            }
        }

    It fails! The main question is: why? I suppose that the CPU reorders the flag1 = true; write with the if (flag2) statement, but the variables flag1 and flag2 are marked as volatile fields...
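
    A common explanation for exactly this pattern: a volatile write has release semantics and a volatile read has acquire semantics, but that combination still allows a store to one field to be reordered with a later load of a different field (store-load reordering). If that is what's happening here, a full fence between each write and read should make the exception disappear; a sketch using Thread.MemoryBarrier from System.Threading:

        static void A1()
        {
            flag2 = true;
            Thread.MemoryBarrier(); // full fence: the store to flag2 completes before flag1 is read
            if (flag1) val = 1;
        }

        static void A2()
        {
            flag1 = true;
            Thread.MemoryBarrier(); // same fence on the other side
            if (flag2) val = 2;
        }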

    Read the article

  • implement SIMD in C++

    - by Hristo
    I'm working on a bit of code and I'm trying to optimize it as much as possible; basically, get it running under a certain time limit. The following makes the call...

        static affinity_partitioner ap;
        parallel_for(blocked_range<size_t>(0, T), LoopBody(score), ap);

    ... and the following is what gets executed:

        void operator()(const blocked_range<size_t> &r) const
        {
            int temp;
            int i;
            int j;
            size_t k;
            size_t begin = r.begin();
            size_t end = r.end();

            for (k = begin; k != end; ++k) {      // for each trainee
                temp = 0;
                for (i = 0; i < N; ++i) {         // for each sample
                    int trr = trRating[k][i];
                    int ei = E[i];
                    for (j = 0; j < ei; ++j) {    // for each expert
                        temp += delta(i, trr, exRating[j][i]);
                    }
                }
                myscore[k] = temp;
            }
        }

    I'm using Intel's TBB to optimize this, but I've also been reading about SIMD and SSE2 and things of that nature. So my question is: how do I store the variables (i, j, k) in registers so that they can be accessed faster by the CPU? I think the answer has to do with implementing SSE2 or some variation of it, but I have no idea how to do that. Any ideas? Thanks, Hristo
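
    As an aside on what SIMD actually buys you: an optimizing compiler almost certainly keeps the loop counters (i, j, k) in registers already; what SSE2 vectorizes is the data in the innermost loop, processing several elements per instruction. Purely to illustrate the idea (in C#, via System.Numerics, not the C++/TBB code above), a vectorized summation looks like this:

        using System;
        using System.Numerics;

        static int VectorSum(int[] data)
        {
            int width = Vector<int>.Count;        // e.g. 4 ints per 128-bit SSE2 register
            var acc = Vector<int>.Zero;
            int i = 0;
            for (; i <= data.Length - width; i += width)
                acc += new Vector<int>(data, i);  // one SIMD add covers 'width' elements
            int sum = 0;
            for (int lane = 0; lane < width; lane++)
                sum += acc[lane];                 // horizontal sum of the lanes
            for (; i < data.Length; i++)
                sum += data[i];                   // scalar tail
            return sum;
        }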

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    - I have a range of Windows and Linux machines. Some machines are EBS backed, whereas others are S3 backed.
    - I need to be able to persist a machine (put it to sleep), preferably keeping all settings as I had them when the machine was running.
    - I need to be able to quickly wake a machine from sleep [ideally with an SLA of less than 2 min to turn on, if such an SLA is available from Amazon].

    Here's the stuff that confuses me:

    - AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
    - I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
    - S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
    - I can't tell immediately which machines are EBS backed and which are S3 backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS or S3 backed.

    Advice?

    Read the article

  • Strategy for animating a lot of "LEDs" - threads? UIView animations? NSOperation? (iPhone)

    - by RickiG
    Hi. I have to build several different views containing 72 LED lights. I built an LED class so I can loop through the LEDs and set them to different colors (Green, Red, Orange, Blue, None, etc.). The LED then loads the appropriate .png. This works fine; I loop over the LEDs and set them:

        for (LED *l in self.ledArray) {
            [l display:Green];
        }

    I simply loop as shown above and inside the LED there is a switch case that does the correct logic. Now I know that at some point they will need to not just turn on/off or change color, but will have to turn on with a small delay, like an equalizer. I have 5-10 views containing the 72 LEDs and I would like to achieve the above with the minimum amount of memory/CPU strain. If these were actual LEDs and a microcontroller I would use sleep(100) or similar in the loop, but I would really like to avoid stuff like that for obvious reasons. I was thinking that doing a performSelector on a thread with a delay would be really consuming; so would a UIView animation changing the alpha; and NSOperation would also be a lot of lifting for a small feature. Is there both an efficient and clever way to go about this? Thanks for any inspiration given :)

    Read the article

  • Using SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository, the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks!

    Update: The repository and development site are located on the same server, so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a hi-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.

    Read the article

  • Call method immediately after object construction in LINQ query

    - by Steffen
    I've got some objects which implement this interface:

        public interface IRow
        {
            void Fill(DataRow dr);
        }

    Usually when I select something out of the db, I go:

        public IEnumerable<IRow> SelectSomeRows()
        {
            DataTable table = GetTableFromDatabase();
            foreach (DataRow dr in table.Rows)
            {
                IRow row = new MySQLRow(); // Disregard the MySQLRow type, it's not important
                row.Fill(dr);
                yield return row;
            }
        }

    Now with .NET 4, I'd like to use AsParallel, and thus LINQ. I've done some testing on it, and it speeds things up a lot (IRow.Fill uses reflection, so it's hard on the CPU). Anyway, my problem is: how do I go about creating a LINQ query which calls Fill as part of the query, so it's properly parallelized? For testing performance I created a constructor which took the DataRow as an argument; however, I'd really love to avoid this if at all possible. With the constructor in place, it's obviously simple enough:

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return from DataRow dr in table.Rows.AsParallel()
                   select new MySQLRow(dr);
        }

    However, like I said, I'd really love to be able to just stuff my Fill method into the LINQ query, and thus not need the constructor overload.
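
    For the record, one way to keep Fill and the parallelism without the extra constructor is method syntax with a statement lambda; a sketch (Cast<DataRow> comes from System.Linq, DataTable from System.Data):

        public IEnumerable<IRow> SelectSomeRowsParallel()
        {
            DataTable table = GetTableFromDatabase();
            return table.Rows.Cast<DataRow>()
                        .AsParallel()
                        .Select(dr =>
                        {
                            IRow row = new MySQLRow();
                            row.Fill(dr); // Fill runs on the worker threads, in parallel
                            return row;
                        });
        }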

    Read the article

  • Should a setter return immediately if assigned the same value?

    - by Andrei Rinea
    In classes that implement INotifyPropertyChanged I often see this pattern:

        public string FirstName
        {
            get { return _customer.FirstName; }
            set
            {
                if (value == _customer.FirstName)
                    return;
                _customer.FirstName = value;
                base.OnPropertyChanged("FirstName");
            }
        }

    Precisely the lines

        if (value == _customer.FirstName)
            return;

    are bothering me. I've often done this, but I am not at all sure it's needed, nor that it's good. After all, if a caller assigns the very same value, I don't want to reassign the field and, especially, notify my subscribers that the property has changed when, semantically, it didn't. Except for saving some CPU/RAM/etc. by freeing the UI from updating something that will probably look the same on screen (or whatever the medium is), what do we obtain? Could some people force a refresh by reassigning the same value to a property (not that this would be good practice, however)?

    1. Should we do it or shouldn't we?
    2. Why?
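
    If the guard stays, it can at least be written once. A small helper along these lines is a common pattern; a sketch, assuming an OnPropertyChanged(string) method like the one above (EqualityComparer<T> comes from System.Collections.Generic):

        protected bool SetProperty<T>(ref T field, T value, string propertyName)
        {
            if (EqualityComparer<T>.Default.Equals(field, value))
                return false; // same value: no reassignment, no notification
            field = value;
            OnPropertyChanged(propertyName);
            return true;
        }

    A caller that genuinely wants to force a refresh can then call OnPropertyChanged directly rather than abusing the setter.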

    Read the article

  • Linux time-based sampling profiler

    - by Caspin
    Short version: is there a good time-based sampling profiler for Linux?

    Long version: I generally use OProfile to optimize my applications. I recently found a shortcoming that has me wondering. The problem was a tight loop spawning c++filt to demangle a C++ name. I only stumbled upon the code by accident while chasing down another bottleneck. OProfile didn't show anything unusual about the code, so I almost ignored it, but my code sense told me to optimize the call and see what happened. I changed the popen of c++filt to abi::__cxa_demangle. The runtime went from more than a minute to a little over a second. About a 60x speed-up.

    Is there a way I could have configured OProfile to flag the popen call? As the profile data sits now, OProfile thinks the bottleneck was the heap and std::string calls (which, BTW, once optimized, dropped the runtime to less than a second; more than a 2x speed-up). Here is my OProfile configuration:

        $ sudo opcontrol --status
        Daemon not running
        Event 0: CPU_CLK_UNHALTED:90000:0:1:1
        Separate options: library
        vmlinux file: none
        Image filter: /path/to/excutable
        Call-graph depth: 7
        Buffer size: 65536

    Is there another profiler for Linux that could have found the bottleneck? I suspect the issue is that OProfile only logs its samples to the currently running process. I'd like it to always log its samples to the process I'm profiling, so if the process is currently switched out (blocking on IO or a popen call) OProfile would just place its sample at the blocked call. If I can't fix this, OProfile will only be useful when the executable is pushing near 100% CPU. It can't help with executables that have inefficient blocking calls.

    Read the article

  • JAVASCRIPT ENABLED [closed]

    - by kirchoffs415
    Hi, I hope somebody can help. I keep getting the following message when I log on:

        Your Javascript is disabled. Limited functionality is available.

    It will stay for maybe a day, sometimes two. I have uninstalled Javascript and reinstalled, but still the same. I am using Chrome. Any help would be gratefully received. Many thanks, Dominic

    P.S. My system spec is as follows:

        OS Name: Microsoft® Windows Vista™ Home Premium
        Version: 6.0.6002 Service Pack 2 Build 6002
        Other OS Description: Not Available
        OS Manufacturer: Microsoft Corporation
        System Name: DOM-PC
        System Manufacturer: Dell Inc.
        System Model: Inspiron 1545
        System Type: X86-based PC
        Processor: Pentium(R) Dual-Core CPU T4200 @ 2.00GHz, 2000 Mhz, 2 Core(s), 2 Logical Processor(s)
        BIOS Version/Date: Dell Inc. A05, 25/02/2009
        SMBIOS Version: 2.4
        Windows Directory: C:\Windows
        System Directory: C:\Windows\system32
        Boot Device: \Device\HarddiskVolume3
        Locale: United Kingdom
        Hardware Abstraction Layer: Version = "6.0.6002.18005"
        User Name: DOM-PC\DOM
        Time Zone: GMT Standard Time
        Installed Physical Memory (RAM): 3.00 GB
        Total Physical Memory: 2.96 GB
        Available Physical Memory: 1.38 GB
        Total Virtual Memory: 5.89 GB
        Available Virtual Memory: 4.25 GB
        Page File Space: 3.00 GB
        Page File: C:\pagefile.sys

    Read the article

  • querying a huge database table takes too much time in mysql

    - by Vijay
    Hi all, I am running SQL queries on a MySQL db table that has 110M+ unique records for a whole day.

    Problem: whenever I run any query with a "where" clause, it takes at least 30-40 mins. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me on optimizing/restructuring the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-core dual CPU 3GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/
        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which is probably of limited applicability, and limiting the other parameters with 'take' should be just as lazy in normal juxt.

    Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem/cpu/object count/compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))
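
    One thing the REPL timings strongly suggest: map returns an unevaluated lazy seq, so (time ...) measures only the construction of the sequence, not the application of the functions; printing the result is what forces it. The same trap exists with deferred execution in .NET LINQ; a C# sketch of the effect, offered as an analogy only:

        using System;
        using System.Diagnostics;
        using System.Linq;

        class LazyTimingDemo
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                // Deferred execution: this builds a query object; nothing is computed yet.
                var lazy = Enumerable.Range(0, 10000000).Select(i => (long)i * i);
                Console.WriteLine("build: " + sw.Elapsed); // tiny, like the lazy-juxt timing above

                sw.Restart();
                long total = lazy.Sum();                   // forcing the sequence does the real work
                Console.WriteLine("eval:  " + sw.Elapsed);
            }
        }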

    Read the article

  • Excel 2010 64-bit can't create .NET object

    - by aboes81
    I have a simple class library that I use in Excel. Here is a simplification of my class...

        using System;
        using System.Runtime.InteropServices;

        namespace SimpleLibrary
        {
            [ComVisible(true)]
            public interface ISixGenerator
            {
                int Six();
            }

            public class SixGenerator : ISixGenerator
            {
                public int Six()
                {
                    return 6;
                }
            }
        }

    In Excel 2007 I would create a macro-enabled workbook and add a module with the following code:

        Public Function GetSix()
            Dim lib As SimpleLibrary.SixGenerator
            Set lib = New SimpleLibrary.SixGenerator
            GetSix = lib.Six
        End Function

    Then in Excel I could call the function GetSix() and it would return six. This no longer works in Excel 2010 64-bit; I get a run-time error '429': ActiveX component can't create object. I tried changing the platform target to x64 instead of Any CPU, but then my code wouldn't compile unless I unchecked the "Register for COM interop" option, and doing so means my macro-enabled workbook cannot see SimpleLibrary.dll, as it is no longer registered. Any ideas how I can use my library with Excel 2010 64-bit?
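
    One explanation worth verifying: Visual Studio's "Register for COM interop" registers the class in the 32-bit registry view, so 64-bit Excel never sees it. The common workaround is to keep Any CPU and register manually with the 64-bit RegAsm; explicit GUIDs and a ProgId keep re-registration stable. A sketch (the GUIDs below are illustrative placeholders, not values to copy):

        using System;
        using System.Runtime.InteropServices;

        namespace SimpleLibrary
        {
            [ComVisible(true)]
            [Guid("11111111-2222-3333-4444-555555555555")] // placeholder GUID
            public interface ISixGenerator
            {
                int Six();
            }

            [ComVisible(true)]
            [Guid("66666666-7777-8888-9999-000000000000")] // placeholder GUID
            [ClassInterface(ClassInterfaceType.None)]
            [ProgId("SimpleLibrary.SixGenerator")]
            public class SixGenerator : ISixGenerator
            {
                public int Six() { return 6; }
            }
        }

        // Registered from an elevated prompt with the 64-bit RegAsm, e.g.:
        //   %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\RegAsm.exe SimpleLibrary.dll /codebase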

    Read the article

  • Dynamic stack allocation in C++

    - by Poni
    I want to allocate memory on the stack. I've heard of _alloca/alloca, and I understand that these are compiler-specific, which I don't like. So I came up with my own solution (which might have its own flaws) and I want you to review/improve it, so that once and for all we'll have this code working:

        /*#define allocate_on_stack(pointer, size) \
            __asm \
            { \
                mov [pointer], esp; \
                sub esp, [size]; \
            }*/

        /*#define deallocate_from_stack(size) \
            __asm \
            { \
                add esp, [size]; \
            }*/

        void test()
        {
            int buff_size = 4 * 2;
            char *buff = 0;

            __asm
            {
                // allocate
                mov [buff], esp;
                sub esp, [buff_size];
            }

            // playing with the stack-allocated memory
            for(int i = 0; i < buff_size; i++)
                buff[i] = 0x11;

            __asm
            {
                // deallocate
                add esp, [buff_size];
            }
        }

        void main()
        {
            __asm int 3h;
            test();
        }

    Compiled with VC9. What flaws do you see in it? Me, for example: I'm not sure that subtracting from ESP is the solution for "any kind of CPU". Also, I'd like to make the commented-out macros work, but for some reason I can't.

    Read the article

  • Swing GUI using JNI crashes

    - by Div
    Hi. A Java Swing application (GUI) uses JNI code to communicate with native C code. The Swing application launches properly and works fine. The GUI is used to start some customized system-level tests (IO, memory, CPU) and show their progress. The tests have to be left running at least overnight to get the results. But the next morning the GUI crashes and throws the following message. Any pointers on the source of the issue will be greatly appreciated.

    Java version: Java 1.5 / Java 1.6
    OS: Solaris 10

    Thanks, Div

    =============MESSAGES==================

        # uname -a
        SunOS Generic_127127-11 sun4v sparc SUNW,

        # An unexpected error has been detected by HotSpot Virtual Machine:
        #
        #  SIGSEGV (0xb) at pc=0xff268924, pid=9473, tid=272
        #
        # Java VM: Java HotSpot(TM) Server VM (1.5.0_14-b03 mixed mode)
        # Problematic frame:
        # C  [libc.so.1+0x68924]  strstr+0x20
        #
        # An error report file with more information is saved as hs_err_pid9473.log
        #
        # If you would like to submit a bug report, please visit:
        #   HotSpot Virtual Machine Error Reporting Page

    Another machine:

        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0xff231fd0, pid=1406, tid=180
        #
        # JRE version: 6.0_18-b07
        # Java VM: Java HotSpot(TM) Server VM (16.0-b13 mixed mode solaris-sparc )
        # Problematic frame:
        # C  [libc.so.1+0x31fd0]  strcpy+0x70
        #
        # An error report file with more information is saved as:
        # /usr/sunvts/bin/hs_err_pid1406.log

    Read the article

  • Correct Delphi compiler switches to stop in the user's code, not my component's

    - by Jeremy Mullin
    I'm modifying our VCL components so the end user's application links to our DCU files, instead of building our source code each time. We have everything working, but I want the debugger to stop in the user's code when an exception is raised. At first it would stop in our DCU and open the CPU window. I was able to prevent that by removing debug info from the DCU files. But now it still doesn't stop in the user's code (like the DevExpress libraries and others do). The following screencast is a short example. The first time, I cause an exception in the DevExpress code, and the debugger correctly stops in my button event. The second time, I cause an exception in my components, but the debugger doesn't have my button event on the call stack, and doesn't show me where the problem was. Any ideas why?

    http://screencast.com/t/NjhlOTRk

    Currently building the DCUs with these options:

        -$W+ -$D- -h -w -q

    Update: The TDataSet methods in between my component and the button event seem to cause this behavior. If I instead call a direct method of my table, I get the expected behavior. I'm guessing there isn't anything I can do about this, but I'm still curious why it happens.

    Read the article

  • floating point equality in Python and in general

    - by eric.frederich
    I have a piece of code that behaves differently depending on whether I go through a dictionary to get conversion factors or whether I use them directly. The following piece of code will print

        1.0 == 1.0 -> False

    But if you replace factors[units_from] with 10.0 and factors[units_to] with 1.0 / 2.54, it will print

        1.0 == 1.0 -> True

        #!/usr/bin/env python
        base = 'cm'
        factors = {
            'cm'        : 1.0,
            'mm'        : 10.0,
            'm'         : 0.01,
            'km'        : 1.0e-5,
            'in'        : 1.0 / 2.54,
            'ft'        : 1.0 / 2.54 / 12.0,
            'yd'        : 1.0 / 2.54 / 12.0 / 3.0,
            'mile'      : 1.0 / 2.54 / 12.0 / 5280,
            'lightyear' : 1.0 / 2.54 / 12.0 / 5280 / 5.87849981e12,
        }

        # convert 25.4 mm to inches
        val = 25.4
        units_from = 'mm'
        units_to   = 'in'

        base_value = val / factors[units_from]
        ret = base_value * factors[units_to]

        print ret, '==', 1.0, '->', ret == 1.0

    Let me first say that I am pretty sure what is going on here. I have seen it before in C, just never in Python, but since Python is implemented in C, we're seeing it. I know that floating point numbers will change values going from a CPU register to cache and back. I know that comparing what should be two equal variables will return false if one of them was paged out while the other stayed resident in a register.

    Questions:
    - What is the best way to avoid problems like this, in Python or in general?
    - Am I doing something completely wrong?

    Side note: This is obviously part of a stripped-down example, but what I'm trying to do is come up with classes for length, volume, etc. that can compare against other objects of the same class but with different units.

    Rhetorical questions:
    - If this is a potentially dangerous problem, since it makes programs behave in a non-deterministic manner, should compilers warn or error when they detect that you're checking equality of floats?
    - Should compilers support an option to replace all float equality checks with a 'close enough' function?
    - Do compilers already do this and I just can't find the information?
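
    On the "best way to avoid this" question, the usual answer is to never compare floats with == and instead use a relative-plus-absolute tolerance. A minimal sketch of the idea in C# (the thresholds are illustrative, not canonical):

        using System;

        static bool NearlyEqual(double a, double b,
                                double relTol = 1e-9, double absTol = 1e-12)
        {
            if (a == b) return true;  // handles exact matches and equal infinities
            double diff = Math.Abs(a - b);
            double scale = Math.Max(Math.Abs(a), Math.Abs(b));
            // relative tolerance for large magnitudes, absolute floor near zero
            return diff <= Math.Max(relTol * scale, absTol);
        }

        // NearlyEqual(25.4 / 10.0 * (1.0 / 2.54), 1.0)  ->  true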

    Read the article

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations with Ubuntu/Debian are heading the right way. But how can Linux be more user-friendly for us in the future? I thought that if more were automated, like in an IDE environment - e.g. typing svn would give us all the commands and a description of each command as you move between commands with your keyboard - that would be great. But that's just one example. Another is navigation in the terminal between folders: right now you have to type a lot just to jump from/to different folders. It would be great with some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU, yet they make it more user-friendly and encourage maintenance of a server rather than frightening you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers - something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today and difficult to do easy tasks. It should be easier, I think.

    Read the article

  • C# Breakpoint Weirdness

    - by Dan
    In my program I've got two data files, A and B. The data in A is static and the data in B refers back to the data in A. In order to make sure the data in B is invalidated when A is changed, I keep an identifier for each of the links, which is a long byte string identifying the data. I get this string using BitConverter on some of the important properties.

    My problem is that this scheme isn't working. I save the identifiers initially, and when I reload (with the exact same data in A) the identifiers don't match anymore. It seems BitConverter gives different results when I go to save. The really weird thing about it is: if I place a breakpoint in the save code, I can see the identifier it's writing to the file is fine, and the next load works. If I don't place a breakpoint and, say, print the identifiers to the console instead, they're totally different. It's like when my program is running at full speed the CPU messes up some instructions.

    This isn't the first time something like this has happened to me; I've seen it in other projects. What gives? Has anyone ever experienced this kind of debugging weirdness? I can't explain how stopping the program versus not stopping it can change the output. Also, it's not a hardware problem, because this happens on my laptop as well.
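
    A plausible culprit, assuming the identifiers are built from computed doubles: the 32-bit JIT can keep floating-point intermediates in 80-bit x87 registers, and a breakpoint (or an attached debugger) forces values to spill to 64-bit memory, changing the low bits; hence different BitConverter bytes at full speed. If so, quantizing before converting makes the identifier insensitive to those bits. A sketch, with an illustrative precision:

        using System;

        static byte[] StableIdBytes(double value)
        {
            // Round away the low bits that can differ between register-resident
            // (80-bit) and spilled (64-bit) intermediate results.
            double quantized = Math.Round(value, 9); // 9 digits is an illustrative choice
            return BitConverter.GetBytes(quantized);
        }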

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3

        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin=1;
            const uint64_t umax=10000000000LL;
            double sum=0.;
        #pragma omp parallel for reduction(+:sum)
            for(uint64_t u=umin; u<umax; u++)
                sum+=1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s with 1 thread.

    Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19-20x speedup and the ideal 48x speedup.

    To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program.

    What's preventing linear scaling with this simple program?

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want to check the Redis info on my PC with Node, so I use node_redis and run the info function:

        var redis = require("redis"),
            client = redis.createClient();

        client.on("connect", function () {
            client.info(function (err, reply) {
                console.log(reply);
            })
        })

    but the response is unformatted:

        #Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n

    How can I turn it into an object, like:

        {
            redis_version: 2.6.16,
            redis_git_sha1: 00000000,
            redis_git_dirty: 0,
            ......
        }

    so that I can read each property's value and get the information I need?
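
    The reply is just CRLF-separated key:value lines, with "#" marking section headers, so the parse is three steps: split on \r\n, skip blank and # lines, split each remaining line at the first colon. A sketch of that logic in C# (the same steps translate directly to JavaScript):

        using System;
        using System.Collections.Generic;

        static Dictionary<string, string> ParseRedisInfo(string reply)
        {
            var result = new Dictionary<string, string>();
            var lines = reply.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries);
            foreach (var line in lines)
            {
                if (line.StartsWith("#")) continue;   // section headers like "# Server"
                int colon = line.IndexOf(':');
                if (colon > 0)
                    result[line.Substring(0, colon)] = line.Substring(colon + 1);
            }
            return result;
        }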

    Read the article

  • How many users are sufficient to make a heavy load for a web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4 GB of RAM. Software: Drupal 5 (Apache 2, PHP 5, MySQL 5) running on Debian. After reaching 500 authenticated and 200 anonymous users (simultaneous), the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with such a number of users?

    Edit: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said that there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP; I can't do deep profiling analysis/code refactoring, so I'm considering two options:

    - My tables are MyISAM; I've heard they use table-level locking, which is very slow. Is that right? Could I change them to InnoDB without worry?
    - What if I take MySQL and move it to a dedicated machine with a lot of RAM?

    Read the article

  • Trying to avoid needing two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both an x86 and an x64 environment. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.dll. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU", and it is my understanding that VS should compile the DLL to an intermediate language such that it should not matter if I use the x86 or x64 version. Yet if I attempt to use the x64 DLL I receive the error:

        Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral,
        PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load
        a program with an incorrect format.

    I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
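
    Since Oracle.DataAccess wraps native code, one copy of the assembly can't serve both architectures. One common workaround (a sketch; verify the details against your Oracle install) is to build Any CPU, avoid a direct copy-local reference, and resolve the matching DLL at runtime via AssemblyResolve. The libs\x86 and libs\x64 paths below are illustrative:

        using System;
        using System.IO;
        using System.Reflection;

        static class OracleAssemblyResolver
        {
            // Call once at startup, before any code touching Oracle.DataAccess runs.
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
                {
                    if (!args.Name.StartsWith("Oracle.DataAccess"))
                        return null;
                    // Pick the copy matching the current process bitness.
                    string dir = (IntPtr.Size == 8) ? @"libs\x64" : @"libs\x86";
                    return Assembly.LoadFrom(Path.Combine(dir, "Oracle.DataAccess.dll"));
                };
            }
        }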

    Read the article

  • how to update UI controls in a cocoa application from a background thread

    - by AmitSri
    The following is my .m code:

        #import "ThreadLabAppDelegate.h"

        @interface ThreadLabAppDelegate()
        - (void)processStart;
        - (void)processCompleted;
        @end

        @implementation ThreadLabAppDelegate

        @synthesize isProcessStarted;

        - (void)awakeFromNib {
            // Set level indicator's maximum value
            [levelIndicator setMaxValue:1000];
        }

        - (void)dealloc {
            // Never called while debugging????
            [super dealloc];
        }

        - (IBAction)startProcess:(id)sender {
            // Set process flag to true
            self.isProcessStarted = YES;
            // Start animation
            [spinIndicator startAnimation:nil];
            // Perform selector in background thread
            [self performSelectorInBackground:@selector(processStart) withObject:nil];
        }

        - (IBAction)stopProcess:(id)sender {
            // Stop animation
            [spinIndicator stopAnimation:nil];
            // Set process flag to false
            self.isProcessStarted = NO;
        }

        - (void)processStart {
            int counter = 0;
            while (counter != 1000) {
                NSLog(@"Counter : %d", counter);
                // Sleep background thread to reduce CPU usage
                [NSThread sleepForTimeInterval:0.01];
                // Set the level indicator value to show progress
                [levelIndicator setIntValue:counter];
                // Increment counter
                counter++;
            }
            // Notify main thread that the process completed
            [self performSelectorOnMainThread:@selector(processCompleted) withObject:nil waitUntilDone:NO];
        }

        - (void)processCompleted {
            // Stop animation
            [spinIndicator stopAnimation:nil];
            // Set process flag to false
            self.isProcessStarted = NO;
        }

        @end

    I need to clear up the following things about the above code:

    1. How do I interrupt/cancel the processStart while loop from a UI control?
    2. I also need to show the counter value in the main UI, which I suppose I should do with performSelectorOnMainThread, passing an argument. Just want to know: is there any other way to do that?
    3. When my app started it showed 1 thread in Activity Monitor, but when I started processStart() in a background thread it created two new threads, making 3 threads in total until the loop finished. After completing the loop I can see 2 threads. So my understanding is that 2 threads were created when I called performSelectorInBackground, but what about the third one; where did it come from? What if the thread count increases on every call of the selector? How do I control that, or is my implementation bad for such requirements?

    Thanks

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation on every player's machine.

    And now there is a problem: the entire engine uses doubles everywhere. And floating point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. And it is not grid-based at all, and it has a simple physics engine to move the space ships (space ships have impulse and angular momentum...). So recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution).

    So I have 2 options so far:

    1. Say goodbye to the current code and restart from scratch using integers.
    2. Make the game LAN-only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions and orientations etc. in (almost) every frame...

    So I'm looking for better options (or even tips on migrating the code to fixed point without messing everything up...)
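
    On the migration route: the usual trick is to hide the representation behind a small struct with operator overloads, so most arithmetic in the engine compiles unchanged once the type is swapped in for double. A minimal sketch (the Q16.16 layout is an illustrative choice):

        // Minimal deterministic Q16.16 fixed-point value: all math is integer math,
        // so results are identical across CPUs, compilers and optimization levels.
        public struct Fixed
        {
            private const int Shift = 16;
            private readonly long raw; // stored as value * 2^16

            private Fixed(long raw) { this.raw = raw; }

            public static Fixed FromInt(int v) { return new Fixed((long)v << Shift); }
            public static Fixed operator +(Fixed a, Fixed b) { return new Fixed(a.raw + b.raw); }
            public static Fixed operator -(Fixed a, Fixed b) { return new Fixed(a.raw - b.raw); }
            // Note: the multiply can overflow for very large magnitudes;
            // a production type would guard or use a wider intermediate.
            public static Fixed operator *(Fixed a, Fixed b) { return new Fixed((a.raw * b.raw) >> Shift); }
            public static Fixed operator /(Fixed a, Fixed b) { return new Fixed((a.raw << Shift) / b.raw); }

            public double ToDouble() { return raw / 65536.0; } // display only, never for simulation
        }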

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and its performance was unacceptable. The application does a lot of inserting and manipulating of data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically, "n" random Ints were stored in maps and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n=100,000) performance relative to Java; smaller is worse:

                    Update    Read
        Mutable     16.06%   76.51%
        Immutable   31.30%   20.68%

    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad, relative to both Scala immutable and plain Java. What is the explanation?

    Read the article
