Search Results

Search found 16150 results on 646 pages for 'si keep'.


  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3: instead of a conventional schema, they keep a Thing table and a Data table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There's a row for every attribute: one for title, url, author, spam votes, etc. When they added new features, they didn't have to worry about the database anymore: no new tables for new things, no upgrades. This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
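
    For reference, here is a minimal sketch of that entity-attribute-value layout, using Python's built-in sqlite3 module. The table and column names are illustrative assumptions, not Reddit's actual schema:

        import sqlite3

        conn = sqlite3.connect(":memory:")

        # One row per entity, holding only the attributes every Thing shares.
        conn.execute("""CREATE TABLE thing (
            id INTEGER PRIMARY KEY,
            type TEXT,                    -- 'user', 'link', 'comment', ...
            ups INTEGER,
            downs INTEGER,
            created TIMESTAMP)""")

        # One row per (entity, attribute) pair; every other attribute lives here.
        conn.execute("""CREATE TABLE data (
            thing_id INTEGER REFERENCES thing(id),
            key TEXT,
            value TEXT)""")

        conn.execute("INSERT INTO thing VALUES (1, 'link', 10, 2, CURRENT_TIMESTAMP)")
        conn.executemany("INSERT INTO data VALUES (?, ?, ?)",
                         [(1, "title", "Open schema: good or bad?"),
                          (1, "url", "http://example.com")])

        # Fetching an attribute is a key lookup instead of a column read.
        title = conn.execute("SELECT value FROM data WHERE thing_id = ? AND key = ?",
                             (1, "title")).fetchone()[0]

    The flexibility comes at a cost: every attribute access becomes a key lookup or join, and typed columns and per-attribute constraints are lost, which is the usual objection to EAV designs.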

    Read the article

  • How can I encrypt CoreData contents on an iPhone

    - by James A. Rosen
    I have some information I'd like to store statically encrypted in an iPhone application. I'm new to iPhone development, so I'm not terribly familiar with CoreData and how it integrates with the views. I have the data as JSON, though I can easily put it into a SQLite3 database or any other backing data format. I'll take whatever is easiest (a) to encrypt and (b) to integrate with the iPhone view layer. The user will need to enter the password to decrypt the data each time the app is launched. The purpose of the encryption is to keep the data from being accessible if the user loses the phone. For speed reasons, I would prefer to encrypt and decrypt the entire file at once rather than encrypting each individual field in each row of the database. Note: this isn't the same idea as Question 929744, in which the purpose is to keep the user from messing with or seeing the data. The data should be perfectly transparent when in use. Also note: I'm willing to use SQLCipher to store the data, but would prefer to use things that already exist in the iPhone/CoreData framework rather than go through the lengthy build/integration process involved.
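
    To illustrate the whole-file approach in language-neutral terms (derive a key from the password entered at launch, then encrypt the serialized JSON as a single blob), here is a sketch in Python using the third-party cryptography package; on the device the equivalent would be done with the platform's crypto APIs or SQLCipher. The file name and iteration count are arbitrary assumptions:

        import base64, hashlib, os
        from cryptography.fernet import Fernet  # third-party: pip install cryptography

        def key_from_password(password: str, salt: bytes) -> bytes:
            # Derive a 32-byte key from the password; Fernet wants it base64-encoded.
            derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return base64.urlsafe_b64encode(derived)

        salt = os.urandom(16)  # store this alongside the ciphertext
        f = Fernet(key_from_password("password-entered-at-launch", salt))

        ciphertext = f.encrypt(open("data.json", "rb").read())  # whole file at once
        plaintext = f.decrypt(ciphertext)                       # at app launch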

    Read the article

  • Linux error when resuming from RAM

    - by TuxPotato
    The last two times that I have resumed my laptop from sleep, it has hung and given me this set of errors:

        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3
        hda_intel: azx_get_responce timeout, switching to single_cmd mode: last cmd=0x01170700
        ata6: softreset failed (device not ready)
        ata4: softreset failed (device not ready)
        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3

    The last two messages repeat two more times. The first time this happened, Linux's Magic SysRq worked and did a soft reboot, and after that everything was fine until it went to sleep again. It wakes up and gives me this. Here are the laptop specs:

        Toshiba Satellite L455D-S5976
        AMD Sempron SI-42 Processor
        2GB DDR2 RAM
        HD TruBrite Display
        ATI Radeon Graphics (integrated)
        Running Ubuntu 10.10 32 bit

    I'm not sure about the hard drive, but it's a 250GB drive with one NTFS partition, two hidden NTFS, one Linux swap, and one ext4. Can someone tell me what's wrong with my laptop? NOTE: This only happens when I close the screen. My computer doesn't go to sleep with the screen open.

    Read the article

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    My server spec:

        Dual core ATOM 330 CPU
        2GB RAM
        nginx with PHP in FastCGI
        eAccelerator
        CPU 74.3% id
        RAM used: 350MB of 2GB

    I have lots of sites on my server, with cron jobs running every minute, all the time; in some minutes two or three cron jobs run at once. All my sites' cron jobs are heavy and usually run for more than a minute. My nginx.conf became so big that nginx refused to start because of the number of sites in it; that was solved by increasing server_names_hash_max_size. I'm planning to add more sites to this server. Now, opening my website always shows a 504 Gateway Time-out. I have tested many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. It disappears when cron is disabled. Is this because there is not enough processor power? What should I do? Upgrade my processor?

    Added: this is top for my CPU just now:

        Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
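
    Since the asker found the timeouts disappear when cron is disabled, one common mitigation is to make each heavy cron job refuse to start while its previous run is still going, so overlapping runs can't pile up. A sketch in Python using the standard fcntl module (the lock-file path is an arbitrary example, and run_heavy_job is a hypothetical placeholder for the real work):

        import fcntl, sys

        # Refuse to start if the previous run of this job still holds the lock.
        lock = open("/tmp/mysite-cron.lock", "w")
        try:
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            sys.exit(0)  # previous run still in progress; skip this minute

        run_heavy_job()  # hypothetical: the actual cron work goes here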

    Read the article

  • How to explain to users the advantages of a dumb primary key?

    - by Hao
    Primary key attractiveness. I have a boss (and also users) who want the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or credit card number format). I just padded the primary key (in views) with zeroes to appease their desire to make the control number sophisticated, smart and attractive. But they wanted it as: the first 2 digits as client code, then 4 digits for the year, then the last 4 digits as the transaction number for that client in the given year, with the transaction number resetting to 1 when the next year rolls over. Each client's transactions start with 1, e.g. WM20090001, WM20090002, BB20090001, WM20100001, BB20100001. But as I wanted to make things as simple as possible, I forwent embedding their suggested smartness in the primary key; I just keep the primary key auto-incrementing regardless of client and year. To make it not dull-looking (they really are adamant about making the primary key a smart control number), I made the primary key appear smart to them: in the view query, I put the client code and four-digit year in front of the auto-increment key zero-padded to eight digits, i.e. WM200900000001. Sort of slug-like information on top of the auto-incremented primary key. By keeping the primary key auto-incrementing regardless of any other information, we avoid potential side effects when they edit a record. For example, if they entered a transaction under WM by mistake and then edit the client code to BB, a smart primary key would leave gaps in WM's control numbers. Or worse yet, instead of letting the control numbers have gaps/holes, the users will request that subsequent records shift up to fill the gap and have their primary keys re-adjusted (decremented). How do you deal with these user requests (reasonable or otherwise)? Do you yield to them? Or do you continue using a dumb primary key, explain to them the repercussions of a very smart/sophisticated primary key, and educate them about the significant advantages of a dumb one? P.S. A quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
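
    A minimal sketch of that display-only approach in Python (names are illustrative): the stored key stays a plain auto-increment, and the "smart" control number is derived at read time:

        def control_number(client_code: str, year: int, pk: int) -> str:
            # Display-only: the database key is just `pk`; nothing smart is stored.
            return f"{client_code}{year}{pk:08d}"

        print(control_number("WM", 2009, 1))  # WM200900000001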

    Read the article

  • Are background tasks the solution for this problem?

    - by Trinca
    Hi. I need to develop an enterprise app that monitors network traffic. Basically, it detects whether the user is on wi-fi or cellular data and saves the number of bytes sent and received over a period of time. I saw an app in the App Store that does exactly this job. Detecting wi-fi or cellular data is quite simple using the Reachability sample provided by Apple. My problem is to keep monitoring the bytes sent and received while the app is in the background. As it is an enterprise app, I used UIBackgroundModes "voip" to avoid the app being terminated. I also implemented the setKeepAliveTimeout method, and I'm able to see the logs every 10 minutes, BUT only for 10 seconds after the method runs. I mean, setKeepAliveTimeout gets my app to run a timer for 10 seconds every 10 minutes. I'm wondering whether or not a background task is the best solution for my problem. I'll appreciate any comments. EDIT: OK guys, that's the perfect way to do it. First of all you must read this: http://www.christian-fries.de/blog/files/tag-ios.html I tried this and it works really well. All we need to do is create a second thread, detached from the main one; this way we have a continuous thread running forever. You should also see the GCD docs on Apple's website. The second thing you should consider for an enterprise app is to set it up as a voip app; this way iOS will bring your app back up even after a reboot. It's special behavior iOS has to keep voip apps running. That's it, guys. I hope it can help you.

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld goes to 99.9% CPU for a variable time (between 2 and 20 minutes), and then goes back to a normal 0.1% - 5%. Checked the processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 sec, and about 20 processes in Sleep mode (maybe because the scripts don't close the mysql connection, but they are closed in about 5 - 10 secs; I didn't make the scripts :P but the server was running fine for the last 2 years, since it was made):

        | 15375 | root  | localhost | stoc  | Query | 0 | NULL | show processlist |
        | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL |
        | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User |
        | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |
        | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL |

    Checked the RAID, seems OK:

        [root@db2]# cat /proc/mdstat
        Personalities : [raid5] [raid4] [raid1]
        md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1]
              136448 blocks [4/4] [UUUU]
        md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1]
              12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0]
              203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0]
              24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

    top sees the mysqld CPU load, but nothing else seems to be wrong:

        [root@db2]# top
        top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70
        Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie
        Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st
        Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers
        Swap: 12023800k total, 0k used, 12023800k free, 951028k cached

        PID   USER  PR NI VIRT RES SHR  S %CPU %MEM TIME+    COMMAND
        5754  mysql 19 0  236m 57m 5108 R 99.9 2.9  21:58.76 mysqld
        1     root  16 0  7216 700 580  S 0.0  0.0  0:00.39  init
        2     root  RT 0  0    0   0    S 0.0  0.0  0:00.00  migration/0

    Repaired all MySQL databases, reindexed the RAID ... I'm running out of ideas ... Does anyone have an idea what could be wrong with this server? Thank you

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the Gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs:

        Debian Lenny 64bit
        ~12Ghz (6x2GHz) CPU, 7520gb RAM, 160gb disk
        Kernel: 2.6.32-4-amd64
        mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:

        vBulletin 3.8.4
        memcached 1.2.2
        PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
        lighttpd/1.4.28 (ssl) - a light and fast webserver

    PHP and vBulletin are configured to use memcached. MySQL settings:

        [mysqld]
        key_buffer          = 128M
        max_allowed_packet  = 16M
        thread_cache_size   = 8
        myisam-recover      = BACKUP
        max_connections     = 1024
        query_cache_limit   = 2M
        query_cache_size    = 128M
        expire_logs_days    = 10
        max_binlog_size     = 100M
        key_buffer_size     = 128M
        join_buffer_size    = 8M
        tmp_table_size      = 16M
        max_heap_table_size = 16M
        table_cache         = 96

    Other: from the cloud's IO chart, we're averaging 100mb/s read.

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         9  0  73140  36336   8968 1859160   0    0    42    15    3    2  6  1 89  5

        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573
        Flush tables: 1  Open tables: 337  Queries per second avg: 61.302

    Read the article

  • Apache Consuming Resources

    - by Chris Edwards
    Our web server has suddenly been giving us load issues. After I restart Apache the load stays low for a few hours up to a day or so, then it's back up to around 3.0 until I restart Apache again. Any suggestions on tracking down what is causing this? Thanks! Chris Edwards

        top - 20:15:05 up 19 days, 10:59, 1 user, load average: 2.11, 2.17, 2.47
        Tasks: 532 total, 6 running, 525 sleeping, 0 stopped, 1 zombie
        Cpu(s): 11.5%us, 0.4%sy, 0.0%ni, 88.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 32842656k total, 13185872k used, 19656784k free, 6143740k buffers
        Swap: 1048568k total, 0k used, 1048568k free, 3515252k cached

        PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        19089 apache 20 0  1912m 1.5g 6584 R 99.6 4.9  71:01.53 /usr/sbin/httpd
        21136 apache 20 0  392m  55m  5736 R 95.0 0.2  0:03.45  /usr/sbin/httpd
        21139 apache 20 0  374m  38m  5808 S 40.5 0.1  0:04.91  /usr/sbin/httpd
        21124 apache 20 0  389m  51m  5948 R 38.9 0.2  0:03.15  /usr/sbin/httpd
        21111 apache 20 0  371m  35m  5964 S 18.8 0.1  0:01.22  /usr/sbin/httpd
        21127 apache 20 0  375m  39m  5832 S 17.8 0.1  0:01.66  /usr/sbin/httpd
        21128 apache 20 0  374m  38m  5792 S 16.2 0.1  0:01.56  /usr/sbin/httpd
        21110 apache 20 0  374m  38m  5848 S 15.9 0.1  0:01.02  /usr/sbin/httpd
        21113 apache 20 0  374m  38m  5836 S 15.9 0.1  0:02.16  /usr/sbin/httpd
        21077 apache 20 0  379m  43m  6408 S 11.0 0.1  0:07.22  /usr/sbin/httpd
        21101 apache 20 0  384m  49m  6988 R 5.8  0.2  0:04.47  /usr/sbin/httpd
        21112 apache 20 0  374m  38m  5956 R 2.6  0.1  0:01.61  /usr/sbin/httpd

    Read the article

  • saslauthd using too much memory

    - by Brian Armstrong
    Woke up today to see my site slow/unresponsive. Pulled up top, and it looks like a ton of saslauthd processes have spun up, using about 64m of RAM each, causing the machine to go into swap. I've never seen this many running on there.

        top - 16:54:13 up 85 days, 11:48, 1 user, load average: 0.32, 0.50, 0.38
        Tasks: 143 total, 1 running, 142 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 97.3%id, 0.2%wa, 0.0%hi, 0.0%si, 1.4%st
        Mem: 1048796k total, 1025904k used, 22892k free, 14032k buffers
        Swap: 2097144k total, 332460k used, 1764684k free, 194348k cached

        PID   USER    PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
        848   admin   20 0  263m  115m 4840 S 0    11.3 5:02.91   ruby
        906   admin   20 0  265m  113m 4828 S 0    11.1 5:37.24   ruby
        30484 admin   20 0  248m  91m  4256 S 6    9.0  219:02.30 delayed_job
        4075  root    20 0  160m  65m  952  S 0    6.4  0:24.22   saslauthd
        4080  root    20 0  162m  64m  936  S 0    6.3  0:24.48   saslauthd
        4079  root    20 0  162m  64m  936  S 0    6.3  0:24.70   saslauthd
        4078  root    20 0  164m  63m  936  S 0    6.2  0:24.66   saslauthd
        4077  root    20 0  163m  62m  936  S 0    6.1  0:24.66   saslauthd
        3718  mysql   20 0  312m  52m  3588 S 1    5.1  3499:40   mysqld
        699   root    20 0  72744 7640 2164 S 0    0.7  0:00.50   ruby
        15701 postfix 20 0  106m  5712 4164 S 1    0.5  0:00.50   smtpd
        15702 postfix 20 0  52444 3252 2452 S 1    0.3  0:00.06   cleanup
        4062  postfix 20 0  41884 3104 1788 S 0    0.3  125:26.01 qmgr
        15683 root    20 0  51504 2780 2180 S 0    0.3  0:00.04   sshd
        14595 postfix 20 0  52308 2548 2304 S 1    0.2  0:24.60   proxymap
        15483 postfix 20 0  43380 2544 1992 S 0    0.2  0:00.38   smtp
        15486 postfix 20 0  43380 2544 1992 S 0    0.2  0:00.36   smtp
        15488 postfix 20 0  43380 2540 1992 S 0    0.2  0:00.38   smtp
        15485 postfix 20 0  43380 2532 1984 S 0    0.2  0:00.36   smtp
        15489 postfix 20 0  43380 2532 1984 S 0    0.2  0:00.40   smtp

    I wasn't sure what saslauthd is; Google says it handles plaintext authentication. The machine has been sending a lot of email through Postfix, so this could be related. Does anyone know why so many may have spun up? Are they safe to kill? Thanks!

    Read the article

  • What are some Java memory management best practices?

    - by Ascalonian
    I am taking over some applications from a previous developer. When I run the applications through Eclipse, I see the memory usage and the heap size increase a lot. Upon further investigation, I see that they were creating an object over and over in a loop, as well as other things. I started to go through and do some cleanup. But the more I went through, the more questions I had, like "will this actually do anything?" For example, instead of declaring a variable outside the loop mentioned above and just setting its value in the loop, they created the object in the loop. What I mean is:

        for (int i = 0; i < arrayOfStuff.size(); i++) {
            String something = (String) arrayOfStuff.get(i);
            ...
        }

    versus:

        String something = null;
        for (int i = 0; i < arrayOfStuff.size(); i++) {
            something = (String) arrayOfStuff.get(i);
        }

    Am I incorrect to say that the bottom loop is better? Perhaps I am wrong. Also, what about setting "something" back to null after the second loop? Would that clear out some memory? In either case, what are some good memory management best practices I could follow that will help keep memory usage low in my applications? Update: I appreciate everyone's feedback so far. However, I was not really asking about the above loops (although by your advice I did go back to the first loop). I am trying to get some best practices that I can keep an eye out for, something along the lines of "when you are done using a Collection, clear it out". I just really need to make sure not as much memory is being taken up by these applications.

    Read the article

  • Creating an MJPEG Viewer for iPhone

    - by Tony
    Hey all, I'm trying to make an MJPEG viewer in Objective-C, but I'm having a bunch of issues with it. First off, I'm using AsyncSocket (http://code.google.com/p/cocoaasyncsocket/), which lets me connect to the host. Here's what I've got so far:

        NSLog(@"Ready");
        asyncSocket = [[AsyncSocket alloc] initWithDelegate:self];
        // http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi
        NSError *err = nil;
        if (![asyncSocket connectToHost:@"kamera5.vfp.slu.se" onPort:80 error:&err]) {
            NSLog(@"Error: %@", err);
        }

    Then in the didConnectToHost method:

        - (void)onSocket:(AsyncSocket *)sock didConnectToHost:(NSString *)host port:(UInt16)port {
            NSLog(@"Accepted client %@:%hu", host, port);

            NSString *urlString = [NSString stringWithFormat:@"http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi"];
            NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
            [request setURL:[NSURL URLWithString:urlString]];
            [request setHTTPMethod:@"GET"];

            // set headers
            NSString *_host = [NSString stringWithFormat:host];
            [request addValue:_host forHTTPHeaderField:@"Host"];
            NSString *KeepAlive = [NSString stringWithFormat:@"300"];
            [request addValue:KeepAlive forHTTPHeaderField:@"Keep-Alive"];
            NSString *connection = [NSString stringWithFormat:@"keep-alive"];
            [request addValue:connection forHTTPHeaderField:@"Connection"];

            // get response
            NSHTTPURLResponse *urlResponse = nil;
            NSError *error = [[NSError alloc] init];
            NSData *responseData = [NSURLConnection sendSynchronousRequest:request
                                                         returningResponse:&urlResponse
                                                                     error:&error];
            NSString *result = [[NSString alloc] initWithData:responseData
                                                     encoding:NSUTF8StringEncoding];
            NSLog(@"Response Code: %d", [urlResponse statusCode]);
            if ([urlResponse statusCode] >= 200 && [urlResponse statusCode] < 300) {
                NSLog(@"Response: %@", result); // here you get the response
            }
        }

    This calls the MJPEG stream, but it doesn't keep reading for more data. What I think it's doing is just loading the first chunk of data, then disconnecting. Am I doing this totally wrong, or is there light at the end of this tunnel? Thanks!
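
    For what it's worth, an MJPEG stream is an endless multipart response: the client has to keep reading from the open connection and split out JPEG frames as they arrive, rather than waiting for the request to finish. A language-neutral sketch of that loop in Python (splitting on JPEG start/end markers is a simplification; a robust client would parse the multipart boundary headers, and handle_frame is a hypothetical placeholder):

        import urllib.request

        stream = urllib.request.urlopen("http://kamera5.vfp.slu.se/axis-cgi/mjpg/video.cgi")
        buf = b""
        while True:
            buf += stream.read(4096)            # keep reading; the stream never "finishes"
            start = buf.find(b"\xff\xd8")       # JPEG start-of-image marker
            end = buf.find(b"\xff\xd9", start)  # JPEG end-of-image marker
            if start != -1 and end != -1:
                frame = buf[start:end + 2]      # one complete JPEG frame
                buf = buf[end + 2:]
                handle_frame(frame)             # hypothetical: decode/display the frame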

    Read the article

  • Nice level not working on Linux

    - by xioxox
    I have some highly floating-point-intensive processes doing very little I/O. One is called "xspec"; it calculates a numerical model and returns a floating point result back to a master process every second (via stdout). It is niced to level 19. I have another simple process, "cpufloattest", which just does numerical computations in a tight loop. It is not niced. I have a 4-core i7 system with hyperthreading disabled. I have started 4 of each type of process. Why is the Linux scheduler (Linux 3.4.2) not properly limiting the CPU time taken up by the niced processes?

        Cpu(s): 56.2%us, 1.0%sy, 41.8%ni, 0.0%id, 0.0%wa, 0.9%hi, 0.1%si, 0.0%st
        Mem: 12297620k total, 12147472k used, 150148k free, 831564k buffers
        Swap: 2104508k total, 71172k used, 2033336k free, 4753956k cached

        PID   USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        32399 jss  20 0  44728 32m  772  R 62.7 0.3  4:17.93 cpufloattest
        32400 jss  20 0  44728 32m  744  R 53.1 0.3  4:14.17 cpufloattest
        32402 jss  20 0  44728 32m  744  R 51.1 0.3  4:14.09 cpufloattest
        32398 jss  20 0  44728 32m  744  R 48.8 0.3  4:15.44 cpufloattest
        3989  jss  39 19 1725m 690m 7744 R 44.1 5.8  1459:59 xspec
        3981  jss  39 19 1725m 689m 7744 R 42.1 5.7  1459:34 xspec
        3985  jss  39 19 1725m 689m 7744 R 42.1 5.7  1460:51 xspec
        3993  jss  39 19 1725m 691m 7744 R 38.8 5.8  1458:24 xspec

    The scheduler does what I expect if I start 8 of the cpufloattest processes, with 4 of them niced (i.e. 4 get most of the CPU and 4 get very little).
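
    If it helps to experiment, here is a minimal reproduction harness sketched in Python (assumes a POSIX system; the float loop is an arbitrary stand-in for xspec/cpufloattest):

        import os
        from multiprocessing import Process

        def spin(niceness):
            if niceness:
                os.nice(niceness)  # lower this worker's scheduling priority
            x = 0.1
            while True:            # pure floating-point busy loop
                x = x * 1.0000001 + 0.1

        # Four normal-priority workers and four niced to 19, as in the question.
        workers = [Process(target=spin, args=(0,)) for _ in range(4)] + \
                  [Process(target=spin, args=(19,)) for _ in range(4)]
        for w in workers:
            w.start()
        # Watch the CPU split in top; terminate the workers to stop.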

    Read the article

  • Is ASP.NET MVC destined to replace Webforms?

    - by johnny
    I found these questions, but a couple of them were a little old:

        http://stackoverflow.com/questions/191556/should-i-pursue-asp-net-webforms-or-asp-net-mvc
        http://stackoverflow.com/questions/88787/do-you-think-asp-net-mvc-will-compete-with-asp-net-webforms
        http://stackoverflow.com/questions/722637/asp-net-mvc-asp-net-webforms-why

    I do not believe these are duplicates, and they might be old enough that new light can be shed. If not, please close this. I know that no one framework or language is necessarily the only tool for every job. But do you see MVC eclipsing Webforms, or Webforms going lower on the priority list for Microsoft? They will have to keep Webforms for a long time because so many have invested in it, but they don't have to keep adding new functionality for it. I don't know if this is a good example, but it reminds me of Web Parts. I never saw much improvement in it from Microsoft. It works, and I thought it was great until I started to really try and get a lot out of it. Then, from what I could see, it just wasn't being pursued by Microsoft that much, though it stayed in Visual Studio. Maybe that's a bad example; it's just what I remembered. EDIT: Also, if anyone has any statements from Microsoft on this subject, it is appreciated. No offense to anyone. I was only hoping for something official.

    Read the article

  • 1GB cached memory - Do I need more RAM?

    - by Martin
    The server runs well, but I wonder if I should get more RAM. I only have a few MB of "free" memory and 1.2GB of "cached" memory:

        free:
                     total    used    free   shared  buffers  cached
        Mem:          3945    3893      51        0       28    1216
        -/+ buffers/cache:    2648    1296
        Swap:         3895     857    3038

    I learned that memory is used for cache while it is otherwise free. Is the cached value an indicator of the need for more RAM?

        cat /proc/meminfo (1 day after flushing the cache):
        MemTotal:        4040048 kB
        MemFree:           32844 kB
        Buffers:           18956 kB
        Cached:          1249092 kB
        SwapCached:       161576 kB
        Active:          3611328 kB
        Inactive:         189104 kB
        SwapTotal:       3989496 kB
        SwapFree:        2894200 kB
        Dirty:             20520 kB
        Writeback:             0 kB
        AnonPages:       2523496 kB
        Mapped:           217744 kB
        Slab:              70940 kB
        SReclaimable:      36756 kB
        SUnreclaim:        34184 kB
        PageTables:        99648 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        CommitLimit:     6009520 kB
        Committed_AS:    6401716 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:       18852 kB
        VmallocChunk: 34359719439 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB

        top:
        top - 17:20:10 up 112 days, 3:06, 1 user, load average: 1.01, 1.62, 1.48
        Tasks: 208 total, 1 running, 207 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.6%us, 0.6%sy, 0.0%ni, 97.5%id, 1.3%wa, 0.0%hi, 0.1%si, 0.0%st
        Mem: 4040048k total, 3953108k used, 86940k free, 16348k buffers
        Swap: 3989496k total, 1095712k used, 2893784k free, 1235436k cached
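
    The usual way to read these numbers is the "-/+ buffers/cache" row: the kernel hands buffer and cache memory back to applications on demand, so it counts toward what is effectively available. A sketch of that arithmetic in Python, parsing /proc/meminfo (Linux-specific; field names as in the output above):

        def meminfo_kb():
            # Parse /proc/meminfo into {field: kB} pairs.
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    field, value = line.split(":")
                    info[field] = int(value.split()[0])
            return info

        m = meminfo_kb()
        reclaimable = m["Buffers"] + m["Cached"]
        effectively_free = m["MemFree"] + reclaimable
        print(f"truly used: {m['MemTotal'] - effectively_free} kB, "
              f"effectively available: {effectively_free} kB")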

    Read the article

  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs. one project for Scala and one for Android). I've come across a problem. I am using this input file (arguments to ProGuard):

        -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
        -outjar lib/scandroid.jar
        -libraryjars lib/android.jar
        -dontwarn
        -dontoptimize
        -dontobfuscate
        -dontskipnonpubliclibraryclasses
        -dontskipnonpubliclibraryclassmembers
        -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
            SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
        -keep public class org.scala.jeb.** { public protected *; }
        -keep public class org.xml.sax.EntityResolver { public protected *; }

    ProGuard successfully builds scandroid.jar; however, it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The Android Dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above ProGuard arguments to exclude the R*.class files from scandroid.jar so the Dalvik converter is happy? Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to make it complain about duplicate classes, and in addition ProGuard decided to exclude my Scala class files too.

    Read the article

  • High load without explanation

    - by Sebastian
    I have a very high load on my machine and don't know what is responsible or how to find out. On the machine run a JBoss app server and MySQL. Here is a top from the user at peak time:

        top - 16:23:01 up 101 days, 6:50, 1 user, load average: 23.42, 21.53, 24.73
        Tasks: 9 total, 1 running, 8 sleeping, 0 stopped, 0 zombie
        Cpu(s): 17.2%us, 1.6%sy, 0.0%ni, 80.4%id, 0.1%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem: 16440784k total, 16263720k used, 177064k free, 151916k buffers
        Swap: 16780872k total, 30428k used, 16750444k free, 8963648k cached

        PID   USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        27344 b    40 0  16.0g 6.5g 14m  S 169  41.7 1184:09  java
        6047  b    40 0  11484 1232 1228 S 0    0.0  0:00.01  mysqld_safe
        6192  b    40 0  604m  182m 4696 S 0    1.1  93:30.40 mysqld
        7948  b    40 0  84036 1968 1176 S 0    0.0  0:00.07  sshd
        7949  b    40 0  14004 2900 1608 S 0    0.0  0:00.03  bash
        7975  b    40 0  8604  1044 840  S 0    0.0  0:00.44  top

    The CPU usage of the Java process is normal. The peaks only show up when I deploy a certain web application. Could the resulting network traffic boost the load in such a way that I don't see it in top?

    Read the article

  • jQuery getJSON() doesn't work when trying to get data from a Java server on localhost

    - by bellesebastien
    The whole day yesterday I was trying to solve this, but it has proven very challenging for me. I'm trying to use this JS to get information from a Java application I wrote:

        $(document).ready(function() {
            $.getJSON('http://localhost/custest?callback=?', function(json) {
                alert('OK');
                $('.result').html(json.description);
            });
        });

    The Java application uses HttpServer and is very basic. When I access the page 'http://localhost/custest?callback=?' with Firefox, the browser tells me the server is sending JSON data and asks what to open it with, but when I try it from a webpage using the JS above, it doesn't work. The getJSON call is not successful; the alert("OK") doesn't pop up at all. If I replace "http://localhost/custest?callback=?" in the JS with "http://twitter.com/users/usejquery.json?callback=?", everything works fine. An interesting thing is that if I send malformed JSON from my Java server, Firebug gives an error and tells me what is missing from the JSON, so that means the browser is receiving the JSON data; but when I send a correct JSON string, nothing happens: no errors, and not even the alert() opens. I'm adding the headers in case you think they could be relevant:

        http://localhost/custest?callback=jsonp1274691110349

        GET /custest?callback=jsonp1274691110349 HTTP/1.1
        Host: localhost
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

        HTTP/1.1 200 OK
        Transfer-Encoding: chunked
        Content-Type: application/json

    Thanks for your help
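
    For context, a callback=? URL makes jQuery use JSONP, so the server must wrap its JSON in a call to the supplied callback function rather than returning bare JSON; bare JSON arrives but is never executed, which matches the silent failure described above. A minimal sketch of a JSONP-aware handler using Python's standard http.server (illustrative only, not the asker's Java HttpServer code; the question uses port 80, port 8000 here avoids needing root):

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        class JsonpHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                qs = parse_qs(urlparse(self.path).query)
                callback = qs.get("callback", [None])[0]
                payload = json.dumps({"description": "hello from the server"})
                if callback:  # JSONP: wrap the JSON in the callback function call
                    body = f"{callback}({payload})".encode()
                    ctype = "application/javascript"
                else:         # plain JSON for same-origin requests
                    body, ctype = payload.encode(), "application/json"
                self.send_response(200)
                self.send_header("Content-Type", ctype)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("localhost", 8000), JsonpHandler).serve_forever()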

    Read the article

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project? Some ideas that I have to make it successful (beyond marketing it):

        - Good documentation and tutorials
        - Automated unit tests and builds to push updates to the website
        - A clear roadmap
        - Bug tracking integrated with the source control
        - A style guide to keep the code consistent along with clear
        - A forum for the community to get support, share ideas, etc.
        - A good example application built with the framework
        - A blog to keep the community informed
        - Maintaining backwards compatibility wherever possible

    Some of my questions:

        - How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
        - How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
        - What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer it to be as friendly and helpful as possible without a lot of drama.

    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me toward would also be greatly appreciated.

    Read the article

  • Find out which task is generating a lot of context switches on linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second.

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
         2  0   7292 249472  82340 2291972   0    0     0     0    0     0  7 13 79  0
         0  0   7292 251808  82344 2291968   0    0     0   184   24 20090  1  1 99  0
         0  0   7292 251876  82344 2291968   0    0     0    83   17 20157  1  0 99  0
         0  0   7292 251876  82344 2291968   0    0     0    73   12 20116  1  0 99  0

    ... but uptime shows a small load (load average: 0.01, 0.02, 0.01), and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

        # pidstat -w 10 1
        12:39:13  PID   cswch/s nvcswch/s Command
        12:39:23  1     0.20    0.00      init
        12:39:23  4     0.20    0.00      ksoftirqd/0
        12:39:23  7     1.60    0.00      events/0
        12:39:23  8     1.50    0.00      events/1
        12:39:23  89    0.50    0.00      kblockd/0
        12:39:23  90    0.30    0.00      kblockd/1
        12:39:23  995   0.40    0.00      kirqd
        12:39:23  997   0.60    0.00      kjournald
        12:39:23  1146  0.20    0.00      svscan
        12:39:23  2162  5.00    0.00      kjournald
        12:39:23  2526  0.20    2.00      postgres
        12:39:23  2530  1.00    0.30      postgres
        12:39:23  2534  5.00    3.20      postgres
        12:39:23  2536  1.40    1.70      postgres
        12:39:23  12061 10.59   0.90      postgres
        12:39:23  14442 1.50    2.20      postgres
        12:39:23  15416 0.20    0.00      monitor
        12:39:23  17289 0.10    0.00      syslogd
        12:39:23  21776 0.40    0.30      postgres
        12:39:23  23638 0.10    0.00      screen
        12:39:23  25153 1.00    0.00      sshd
        12:39:23  25185 86.61   0.00      daemon1
        12:39:23  25190 12.19   35.86     postgres
        12:39:23  25295 2.00    0.00      screen
        12:39:23  25743 9.99    0.00      daemon2
        12:39:23  25747 1.10    3.00      postgres
        12:39:23  26968 5.09    0.80      postgres
        12:39:23  26969 5.00    0.00      postgres
        12:39:23  26970 1.10    0.20      postgres
        12:39:23  26971 17.98   1.80      postgres
        12:39:23  27607 0.90    0.40      postgres
        12:39:23  29338 4.30    0.00      screen
        12:39:23  31247 4.10    23.58     postgres
        12:39:23  31249 82.92   34.77     postgres
        12:39:23  31484 0.20    0.00      pdflush
        12:39:23  32097 0.10    0.00      pidstat

    It looks like some PostgreSQL tasks are doing 10 context switches per second, but it doesn't all sum up to 20k anyway. Any idea how to dig a little deeper for an answer?
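
    One way to dig deeper is to read the context-switch counters straight from /proc and diff them over an interval. A sketch in Python (Linux-specific; note the counters in /proc/<pid>/status are for the main task, so for full thread coverage you would also scan /proc/<pid>/task/*/status):

        import os, time

        def switches():
            # Sum voluntary + nonvoluntary context switches per pid from /proc.
            counts = {}
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open(f"/proc/{pid}/status") as f:
                        counts[pid] = sum(int(line.split()[1]) for line in f
                                          if "ctxt_switches" in line)
                except (FileNotFoundError, ProcessLookupError):
                    pass  # process exited while we were scanning
            return counts

        before = switches()
        time.sleep(3)
        after = switches()
        deltas = {pid: after[pid] - before.get(pid, 0) for pid in after}
        for pid, d in sorted(deltas.items(), key=lambda kv: -kv[1])[:10]:
            print(pid, d / 3, "switches/sec")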

    Read the article

  • Unit testing JSON output module, best practices

    - by Banang
    I am currently working on a module that takes one of our business objects and returns a JSON representation of that object to the caller. Due to limitations in our environment I am unable to use any existing JSON writer, so I have written my own, which is then used by the business object writer to serialize my objects. The JSON writer is tested in a way similar to this:

        @Test
        public void writeEmptyArrayTest() {
            String expected = "[ ]";
            writer.array().endArray();
            assertEquals(expected, writer.toString());
        }

    which is only manageable because of the small output each instruction produces, even though I keep feeling there must be a better way. The problem I am now facing is writing tests for the object writer module, where the output is much larger and much less manageable. The risk of spelling mistakes in the expected strings mucking up my tests seems too great, and writing code in this fashion seems both silly and unmanageable from a long-term perspective. I keep feeling like I want to write tests to ensure that my tests are behaving correctly, and this feeling worries me. Therefore, is there a better way of doing this? Surely there must be? Does anyone know of any good literature in regard to this specific case (it doesn't have to be JSON, but you know what I mean)? Grateful for all help.
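
    One common approach (illustrated here in Python for brevity) is to stop comparing strings at all: parse the writer's output with an independent, trusted JSON parser and compare data structures, so the formatting and spelling of the expected literal stop mattering:

        import json

        def assert_json_equal(produced: str, expected):
            # Parse with a trusted parser and compare structures, not strings.
            assert json.loads(produced) == expected

        # Hypothetical output from a hand-rolled writer:
        produced = '{ "name": "widget", "tags": [ "a", "b" ] }'
        assert_json_equal(produced, {"name": "widget", "tags": ["a", "b"]})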

    Read the article

  • C# - adding new groups with items and subitems to a listview

    - by Nike
    Hello there. The following code adds a new item and a new group with the text "Default". If I keep clicking the button, it will just keep adding new items to that particular group:

        ListViewItem item = new ListViewItem("");
        item.SubItems.Add("");
        csslistview.Items.Add(item);

    What I'm trying to do is add a new group and fill it with one empty item, as well as one empty subitem. And when I click the button again, I want it to create a new group and do the same thing. I have a textbox where the user has to fill in the name of the group, so there won't be any groups with the same name (hopefully). The following code, I think, creates a new group:

        ListViewGroup group = new ListViewGroup(newGroupName);
        group.Items.Add(newGroupName);
        csslistview.Groups.Add(group);

    but as empty groups aren't shown, I can't really verify that it actually creates new groups. Well, thanks in advance. -Nike

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the ViewModel is starting to get bloated. There are two main issues. One is that to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM. The view binds to those properties to keep track of what feels like view-specific information; for example, a boolean keeping track of the status of a long-running process in the VM, so the view can disable some of its controls while the long-running process is working. I've read that this issue could be solved with attached behaviors; I'll look more into that. In the example MVVM apps you see online, this isn't a big deal because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four commands. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM), so all the business logic lives in the VM. I considered moving each command into a separate unit of work, but I'm not sure of the best way to do this. Which patterns are you guys using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated; you should break them into many views/VMs". It is certainly not too complicated from a UX perspective: there are 2 buttons, a combobox, and a listbox. Also, from a logical perspective, it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.

    Read the article

  • Handling Erlang inets http client errors

    - by Justin
    I have an Erlang app which makes a large number of HTTP calls to external sites using inets, using the code below:

        case http:request(get, {Url, []}, [{autoredirect, false}], []) of
            {ok, {{_, Code, _}, _, Body}} ->
                case Code of
                    200 -> HandlerFn(Body);
                    _ -> {error, io:format("~s returned HTTP ~p", [Broker, Code])}
                end;
            Response ->
                %% block to handle unexpected responses from inets
                {error, io:format("~s returned ~p", [Broker, Response])}
        end.

    There is an explicit block to handle anything strange inets might return [Response]. Despite this, I still get what look like inets error reports dumped to the console [sample below]. What am I doing wrong here? Do I need to configure some kind of inets error handler elsewhere? Thanks.

        =ERROR REPORT==== 24-Apr-2010::06:49:47 ===
        ** Generic server <0.6618.0> terminating
        ** Last message in was {connect_and_send,
               {request,#Ref<0.0.0.139358>,<0.6613.0>,0,http,
                   {"***",80},
                   "****************",
                   [],get,
                   {http_request_h,undefined,"keep-alive",undefined,undefined,
                       undefined,undefined,undefined,undefined,undefined,
                       undefined,undefined,undefined,undefined,undefined,
                       undefined,undefined,"news.bbc.co.uk",undefined,undefined,
                       undefined,undefined,undefined,undefined,undefined,
                       undefined,[],undefined,undefined,undefined,undefined,
                       "0",undefined,undefined,undefined,undefined,undefined,
                       undefined,[]},
                   {[],[]},
                   {http_options,"HTTP/1.1",infinity,false,[],undefined,false,infinity},
                   "*******************",
                   [],none,[],1272088179114,undefined,undefined}}
        ** When Server state == {state,
               {request, ...same request record as above...},
               undefined,undefined,undefined,undefined,undefined,
               {[],[]},
               {[],[]},
               undefined,[],nolimit,nolimit,
               {options,{undefined,[]},0,2,5,120000,2,disabled,false,inet,
                   default,default,[]},
               {timers,[],undefined},
               httpc_manager,undefined}
        ** Reason for termination ==
        ** {error,{connect_failed,{#Ref<0.0.0.139358>,{error,nxdomain}}}}

        =ERROR REPORT==== 24-Apr-2010::06:49:47 ===
        HTTPC-MANAGER handler (<0.6618.0>, started) failed to connect and/or send request #Ref<0.0.0.139358>
        Result: {error,{connect_failed,{#Ref<0.0.0.139358>,{error,nxdomain}}}}

    Read the article

  • Get "term is undefined” error when trying to assign arrayList to List component dataSource

    - by user1814467
    I'm creating an online game where people log in and then see the list of current players. When the user enters a "room", an SFSEvent is dispatched which includes a Room object with the list of users (as User objects) in that room. In that event's callback function, I get the list of current users (an Array), switch the ViewStack child index, and then wrap the user-list array in an ArrayList before I assign it to the MXML Spark List component's dataProvider. Here's my code. The ActionScript section (PreGame.as):

        private function onRoomJoin(event:SFSEvent):void
        {
            const room:Room = this._sfs.getRoomByName(PREGAME_ROOM);
            this.selectedChild = waitingRoom;

            /** I know I should be using event listeners
             * but this is a temporary fix, otherwise
             * I keep getting null object errors
             * due to the li_users list not being
             * created in time for the dataProvider assignment
             **/
            setTimeout(function ():void {
                const userList:ArrayList = new ArrayList(room.userList);
                this.li_users.dataProvider = userList; // This is where the error gets thrown
            }, 1000);
        }

    My MXML code:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:ViewStack xmlns:fx="http://ns.adobe.com/mxml/2009"
                      xmlns:s="library://ns.adobe.com/flex/spark"
                      xmlns:mx="library://ns.adobe.com/flex/mx"
                      initialize="preGame_initializeHandler(event)">

            <fx:Script source="PreGame.as"/>

            <s:NavigatorContent id="nc_loginScreen">
                /** Login Screen Code **/
            </s:NavigatorContent>

            /** Start of Waiting Room code **/
            <s:NavigatorContent id="waitingRoom">
                <s:Panel id="pn_users" width="400" height="400" title="Users">
                    /** This is the List in question **/
                    <s:List id="li_users" width="100%" height="100%"/>
                </s:Panel>
            </s:NavigatorContent>

        </mx:ViewStack>

    However, I keep getting this error:

        TypeError: Error #1010: A term is undefined and has no properties

    Any ideas what I'm doing wrong? The ArrayList has data, so I know it's not empty/null.

    Read the article
