Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

Page 344/3203

  • Hospital fined $1m for Patient Data Breach

    - by martin.abrahams
    As an illustration of the potential cost of accidental breaches, the US Dept of Health and Human Services recently fined a hospital $1m for losing documents relating to some of its patients. Allegedly, the documents were left on the subway by a hospital employee. For incidents in the UK, several local government bodies have been fined between £60k and £100k. Evidently, the watchdogs are taking an increasingly firm position.

    Read the article

  • Javascript storing data

    - by user985482
    Hi, I am a beginner web developer and am trying to build the interface of a simple e-commerce site as a personal project. The site has multiple pages with checkboxes. When someone checks an element, it retrieves the price of the element and stores it in a variable. But when I go to the next page and click on new checkbox products, the variable automatically resets to its original state. How can I save the value of that variable in JavaScript? This is the code I've written using sessionStorage, but it still doesn't work: when I move to the next page the value is reset. How can I write this code so that it doesn't reset on each page change? All pages on my website use the same script.

        $(document).ready(function(){
            var total = 0;
            $('input.check').click(function(){
                if($(this).attr('checked')){
                    var check = parseInt($(this).parent().children('span').text().substr(1, 3));
                    total += check;
                    sessionStorage.var_name = 0 + total;
                    alert(sessionStorage.var_name);
                } else {
                    var uncheck = parseInt($(this).parent().children('span').text().substr(1, 3));
                    total -= uncheck;
                }
            });
        });

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100 GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any ideas? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)

    Read the article

  • My laptop is supposed to have 6 GB of RAM but only 2 GB is installed

    - by monablo
    My laptop is supposed to have 6 GB of RAM but it only detects 2 GB of it, so I don't know what to do. I searched the web for an answer and found that there are two possible problems: 1) the 4 GB RAM module is broken and not working, or 2) the two RAM modules have a different path, so Windows chose the path of the smaller module, which would be the 2 GB one. I don't know what the path is in the first place, so I really don't know what to do. How would I troubleshoot, and work out why the 4 GB stick is not being detected? My laptop's specifications are as follows: Windows 7 Home Premium 64-bit operating system; processor: Intel(R) Core(TM) i7 CPU Q740 @ 1.73 GHz; installed memory (RAM): 2.00 GB (but it is supposed to be 6 GB). My laptop is a Dell, model N5010.

    Read the article

  • Windows XP: Slow start menu 'All Programs' response

    - by user17381
    When I click Start and then 'All Programs' (or select a submenu of All Programs), I get a grey menu which does not respond for about 5 seconds; after this it is OK. Any idea what is causing the menu to behave sluggishly? What can be done to fix this? Thanks. Info requested: System specs: Core2 T5500 @ 1.66 GHz, 2 GB RAM. Windows version: XP Professional SP2. It happens every time I click the menu (not just the first time) and has gradually been getting worse. Nothing too unusual at startup: Comodo Firewall, AVG AV, TrueCrypt (only for a small volume). AV software: AVG.

    Read the article

  • Outlook data file cannot be accessed. [closed]

    - by Alex
    Possible Duplicate: Outlook data file cannot be accessed. Hello. I am using the Office Outlook 2010 beta. When I try to receive or send, I get the following error: "Outlook data file cannot be accessed." Repairing/reinstalling Office and trying different Outlook data files did not have any effect.

    Read the article

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching. Are there any downsides to raising the size of the read-ahead cache? On our farm, we're currently running at 256, and upon raising that higher, we are seeing significant throughput gains.

        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   7352 MB in 2.00 seconds = 3677.62 MB/sec
         Timing buffered disk reads:  244 MB in 3.10 seconds = 78.68 MB/sec
        [root@server ~]# blockdev --setra 10240 /dev/sda
        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   11452 MB in 2.00 seconds = 5728.52 MB/sec
         Timing buffered disk reads:  422 MB in 3.17 seconds = 133.04 MB/sec

    We are running on kernel 2.6. Thanks!

    Read the article

  • How to monitor IO svctm every 5 minutes using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what is happening in each 5-minute window (not since boot time; iostat's first output gives values since boot time). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers that happened in the last X minutes divided by X*60.
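
    A minimal sketch of the interval idea for tps only (svctm and await would also need the time-spent fields): sample /proc/diskstats twice, X minutes apart, and divide the difference in completed I/Os by X*60. The device name, the field positions and the Nagios-style output format below are assumptions for illustration, not taken from the question.

        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class DiskTps {

            // Total completed I/Os (reads + writes) for a device, read from /proc/diskstats.
            static long completedIos(String device) throws Exception {
                for (String line : Files.readAllLines(Paths.get("/proc/diskstats"))) {
                    String[] f = line.trim().split("\\s+");
                    // Fields: major minor device reads-completed ... writes-completed (index 7)
                    if (f.length > 7 && f[2].equals(device)) {
                        return Long.parseLong(f[3]) + Long.parseLong(f[7]);
                    }
                }
                throw new IllegalArgumentException("device not found: " + device);
            }

            public static void main(String[] args) throws Exception {
                String device = args.length > 0 ? args[0] : "sda";  // hypothetical default device
                int intervalSec = 300;                              // 5-minute window
                long before = completedIos(device);
                Thread.sleep(intervalSec * 1000L);
                long after = completedIos(device);
                double tps = (after - before) / (double) intervalSec;
                // Nagios-style plugin output: status text plus perfdata, exit 0 for OK.
                System.out.printf("OK - %s tps=%.2f | tps=%.2f%n", device, tps, tps);
            }
        }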

    Read the article

  • Add & Show data in Java ArrayList [closed]

    - by Kaidul Islam Sazal
    I have a class inside a main class:

        static class Graph {
            static int u, v, cost;
        }

    I have instantiated an ArrayList of the class:

        static List<Graph> g = new ArrayList<Graph>();

    And I insert several values into the ArrayList like this:

        Scanner input = new Scanner(System.in);
        for (int i = 0; i < edge_no; i++) {
            Graph e = new Graph();
            e.u = input.nextInt();
            e.v = input.nextInt();
            e.cost = input.nextInt();
            g.add(e);
        }

    And I print it like this:

        for (int i = 0; i < edge_no; i++) {
            System.out.println(g.get(i).u + " " + g.get(i).v + " " + g.get(i).cost);
        }

    But the problem is that, when I print it, only the last value is shown all the time. It seems that all the previous values are overwritten with it.

    Input:

        1 2 5
        1 3 8
        2 3 9

    Output:

        2 3 9
        2 3 9
        2 3 9

    The expected output is just like the input, but I can't fix the problem as I am a novice in Java.
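
    A likely cause, sketched below as a guess from the code shown rather than a confirmed fix: because u, v and cost are declared static, every Graph instance shares the same three fields, so each newly read edge overwrites the previous one. Making them instance fields keeps one set of values per edge. Class and variable names here are illustrative:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Scanner;

        public class Edges {

            // Non-static fields: every Graph object now stores its own edge.
            static class Graph {
                int u, v, cost;
            }

            public static void main(String[] args) {
                List<Graph> g = new ArrayList<Graph>();
                Scanner input = new Scanner(System.in);
                int edgeNo = 3;                     // assumed edge count for the sample input
                for (int i = 0; i < edgeNo; i++) {
                    Graph e = new Graph();
                    e.u = input.nextInt();
                    e.v = input.nextInt();
                    e.cost = input.nextInt();
                    g.add(e);
                }
                for (Graph e : g) {
                    System.out.println(e.u + " " + e.v + " " + e.cost);
                }
            }
        }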

    Read the article

  • Are "skip deltas" unique to svn?

    - by echinodermata
    The good folks who created the SVN version control system use a structure they refer to as "skip deltas" to store the revision history of files internally. A revision is stored as a delta against an earlier revision. However, revision N is not necessarily stored as a delta against revision N-1, like this:

        0 <- 1 <- 2 <- 3 <- 4 <- 5 <- 6 <- 7 <- 8 <- 9

    Instead, revision N is stored as a delta against N-f(N), where f(N) is the greatest power of two that divides N:

        0 <- 1
        2 <- 3
        4 <- 5
        6 <- 7
        0 <------ 2
        4 <------ 6
        0 <---------------- 4
        0 <------------------------------------ 8 <- 9

    (Superficially it looks like a skip list but really it's not that similar - for instance, skip deltas are not interested in supporting insertion in the middle of the list.) You can read more about it here. My question is: Do other systems use skip deltas? Were skip deltas known/used/published before SVN, or did the creators of SVN invent it themselves?
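
    A small illustration of the arithmetic (not SVN's actual code): f(N) is N's lowest set bit, so the base revision is N minus that bit, and reconstructing any revision walks at most O(log N) deltas back to revision 0.

        public class SkipDelta {

            // Greatest power of two dividing n, i.e. n's lowest set bit (n > 0).
            static int f(int n) {
                return n & -n;
            }

            // Print the chain of base revisions needed to reconstruct revision n from revision 0.
            static void printChain(int n) {
                StringBuilder chain = new StringBuilder(Integer.toString(n));
                while (n > 0) {
                    n -= f(n);                       // the revision this delta is stored against
                    chain.append(" -> ").append(n);
                }
                System.out.println(chain);
            }

            public static void main(String[] args) {
                printChain(9);   // 9 -> 8 -> 0
                printChain(7);   // 7 -> 6 -> 4 -> 0
            }
        }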

    Read the article

  • Apache spawning too many processes despite MaxClients and other constraints

    - by Josh Nankin
    Here are my MPM constraints:

        StartServers         10
        MinSpareServers      10
        MaxSpareServers      10
        MaxClients           10
        MaxRequestsPerChild  2000

    However, despite this, I have over 20 Apache processes running currently, and in the past hour or two there have been as many as 40-50. Shouldn't MaxClients and MaxSpareServers keep the number of processes under control (i.e. about 10)? Is there something I'm missing?

    Read the article

  • Apache Balancing by source IP

    - by Daniel
    I am using Apache's Proxy Balancer to balance one subdomain (e.g. subdomain.domain.com) to an application which is located on 2 servers. Here is an extract from my Apache configuration file:

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        <Proxy balancer://cluster1>
            BalancerMember http://server1:28081 route=w1
            BalancerMember http://server2:28082 route=w2
        </Proxy>

        ProxyPass /path balancer://cluster1/path
        ProxyPassReverse /path balancer://cluster1/path

    My question is whether it's possible to decide, based on the source IP address, which BalancerMember should be used for a request, e.g. to send requests from 1.2.3.4 to member 1.

    Read the article

  • What is the bottleneck of my Apache server?

    - by rrh
        $ netstat -anp | grep :80 | grep TIME_WAIT | wc -l
        840
        $ netstat -anp | grep :80 | grep ESTABLISHED | wc -l
        50

    Memory usage: 850 MB / 1000 MB. apache2.conf contains:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

    Are there any configuration changes that can help me, or is my RAM the bottleneck here? Urgent help needed!

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup job slowly drop so low that the BE job rate display in the "current jobs" view goes blank. Since we usually only back up changed files for one day, the job is usually small and finishes overnight and we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and send them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the mentioned working-set job has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • RAID setup for maximizing data retention and read speed

    - by cat pants
    My goals are simple: maximize data retention safety, and maximize read speeds. My first instinct is to do a three-drive software RAID 1. I have only used fakeraid RAID 1 in the past and it was terrible (it would actually have led to data loss if it weren't for backups). Would you say software RAID 1 or a cheap actual hardware RAID card? The OS will be Linux. Could I start with a two-drive RAID 1 and add a third drive on the fly? Can I hot swap? Can I pull one of the drives, throw it into a new machine, and be able to read all the data? I do not want a situation where I have a RAID card fail and have to try to find the same chipset in order to read my data (which I am assuming can happen). Please clarify any points on which it sounds like I have no idea what I am talking about, as I am admittedly inexperienced here. (My hardest lesson was fakeraid, lol.) Thanks!

    Read the article

  • Nginx max worker_connections and access log

    - by MotoTribe
    I'm troubleshooting an issue with my site. I see in the nginx error log that the max worker_connections limit was reached when the site went down. I'm not seeing an increase in requests during that time in the nginx access log. Does that mean the MySQL database had a bottleneck at that time that caused the requests to queue up? Or would nginx not log any requests that were made after the max worker_connections limit was reached?

    Read the article

  • Data structures for a 3D array

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, but the 3D side comes in when moving down into caves and whatnot. This is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that it seems my tiles cannot occupy the same space as a tile on the same z level. My current structure means that each block has its own z variable. This is what it used to look like:

        map.blockData[x][y][z] = new Block();

    However, now it works like this:

        map.blockData[x][y] = new Block(z);

    I'm not sure why, but if I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.
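
    One way to keep the memory savings of a 2D array while still allowing several z levels per x/y cell is to store a sparse map from z to Block in each column, so only occupied depths use memory. A minimal sketch of that idea follows; the class and field names are illustrative, not from the question.

        import java.util.HashMap;
        import java.util.Map;

        class Block {
            final int z;
            Block(int z) { this.z = z; }
        }

        // One x/y column of the world: tiles stored sparsely by z level.
        class Column {
            private final Map<Integer, Block> byDepth = new HashMap<Integer, Block>();

            void set(int z, Block b) { byDepth.put(z, b); }

            Block get(int z) { return byDepth.get(z); }   // null if nothing at that depth
        }

        public class WorldDemo {
            public static void main(String[] args) {
                Column[][] blockData = new Column[64][64];     // assumed map size
                blockData[3][5] = new Column();
                blockData[3][5].set(0, new Block(0));          // surface tile
                blockData[3][5].set(-1, new Block(-1));        // cave tile below, same x/y
                System.out.println(blockData[3][5].get(-1).z); // prints -1
            }
        }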

    Read the article

  • CMS and Databases vs. DIY

    - by hozza
    I have been programming for many years now, primarily in PHP and the like, and would consider myself an intermediate programmer. Some of my online projects have now gone global and are very widely used, and I am now in deep thought about scalability etc. All of my systems so far are written in PHP, with no standard database structure such as MySQL. Instead, our databases use an 'operating system style' method of storing information: files and folders, if you will. We also do not use any outside/third-party software or CMS; so far this has worked out extremely well. Most people, when they hear about the way we do things, criticize it and say it is an idiotic idea, but normally after seeing our systems in more depth they are converted to our way of doing things. Is it really that bad not to use a standard database system and to use only the one (slightly heavier than others) language of PHP? How well, on the face of it, will this kind of setup scale? N.B. Our systems include things such as account and user management, documentation development and task/project managing.

    Read the article

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial

    In the tutorial I'm finding the wording around migrating kernel threads, from having access to all CPUs to running only in a certain cpuset, a bit ambiguous. The tutorial says the following:

        Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs.

    The tutorial then goes on to say:

        However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command.

        [zuul:cpuset-trunk]# cset shield -k on
        cset: --> activating kthread shielding
        cset: kthread shield activated, moving 70 tasks into system cpuset...
        [==================================================]%
        cset: done

    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

    Read the article

  • SQL Management Studio external database only

    - by Robuust
    I'm trying to speed up my PC, and I figured out that a full version of SQL Management Studio 2012 is installed, including a localhost server. I only need to connect to remote hosts, so the local server that runs by default should be disabled. Is there an easy way to disable certain parts so I can speed up my PC and its boot time? Thanks in advance. I really have no clue which processes I can disable without ruining everything.

    Read the article

  • Chapter 7–Enforced Data Protection

    - by drsql
    As the book progresses, I find myself veering from the originally stated outline quite a bit, because as I teach about this more (and I am teaching a daylong db design class in August at http://www.sqlsolstice.com/ … shameless plug, but it is on topic :) ), I start to find that a given order works better. Originally I had slated myself to talk more about modeling here for three chapters, then get back to the more implementation-oriented topics to finish out the book, but now I am going to keep plugging through...(read more)

    Read the article
