Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Maximise network transfer speed of various applications

    - by Alex
    When using nc, scp, or wget to transfer files between 2 machines on a dedicated 2Mbps link, I get speeds between 0.5 and 1 Mbps. However, when I use iperf -c 10.0.1.4 -t 20 -P 12 (for example) I can maximise the speed of the link (getting a stable 2Mbps). Is there a way to make single-stream transfers (such as those done by scp) utilise all or most of the link? Some kind of TCP settings, or iptables...?
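
    One knob to try (a sketch, not a guaranteed fix; the right sizes depend on the link's round-trip time): single TCP streams are often capped by the kernel's socket buffer limits, so raising the maximums is a common first step.

        # allow larger TCP windows for a single connection (values illustrative)
        sysctl -w net.core.rmem_max=4194304
        sysctl -w net.core.wmem_max=4194304
        sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"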

  • Script to calculate the Median value for SQL Server data

    The standard SQL language has a number of aggregate functions like SUM, MIN, MAX, and AVG, but one common statistical function that SQL Server does not have is a built-in aggregate for the median. The median is the value that falls in the middle of a sorted result set, with equal parts smaller and equal parts greater. Since there is no built-in implementation for the median, the following is a simple solution I put together to find it.
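
    The article's own script is not reproduced in this excerpt; as a sketch of one common pre-2012 approach (table and column names are placeholders):

        -- median of val: average the middle one or two rows of the sorted set
        SELECT AVG(1.0 * val) AS median
        FROM (
            SELECT val,
                   ROW_NUMBER() OVER (ORDER BY val) AS rn,
                   COUNT(*) OVER () AS cnt
            FROM dbo.SomeTable
        ) AS ordered
        WHERE rn IN ((cnt + 1) / 2, (cnt + 2) / 2);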

  • How do I remove 1,000,000 directories?

    - by harper
    I found that more than 1,000,000 subdirectories have been created in a directory due to a bug. I want to remove all these directories, let's say in the directory WebsiteCache. My first approach was to use the command line tool:

        cd WebsiteCache
        rmdir /Q /S .

    This will remove all subdirectories except the directory WebsiteCache itself, since it is the current working directory. I noticed after two hours that the directories starting with A-H had been removed. Why does rmdir remove the directories in alphabetical order? Sorting them must take additional effort. What is the fastest way to delete such an amount of directories?
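
    A commonly cited faster approach on Windows (a sketch; it assumes the tree lives at C:\WebsiteCache) is to mirror an empty directory over the target with robocopy, which is often reported to delete far faster than rmdir on huge trees:

        mkdir C:\empty
        robocopy C:\empty C:\WebsiteCache /MIR /NFL /NDL /NJH /NJS
        rmdir C:\empty
        rmdir C:\WebsiteCache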

  • troubleshooting really slow login on a (linux) machine

    - by Peeter Joot
    Within the last couple of weeks, any attempt to log in to a specific Linux server has gotten really slow. Once I've logged in, things appear to run without significant delay, but some other login-like activities (like starting a new screen session) are slow. The machine's been rebooted a couple of times recently and that hasn't helped, and it doesn't appear to be $PATH search (where $PATH can sometimes include bad NFS mounts), which I've seen historically in our environment. I've also tried completely removing my .profile/.bash*/... type of init files to rule out anything bad there. I also see slow login for at least one other userid on the system. One thing I've noticed is the following message when trying to exit from a screen terminal: Utmp slot not found -> not removed and am wondering if this is related (having a vague recollection that utmp has something to do with login). Any idea what that message means, or how to fix it, and whether it would be related? Failing that, what sort of problem determination tools are available to investigate what is slowing down this login process?
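
    As a first diagnostic (a sketch, independent of the utmp message): tracing a fresh login shell with timestamps usually shows where the wall-clock time goes.

        # -f: follow children, -tt: microsecond timestamps, -T: time per syscall
        strace -f -tt -T -o /tmp/login.trace bash -l -c exit
        # then scan /tmp/login.trace for long gaps or large <...> durations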

  • Best use of new express card on Windows

    - by jckdnk111
    I just bought a 48GB SSD express card for my laptop and I am trying to decide how best to use it. I will be running some sort of virtualization (prob VirtualBox) to test / learn Windows Server administration. I am running Windows 7 Ultimate 64-bit. I have 4GB of RAM and a 7200 RPM SATA hard disk. The express card will read at 115MB/s and write at 65MB/s. So how best to use this new disk? ReadyBoost, relocate the pagefile, store VM disks, mix / match?

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3GB/min. These working-set jobs start out at perhaps 40MB/min and over the course of the backup job slowly drop down so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job mentioned has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the real world has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p/htcache -l15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
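
    One avenue (a sketch; whether it keeps pace depends on the storage) is to run htcacheclean in daemon mode, so pruning is continuous instead of one enormous scan:

        # wake every 10 minutes, run niced, remove empty dirs, hold the cache at 15G
        htcacheclean -d10 -n -t -p /htcache -l 15G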

  • What are your tricks for optimizing your Subversion configuration?

    - by Scott Markwell
    For a Linux or Windows system, what tricks do you do to optimize your Subversion server? The following are my current tricks for a Linux system serving over Apache with HTTPS and backed by Active Directory using LDAP authentication:

        - Enabling KeepAlive on Apache
        - Disabling SVNPathAuthz
        - Increasing the LDAP cache
        - Using the FSFS storage method instead of BDB

    Feel free to call this into question. I don't have hard proof that FSFS outperforms BDB, only lots of tribal knowledge and hearsay.
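
    As a sketch, the first two tricks correspond to these Apache directives (the location and repository path are illustrative; SVNPathAuthz comes from mod_dav_svn):

        KeepAlive On
        <Location /svn>
            DAV svn
            SVNParentPath /var/svn
            SVNPathAuthz off
        </Location>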

  • How to input data into user-defined variables in a MySQL query

    - by user292791
    Simple shell script:

        echo "Enter 1 for month of March"
        echo "Enter 2 for month of April"
        echo "Enter 3 for month of May"
        read Month
        case "$Month" in
            1) echo "enter establishment name"
               read a
               mysql -u root -p $a < "March.sql";;
            2) echo "enter establishment name"
               read b
               mysql -u root -p $b < "April.sql";;
            3) echo "enter establishment name"
               read c
               mysql -u root -p $c < "May.sql";;
        esac

    In this I have three other query files, March.sql, April.sql, and May.sql, which I'm linking to in the shell script. Example of a .sql file:

        SELECT DISTINCT substr( a.case_no, 3, 2 ), b.case_type, b.type_name, a.case_no
        INTO OUTFILE '/tmp/April.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n'
        FROM Civil_t AS a, Case_type_t AS b, disposal_proc AS c
        WHERE substr( a.case_no, 3, 2 ) = b.case_type
        AND a.date_of_decision BETWEEN '2014-04-01' AND '2014-04-30'
        AND a.case_no = c.case_no
        AND a.court_no = 1;

    I have to alter the .sql script every time. Is there any method to read the variables from the shell script and use them in MySQL? For example:

        echo "enter date"
        read a  # input date

    Now I have read a date and I want to use it in the March.sql query in the WHERE clause. Is there any method of using this variable in the .sql query?
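
    One common approach (a sketch; @decision_date is an illustrative name, and it relies on the mysql client's SOURCE command being usable via -e): set a MySQL user variable from the shell, then reference it inside the .sql file instead of a literal date:

        echo "enter date"
        read a
        mysql -u root -p -e "SET @decision_date := '$a'; SOURCE March.sql;"

    In March.sql, the hard-coded date can then become, for example, AND a.date_of_decision = @decision_date.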

  • MicrosoftOnline Migration - Why do I have to wait several minutes before I can click "Finish?"

    - by Giffyguy
    I'm using the MicrosoftOnline Internet E-Mail Mailbox Migration Wizard. I'm moving my email from several GMail accounts to my Microsoft Exchange Online mailboxes. Every time I migrate all or part of a GMail mailbox, I have to wait about five minutes after it completes migration before the Finish button becomes available. What is going on during this time? Is it something I am doing wrong, or is the system just slow?

  • SQL Management Studio external database only

    - by Robuust
    I'm trying to speed up my PC, and I figured out that a full version of SQL Management Studio 2012 is installed, including a localhost server. I only need to connect to remote hosts, so running a local server by default should be disabled. Is there an easy way to disable certain parts so I can speed up my PC and its boot time? Thanks in advance. I really have no clue which processes I can disable without ruining everything.
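
    If the goal is just to stop the local engine (a sketch, assuming the default instance, whose service name is MSSQLSERVER; named instances appear as MSSQL$Name):

        net stop MSSQLSERVER
        sc config MSSQLSERVER start= disabled

    Management Studio itself is only a client and keeps working against remote hosts.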

  • RAID setup for maximizing data retention and read speed

    - by cat pants
    My goals are simple: maximize data retention safety, and maximize read speeds. My first instinct is to do a three-drive software RAID 1. I have only used fakeraid RAID 1 in the past and it was terrible (it would have led to data loss, actually, if it weren't for backups). Would you say software RAID 1 or a cheap actual hardware RAID card? The OS will be Linux. Could I start with a two-drive RAID 1 and add a third drive on the fly? Can I hot swap? Can I pull one of the drives, throw it into a new machine, and be able to read all the data? I do not want a situation where I have a RAID card fail and have to try to find the same chipset in order to read my data (which I am assuming can happen). Please clarify any points on which it sounds like I have no idea what I am talking about, as I am admittedly inexperienced here. (My hardest lesson was fakeraid lol) Thanks!
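
    For the grow-on-the-fly part, a sketch with Linux software RAID (mdadm; device names are illustrative):

        # two-disk mirror now...
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        # ...third mirror later, while the array stays online
        mdadm --add /dev/md0 /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=3

    Because md metadata lives on the member disks themselves, a drive can generally be read in another Linux machine without matching controller hardware.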

  • In need of assistance for recovering a lost partition

    - by Tek
    The program that has worked for the most part is Active@ Partition Recovery. I'm so close, but yet so far, from recovering my data. Okay, so here's what happens. In the following screenshot (I blanked out a folder and filename with profanity, in case some of you guys are at work :P), it detects the partition I accidentally deleted, with 100% of my data listed. Of course, I didn't write ANY data to that drive after I did this. But when I click "recover" and the recovery process finishes, in Windows I click on the partition that was just recovered... it's EMPTY. The program seems to be able to see my lost files, but when I recover the partition, Windows doesn't seem to think the same =( Things I tried: I ran a chkdsk /r /f after I recovered the partition; apparently it couldn't find any errors. I tried using other software like TestDisk to recover the partition, but they (all) act similarly to Windows, in that they detect the (missing) partition, but when I browse it there are no files. The partition is there along with all the file and data information. The sector information is also in the screenshot; is there any way I can use this to my advantage in recovering my data? Other information: dual boot Win8 / Ubuntu 12.10 x64, 1TB internal desktop drive, GPT layout, NTFS-formatted drive, 64K allocation size.

  • SQL Server - VMWare install - Utilize more RAM

    - by alex
    We have a SQL Server machine. It’s a VMware image (running on ESXi hardware, etc.). It has Windows 2008 x64 Standard, and the SQL install is SQL 2008 Standard. The virtual machine has 12GB of RAM and 4 virtual CPUs. The box is suffering from near 100% CPU a lot of the time. I enabled AWE, but SQL Server only seems to use 3-4GB of RAM. Is there a way of making it use the available RAM more effectively? Caching results, for example?
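
    A sketch worth checking first (AWE only affects 32-bit editions; on x64 the relevant knob is 'max server memory'):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- illustrative cap: leave ~2 GB of the 12 GB for the OS
        EXEC sp_configure 'max server memory (MB)', 10240;
        RECONFIGURE;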

  • Javascript storing data

    - by user985482
    Hi, I am a beginner web developer and am trying to build the interface of a simple e-commerce site as a personal project. The site has multiple pages with checkboxes. When someone checks an element, it retrieves the price of the element and stores it in a variable. But when I go to the next page and click on new checkbox products, the variable automatically resets to its original state. How can I save the value of that variable in JavaScript? This is the code I've written using sessionStorage, but it still doesn't work: when I move to the next page, the value is reset. How can I write this code so that it doesn't reset on each page change? All pages on my website use the same script. (Cleaned up below, with the two bugs fixed: the total is now restored from sessionStorage on each page load instead of starting at 0, and unchecking also updates the stored value.)

        $(document).ready(function () {
            // restore the running total saved on earlier pages (defaults to 0)
            var total = parseInt(sessionStorage.getItem('total'), 10) || 0;
            $('input.check').click(function () {
                // price is the first 3 characters after the currency symbol
                var price = parseInt($(this).parent().children('span').text().substr(1, 3), 10);
                if ($(this).is(':checked')) {
                    total += price;
                } else {
                    total -= price;
                }
                sessionStorage.setItem('total', total); // persist across page loads
                alert(sessionStorage.getItem('total'));
            });
        });

  • How do you disable the magnifier in Windows Vista?

    - by PhantomDrummer
    Every so often while I'm working, if I accidentally jolt the mouse, the magnifier starts. I'm fairly sure the cause is that some combination of mouse keys is supposed to start the magnifier, and I occasionally hit the right keys accidentally. However, I've never been able to reproduce the behaviour deliberately, so I don't know which combination. This is very annoying, since switching the magnifier off is non-trivial (control panel, or run dialog, which are hard to use when the magnifier keeps - ummm - windowing and magnifying the bit of screen you're about to click on :-) ). So I'm wondering if there's any way to disable the magnifier completely, so that it doesn't start when you do whatever it is you'd normally do to the mouse to start it? Anyone know?

  • How many megabytes per second may I expect from a gigabit USB 2.0 network card under linux?

    - by Nakedible
    I'm interested in the actual real-world throughput attainable with an external 1000BaseT USB 2.0 network card under Linux. I have been able to attain 90 megabytes per second on a PCI-E interface, but the USB 2.0 bus has a theoretical limit of 480Mbit/s, and in practice carries less than 40 megabytes per second. Is the actual throughput attainable with such a card under Linux 40, 30, 20, or even as low as 10 megabytes per second, i.e. no better than a normal 100BaseT network card?

  • Data structures for a 3D array

    - by Smallbro
    Currently I've been using a 3D array for my tiles in a 2D world, where the 3D side comes in when moving down into caves and whatnot. That is not memory efficient, so I switched over to a 2D array and can now have much larger maps. The only issue I'm having now is that a tile cannot share its (x, y) position with a tile on a different z level. My current structure means that each block has its own z variable. This is what it used to look like: map.blockData[x][y][z] = new Block(); however, now it works like this: map.blockData[x][y] = new Block(z); If I decide to use the same space on, say, the floor below, it won't allow me to. Does anyone have any ideas on how I can add a z-axis to my 2D array? I'm using Java, but I reckon the concept carries across different languages.
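
    One sketch of a middle ground (names here are illustrative, not from the original code): keep the 2D array, but store a small per-cell map from z to Block, so only occupied levels cost memory:

        import java.util.HashMap;
        import java.util.Map;

        class Block { /* existing tile class */ }

        // one column of blocks at a given (x, y), keyed by sparse z level
        class BlockColumn {
            private final Map<Integer, Block> byZ = new HashMap<>();

            void set(int z, Block b) { byZ.put(z, b); }
            Block get(int z)         { return byZ.get(z); }  // null if empty
        }

        // usage: map.blockData[x][y] = new BlockColumn();
        //        map.blockData[x][y].set(z, new Block());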

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial. In the tutorial, I'm finding the wording around migrating kernel threads (from having access to all CPUs to running only in a certain cpuset) a bit ambiguous. The tutorial says the following: "Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set, because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs." The tutorial then goes on to say: "However, if your application demands an even 'quieter' shield, then you can move all movable kernel threads into the unshielded system set with the following command."

        [zuul:cpuset-trunk]# cset shield -k on
        cset: --> activating kthread shielding
        cset: kthread shield activated, moving 70 tasks into system cpuset...
        [==================================================]%
        cset: done

    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?

  • MBR seems to be gone

    - by bobobobo
    So, horror story for everyone. I bought two spanking new HDDs. MM!! Gbitage. I removed all my old HDDs, physically labelled them, and was preparing to install all new HDDs (fresh sys install included!). To make sure what HDD was what, I popped each OLD HDD (data filled!) into a Thermaltake Blacx toaster... surprisingly, BOTH couldn't be read. I didn't have static on my hands! I'm certain of it. I touched metal, touched wood, before beginning this all. Thinking that was strange, I hauled up the new sys, installed Win XP (of course!) on the new HDD, and now the two OLD HDDs (data filled!) that were put into the toaster cannot be read. And they had tons of data on them. I read about MBRs being nuked, and it sounds like that is what it is. But I'm at a loss what to do. There are so many MBR recovery programs out there, I kind of feel overwhelmed. I don't want to lose my data by just picking one, yet it seems so close within reach, I'm not panicking anymore... Anybody have a play-by-play that I could follow? I just don't want to spend $900 on data recovery centers if I can do this myself...

  • 5 Reasons You Must Start Capturing Baseline Data

    It is widely acknowledged within the SQL Server community that baselines represent valuable information that DBAs should capture. Unfortunately, very few companies manage to log and report on this information, and DBAs are then forced to troubleshoot from the hip and scramble to find evidence to prove that the database is not the problem. This article will make a compelling argument for why DBAs must start capturing baseline information, and will create a roadmap for subsequent posts.

  • Understanding Keyword Research Data

    Whenever any company decides to launch a campaign to increase its business sales and raise its profits over the internet, its first concern must be increased traffic. This is the prime goal of every internet venture in general. What is the purpose of having any content on the web when there are no visitors to that website and nobody is actually visiting your sites?
