Search Results

Search found 941 results on 38 pages for 'fastest'.

Page 11/38 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Which is the fastest way to move 1Petabyte from one storage to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand this is something I should solve myself, but as you will see it is a bit difficult. A small description:

    Now:
        Storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6)
        100 compute nodes with 2x InfiniBand
        1 InfiniBand switch with 36 ports

    After:
        Storage = previous storage + another 1 PB using DDN S2A9900 or LSI E5400 (still to decide) (Lustre 2.0)
        8 OSS, 10 GigE network
        100 compute nodes with 2x InfiniBand

    Previous experience: transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: doing the math, I conclude that we are going to need about a month to transfer the data from one side to the other. During this time the researchers will have to step back, and I'm personally not happy with that. I mention the InfiniBand connections because I think there may be a chance to use 18 compute nodes (18 * 2 IB = 36 ports) to move the data from one storage to the other. I'm not sure the IB switch will handle all the traffic, but even if it struggles it should still be faster than 10 GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.
    Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is formatted with Lustre 2.0, then reformat the space where (A) was with Lustre 2.0, move (B) onto it, and extend it with the space where (B) was. This way we end up with (A) and (B) on separate filesystems of 1 PB each.
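
    A rough sketch of the compute-node idea: farm one tar pipe per top-level directory out to the nodes, so the copy runs over their InfiniBand-attached Lustre mounts instead of through a single host. This assumes both filesystems are mounted on every node (here as /old and /new), passwordless SSH to hosts named node01..node18 (hypothetical names), and that top-level directories are a reasonable unit of parallelism:

        # one job per top-level directory, spread round-robin across 18 compute nodes
        i=0
        for dir in /old/*/ ; do
            node=$(printf 'node%02d' $(( i % 18 + 1 ))); i=$(( i + 1 ))
            ssh "$node" "tar -C /old --record-size 2048 -b 2048 -cf - '$(basename "$dir")' \
                | tar -C /new --record-size 2048 -b 2048 -xf -" &
        done
        wait   # then a final rsync pass to catch anything that changed mid-copy

    Whether the IB switch becomes the bottleneck is easy to test by starting with two or three nodes and watching per-node throughput before committing all 18.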

    Read the article

  • What is the fastest way to reload history commands begin with certain characters in linux?

    - by gerry
    In DOS we can type the first few characters to filter the command history and find the right command quickly. How do you do the same thing in Linux? For example, when I am testing a local server:

        cd
        sudo /etc/init.d/vsftpd start
        wget ...
        ls
        emacs ...
        sudo /etc/init.d/vsftpd restart
        sudo /etc/init.d/vsftpd stop
        ...

    In DOS you can easily type sudo and switch among the three commands beginning with it using the arrow keys. But in Linux, is the command below the best we can do?

        history | grep sudo

    I don't like it, because the history can easily become a mess, and reusing a match still needs a mouse action (copy and paste).
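
    For what it's worth, bash already has two built-in ways to do exactly this: Ctrl-R starts an incremental reverse search through history, and the readline history-search functions complete from history based on the prefix already typed. A minimal ~/.inputrc using the standard readline bindings (takes effect in new shells, or immediately after bind -f ~/.inputrc):

        # ~/.inputrc: make Up/Down step only through history entries
        # that start with what has already been typed
        "\e[A": history-search-backward
        "\e[B": history-search-forward

    With that in place, typing sudo and pressing Up cycles through only the sudo ... commands, much like the DOS behaviour described above; !sudo followed by Enter re-runs the most recent command starting with sudo.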

    Read the article

  • what is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious. I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc). The simple option would be to put the new disk in the USB chassis, copy the files, then swap the drives over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?) then it would be significantly faster to swap the drives first and read from the old drive over USB, right? Then the other consideration is that copying lots of files is usually slower than a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.) Summary:
    Does a USB SATA chassis suffer from a read/write inequality? (like a USB pen drive does, but unlike a direct SATA connection)
    Can Windows 7 do sequential access? (I can't find confirmation whether Robocopy does this.)
    Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?

    Read the article

  • What is the simplest and fastest way to transfer large file through a Windows network?

    - by Sake
    I have a Windows 2000 Server machine running MS SQL Server that stores over 20 GB of data. The database is backed up every day to the second hard drive. I want to transfer those backup files to another computer to build another test server and for recovery practicing. (The backup never actually got restored for almost 5 years. Don't tell my boss about that!) I have trouble transferring that huge file over the network. I've tried a plain network copy, an Apache download, and FTP. Every method I tried ends up failing when the amount of data transferred reaches 2 GB. The last time I successfully transferred the file, it was with a USB-attached external hard drive. But I want to perform this task routinely and preferably automatically. What is the most pragmatic approach for this situation?
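
    The hard stop at 2 GB is the classic symptom of a 32-bit file-size limit somewhere in the chain (older FTP/HTTP clients, or a FAT32-formatted destination), so it is worth checking the target filesystem and tools before blaming the network. For a routine, restartable pull over the Windows network, one common approach is robocopy (built into Vista/2008 and later, available for Windows 2000 via the Resource Kit); the share and path names below are placeholders:

        robocopy \\sqlserver\backup \\testserver\restore *.bak /Z /R:3 /W:10 /NP /LOG:C:\logs\bakcopy.log

    /Z makes the copy restartable if the link drops mid-file, and running the command from Task Scheduler covers the "routinely and automatically" part.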

    Read the article

  • What settings to use for fastest reloading of a MySQL backup?

    - by Alex R
    I have a MySQL database which dumps to a 3.5 GB backup (mysqldump) in about 10 minutes. But reloading this backup on a standby / test server takes upwards of 12 hours. What are some settings that would maximize reloading performance? The most promising appear to be innodb_buffer_pool_size, innodb_additional_mem_pool_size, and innodb_log_buffer_size... but I'm reaching the limits of my trial-and-error approach. Which of these settings "should" be the most important? Through trial-and-error I was not able to get more than 70% CPU utilization and 63% memory utilization. I'd like both at 100% during a reload. All tables are InnoDB.
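
    Beyond the buffer sizes mentioned, the settings that usually matter most for a straight mysqldump reload are a large innodb_buffer_pool_size, a larger innodb_log_file_size (fewer checkpoints during the bulk load), and relaxed durability while the import runs on a box where losing the in-flight load is acceptable. A sketch of the kind of wrapper often tried; the sizes, file name, and credentials are illustrative only, and newer mysqldump output already includes some of the SET statements:

        # my.cnf on the standby server, for the duration of the reload (restart mysqld after editing;
        # on older MySQL versions, remove the old ib_logfile* files when changing the log file size):
        #   innodb_buffer_pool_size        = 4G
        #   innodb_log_file_size           = 512M
        #   innodb_flush_log_at_trx_commit = 2
        # then defer constraint/index checks for the import session:
        ( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
          cat backup.sql
          echo "COMMIT;" ) | mysql -u root -p standby_db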

    Read the article

  • What's the fastest and automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually popping it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading (syncing) 2 GB; it can take hours. Copying 2 GB over the network is also slow, because we're dealing with 10,000 little files that total 2 GB, not one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
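
    One approach worth trying, since both machines run Windows 7: robocopy's mirror mode copies whatever changed since the previous run, and /MT copies several files at once, which is exactly where many-small-files transfers lose time. The share and folder names below are placeholders:

        robocopy \\PC1\projects D:\projects_copy /MIR /MT:16 /FFT /R:1 /W:1 /NFL /NDL /LOG:C:\sync.log

    Scheduled nightly via Task Scheduler, it covers the "automatic" part; even when most of the files change, the multithreaded copy usually beats a plain Explorer drag-and-drop for 10,000 small files.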

    Read the article

  • What is the fastest CPU my laptop can support?

    - by Dave
    I have a Dell Latitude D830 laptop and would like to speed up compile times on it. I have confirmed that it is, indeed, processor time that is the bottleneck. How can I tell which processors are compatible with the motherboard, so I can pick the best one available? I run dual-boot Ubuntu Maverick and Windows 7. lshw tells me that my motherboard is OHN338 from Dell, Inc. If anyone has a generic solution, i.e. "For motherboard X, here is how you find out what processors are supported", that would make this question much more useful to future visitors. But if you also know of a way specific to my model, that would be great as well.
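
    For the generic part of the question, the usual route on Linux is to read the baseboard, CPU socket, and chipset identifiers and then check the vendor's CPU support list against them; it is the chipset and BIOS, not just the socket, that bound which CPUs will actually run. A sketch of the commands typically used:

        sudo dmidecode -t baseboard -t processor   # motherboard model plus current CPU and socket
        sudo lshw -class cpu -class bridge         # CPU details and the host bridge / chipset
        lspci -nn | grep -i 'host bridge'          # chipset PCI ID to look up supported CPU families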

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:

        dbs=( analytics auth logs users )
        server=localhost
        conn="-h ${server} -u ${username} --password=${password}"

        # Stop the replication client so we don't encounter weird data.
        echo "STOP SLAVE" | mysql ${conn}

        # Bunch of bulk insert optimizations
        echo "SET autocommit=0" | mysql ${conn}
        echo "SET unique_checks=0" | mysql ${conn}
        echo "SET foreign_key_checks=0" | mysql ${conn}

        # Restore all databases and tables.
        for sourcedb in ${dbs[*]}
        do
            destdb=${prefix}${sourcedb}
            echo "Dropping database ${destdb}..."
            echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
            echo "CREATE DATABASE ${destdb}" | mysql ${conn}

            # First, all the tables.
            for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
                if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                    createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                    echo "Restoring ${destdb}/${table}..."
                    echo "$createTable ;" | mysql $conn $destdb
                    insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                    echo "$insertData" | mysql $conn $destdb
                fi
            done
        done

        echo "SET foreign_key_checks=1" | mysql ${conn}
        echo "SET unique_checks=1" | mysql ${conn}
        echo "COMMIT" | mysql ${conn}

        # Restart the replication client
        echo "START SLAVE" | mysql ${conn}

    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all InnoDB tables. Thanks!
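
    One alternative worth benchmarking against the per-table INSERT ... SELECT loop: pipe mysqldump straight into the destination database on the same server. Its extended-insert output batches many rows per statement and it handles views and table options for free; whether it actually wins depends on the dataset. This sketch reuses the ${conn} variable from the script above, and the dev_ prefix is illustrative:

        for db in analytics auth logs users; do
            echo "DROP DATABASE IF EXISTS dev_${db}; CREATE DATABASE dev_${db};" | mysql ${conn}
            mysqldump ${conn} --single-transaction --quick --routines "${db}" | mysql ${conn} "dev_${db}"
        done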

    Read the article

  • What is fastest way to backup a disk image over LAN?

    - by David Balažic
    Sometimes I boot SystemRescueCd or a similar live Linux on a PC to back up its hard drive over the local network to my server. I have noticed many times that the transfer speed is not optimal (slower than both the HDD and the network allow). Any rules of thumb on what to do and what to avoid? What I typically do is something like:

        dd bs=16M if=/dev/sda | nc ...                       # on the client
        nc ... | dd bs=16M of=/destination/disk/backup1      # on the server

    I also throw in lzop (others are way too slow) and sometimes an on-the-fly md5sum calculation (of both the uncompressed and the compressed stream). I try to add (m)buffer (or other alternatives) to improve throughput (and get a progress indicator). I have noticed that even with enough free CPU, adding commands to the pipeline slows things down. Typically the destination is on an NTFS volume (accessed via ntfs-3g, with the big_writes option).
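
    A sketch of the kind of pipeline that tends to keep the disk, the compressor, and the network busy at the same time: compress before the socket, decompress on the far side, and put the buffer next to the slow end (the NTFS destination) so bursts don't stall the sender. Host name, port, buffer size, and netcat flag spellings (which differ between the traditional and OpenBSD variants) are illustrative:

        # client (the machine being imaged)
        dd if=/dev/sda bs=16M | lzop -1 | nc -q 5 backupserver 9000

        # server
        nc -l -p 9000 | mbuffer -m 1G | lzop -d | dd of=/destination/disk/backup1 bs=16M

    Storing the compressed stream directly (dropping the lzop -d stage) also cuts the NTFS write volume, at the cost of a decompress step on restore; and the md5sum is cheaper to compute from a tee / process substitution branch than inline in the main pipe.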

    Read the article

  • What is the fastest way to type an en dash in Windows 7?

    - by Geoff Olynyk
    Simple question: What's the quickest way to get an en dash (–, Unicode U+2013 EN DASH) in Windows? Note that this question is for all programs, not just Microsoft Word. Even better if it can be copied to the clipboard as a pure Unicode character, with no formatting information (typeface, etc.) so that when I paste it into Word or Excel or other rich text editors, it doesn't carry its format with it.

    Read the article

  • How can I upgrade from Windows 8 x32 to x64 fastest & easiest?

    - by parkviewK
    I was upgrading the hardware in a family member's computer and purchased the Windows 8 upgrade for $40 from Microsoft's website. Not thinking, after I installed it I realized I now have 32-bit Windows on this 64-bit-capable machine. I did some research and it appears that the upgrade assistant just gives you whatever architecture you were upgrading from (Windows 7 32-bit). The question is, what is my best option here? I have already purchased a key; will it work for x64? I'm considering installing the Windows 8 Preview x64 and upgrading from there, but I'm not sure whether that will work or whether it will make me purchase it again.

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round-robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

        void method_one()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    Data* current_item = channel_cursors[i];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);

                    channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID onto the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

        void method_two()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                if(update_cursor->state_ == NOT_FILLED_YET) {
                    continue;
                }

                ::uint32_t update_id = update_cursor->list_id_;
                Data* current_item = channel_cursors[update_id];

                if(current_item->state_ == NOT_FILLED_YET) {
                    std::cerr << "This should never print." << std::endl; // it doesn't
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);

                channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }
        }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

        void method_three()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    std::size_t idx = i;

                    ACQUIRE_MEMORY_BARRIER;
                    if(update_cursor->state_ != NOT_FILLED_YET) {
                        //std::cerr << "Found via update" << std::endl;
                        i--;
                        idx = update_cursor->list_id_;
                        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                    }

                    Data* current_item = channel_cursors[idx];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    found_an_update = true;

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'Found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • What is the fastest way to send 100,000 HTTP requests in Python?

    - by Igor G.
    Hello, I am opening a file which has 100,000 URLs. I need to send an HTTP request to each URL and print the status code. I am using Python 2.6, and so far I have looked at the many confusing ways Python implements threading/concurrency. I have even looked at the Python concurrence library, but cannot figure out how to write this program correctly. Has anyone come across a similar problem? I guess generally I need to know how to perform thousands of tasks in Python as fast as possible - I suppose that means 'concurrently'. Thank you, Igor
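
    Not an answer inside Python, but as a baseline the same job can be expressed from the shell with curl and xargs, which is handy for sanity-checking whatever Python concurrency approach ends up being used (the file name, timeout, and parallelism level are illustrative):

        # urls.txt: one URL per line; run up to 100 requests at a time,
        # printing "<url> <status code>" for each
        xargs -P 100 -n 1 curl -s -o /dev/null --max-time 10 -w '%{url_effective} %{http_code}\n' < urls.txt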

    Read the article

  • What is the fastest way to unzip textfiles in Matlab during a function?

    - by Paul
    Hello all, I would like to scan the text of text files in Matlab with the textscan function. Before I can open a text file with fid = fopen('C:\path'), I need to unzip it first. The files have the extension *.gz. There are thousands of files which I need to analyze, and high performance is important. I have two ideas: (1) use an external program and call it from the command line in Matlab; (2) use a Matlab 'zip' toolbox. I have heard of gunzip, but don't know about its performance. Does anyone know a way to unzip these files as quickly as possible from within Matlab? Thanks!
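
    If option (1), calling an external program, is acceptable, decompressing the whole batch up front in parallel from the shell and then pointing the Matlab loop at the plain-text files is usually the least fiddly route. A sketch (the path and job count are placeholders, and the -k "keep original" flag needs gzip 1.6 or newer; otherwise work on copies):

        find /data/textfiles -name '*.gz' -print0 | xargs -0 -n 16 -P 8 gunzip -k

    Matlab also ships its own gunzip function, but it works through the file list serially; on thousands of files a parallel shell pass tends to win.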

    Read the article

  • RSS parsing last build Date. Fastest way to do so please.

    - by Paul
        Dim myRequest As System.Net.WebRequest = System.Net.WebRequest.Create(url)
        Dim myResponse As System.Net.WebResponse = myRequest.GetResponse()
        Dim rssStream As System.IO.Stream = myResponse.GetResponseStream()
        Dim rssDoc As New System.Xml.XmlDocument()

        Try
            rssDoc.Load(rssStream)
        Catch nosupport As NotSupportedException
            Throw nosupport
        End Try

        Dim rssItems As System.Xml.XmlNodeList = rssDoc.SelectNodes("rss/channel")
        'For i As Integer = 0 To rssItems.Count - 1
        Dim rssDetail As System.Xml.XmlNode
        rssDetail = rssItems.Item(0).SelectSingleNode("lastBuildDate")

    Folks, this is what I'm using to parse an RSS feed for the last updated time. Is there a quicker way? Speed seems to be a bit slow on it as it pulls down the entire feed before parsing.
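
    Two things dominate the time here: downloading the whole feed and parsing all of it just to read one element. If the server cooperates, a conditional request avoids both; a quick way to check whether the feed host sends the relevant headers (shown with curl rather than VB.NET, and the URL is a placeholder):

        curl -sI http://example.com/feed.xml | grep -i -E 'last-modified|etag'

    If Last-Modified or ETag comes back, HttpWebRequest's IfModifiedSince property (or an If-None-Match header) lets the poll return 304 Not Modified without transferring the feed body; failing that, reading with an XmlReader and stopping at the first lastBuildDate element avoids loading the full document.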

    Read the article

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.

    Read the article

  • What's the fastest way to get CRUD over CGI on a database handle in Perl?

    - by mithaldu
    TL;DR: Want to write CGI::CRUD::Simple (a minimalist interface module for CGI::CRUD), but I want to check first whether I overlooked a module that already does that. I usually work with applications that don't have the niceties of frameworks and such already in place. However, a while ago I found myself in a situation where I was asking myself: "Self, I have a DBI database handle and a CGI query object, isn't there a module somewhere that can use this to give me some CRUD so I can move on and work on other things instead of spending hours writing an interface?" A quick survey on CPAN gave me:

        CGI::Crud
        Catalyst::Plugin::CRUD
        Gantry::Plugins::CRUD
        Jifty::View::Declare::CRUD
        CatalystX::CRUD
        Catalyst::Controller::CRUD
        CatalystX::CRUD::REST
        Catalyst::Enzyme

    Now, I didn't go particularly in-depth when looking at these modules, but, save the first one, they all seem to require the presence of some sort of framework. Please tell me if I was wrong and I can just plug any of those into a barebones CGI script. CGI::CRUD seemed to do exactly what I wanted, although it did insist on being used through a rather old and C-like script that must be acquired on a different site and then prodded in various ways and manners to produce something useful. I went with that and found that it works pretty neatly and that it should be rather easy to write a simple and easy-to-use module that provides a very basic [dbh, cgi IN]-[html OUT] interface to it. However, as my previous survey was rather short and I may have been hasty in dismissing modules or missed others, I find myself wondering whether that would only be duplication of work already done. As such I ponder the question in the title. PS: I tend to be too short in some of my explanations and make too many assumptions that others think about things similarly to me, resulting in leaving out critical details. If you find yourself wondering just what exactly I am thinking about when I say CRUD, please poke me in the comments and I'll amend the question.

    Read the article

  • What is the fastest way to scale and display an image in Python?

    - by Knut Eldhuset
    I am required to display a two dimensional numpy.array of int16 at 20fps or so. Using Matplotlib's imshow chokes on anything above 10fps. There obviously are some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred. These are data from a sensor that are supposed to have a real-time display, so the data has to be re-sampled on the fly.

    Read the article

  • What is the fastest, most efficient way to get up to speed on a new technology?

    - by SLC
    My current job involves working with a huge number of technologies, most of which are very niche and unheard of. In some cases I have to write something about the technology, or with the technology, such as some lessons, examples, or tutorials, on behalf of the developer of that technology or someone that is backing it. When I get told to learn about a new technology, my first port of call is to check our internal library, and then look on Amazon for a book on the subject. Failing that, or if the project is too small to warrant a purchase, I hit up Google and YouTube. However, the results of randomly googling what I want to learn are hit and miss. Some days, I can find everything I want to know in a series of lessons or videos, and it's no problem. Other times, I can find almost nothing, and I really have to piece things together from sites. The result is that there are various resources out there - videos, interactive lessons, tutorials, books, etc. - but when I need to learn something fast, I often don't know the best way to go about it. It's not about fun, because I don't always have the luxury of working my way through a 600-page textbook named "A Complete Guide To Technology X"; I have to deliver results quickly. One example I'd like to use is ASP.NET MVC 2, which is something I have been told to learn. I grabbed a book on MVC 1 to refresh my knowledge, but googling it doesn't produce much useful information. I've seen a ton of ScottGu's tutorials on it, but they are mostly feature presentations, and some date back almost a year. The same applies to Channel 9, and there are no books out yet on Amazon. My question therefore has two parts. The first asks, "Where are the best places to look to get the information needed to learn a new technology?" and the second asks, "What is the most efficient way to use such resources to learn a new technology?"

    Read the article

  • What is the fastest collection in c# to implement a prioritizing queue?

    - by Nathan Smith
    I need to implement a queue for messages on a game server, so it needs to be as fast as possible. The queue will have a maximum size. I need to prioritize messages once the queue is full by working backwards and removing a lower-priority message (if one exists) before adding the new message. The application is asynchronous, so access to the queue needs to be locked. I'm currently implementing it using a LinkedList as the underlying storage but have concerns that searching and removing nodes will keep it locked for too long. Here's the basic code I have at the moment:

        public class ActionQueue
        {
            private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>();
            private int _maxSize;

            /// <summary>
            /// Initializes a new instance of the ActionQueue class.
            /// </summary>
            public ActionQueue(int maxSize)
            {
                _maxSize = maxSize;
            }

            public int Count
            {
                get { return _actions.Count; }
            }

            public void Enqueue(ClientAction action)
            {
                lock (_actions)
                {
                    if (Count < _maxSize)
                        _actions.AddLast(action);
                    else
                    {
                        LinkedListNode<ClientAction> node = _actions.Last;
                        while (node != null)
                        {
                            if (node.Value.Priority < action.Priority)
                            {
                                _actions.Remove(node);
                                _actions.AddLast(action);
                                break;
                            }
                            node = node.Previous;   // walk towards the front of the queue
                        }
                    }
                }
            }

            public ClientAction Dequeue()
            {
                ClientAction action = null;
                lock (_actions)
                {
                    action = _actions.First.Value;
                    _actions.RemoveFirst();
                }
                return action;
            }
        }

    Read the article
