Search Results

Search found 72103 results on 2885 pages for 'file storage'.


  • Sun ZFS Storage Appliance [Japanese title; remainder lost to encoding]

    - by Norihito Yachita
    [Japanese article; the original text was lost to character-encoding damage. Recoverable details: the piece concerns the Sun ZFS Storage 7320 Appliance as the storage platform for an IaaS (Infrastructure as a Service) offering, cites an announcement around November 15 and a figure of 4,000 (unit lost), mentions InfiniBand connectivity, and notes the Sun ZFS Storage 7320 Appliance entered service around May-June 2011.]


  • SAS Expanders vs Direct Attached (SAS)?

    - by jemmille
    I have a storage unit with 2 backplanes. One backplane holds 24 disks, the other holds 12. Each backplane is independently connected to the RAID card through a single SFF-8087 port (4 channels / 12 Gbit). Here is where my question really comes in: can a backplane be overloaded, and how easily? All the disks in the machine are WD RE4 WD1003FBYX (black) drives, which average 115 MB/s on writes and 125 MB/s on reads. I know things would vary based on the RAID level or filesystem we put on top of that, but it seems that a 24-disk backplane with only one SFF-8087 connector should be able to overload the bus to the point of actually slowing it down. Based on my math, if I had a RAID-0 across all 24 disks and asked for a large file, I should in theory get 24 × 115 MB/s, which translates to 22.08 Gbit/s of total throughput. Either I'm confused or this backplane is horribly designed, at least for a performance environment. I'm looking at switching to a model where each drive has its own channel from the backplane (and new HBAs or a new RAID card).

    EDIT: more details. We have used pure Linux (CentOS), OpenSolaris, software RAID, hardware RAID, EXT3/4, and ZFS. Here are some examples using bonnie++:

    4 Disk RAID-0, ZFS
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      194MB/s   19%   92MB/s    11%   200MB/s   8%    310/sec
      194MB/s   19%   93MB/s    11%   201MB/s   8%    312/sec
      --------- ----  --------- ----  --------- ----  ---------
      389MB/s   19%   186MB/s   11%   402MB/s   8%    311/sec

    8 Disk RAID-0, ZFS
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      324MB/s   32%   164MB/s   19%   346MB/s   13%   466/sec
      324MB/s   32%   164MB/s   19%   348MB/s   14%   465/sec
      --------- ----  --------- ----  --------- ----  ---------
      648MB/s   32%   328MB/s   19%   694MB/s   13%   465/sec

    12 Disk RAID-0, ZFS
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      377MB/s   38%   191MB/s   22%   429MB/s   17%   537/sec
      376MB/s   38%   191MB/s   22%   427MB/s   17%   546/sec
      --------- ----  --------- ----  --------- ----  ---------
      753MB/s   38%   382MB/s   22%   857MB/s   17%   541/sec

    Now at 16 disks (RAID-0, ZFS) it gets interesting:
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      359MB/s   34%   186MB/s   22%   407MB/s   18%   1397/sec
      358MB/s   33%   186MB/s   22%   407MB/s   18%   1340/sec
      --------- ----  --------- ----  --------- ----  ---------
      717MB/s   33%   373MB/s   22%   814MB/s   18%   1368/sec

    20 Disk RAID-0, ZFS
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      371MB/s   37%   188MB/s   22%   450MB/s   19%   775/sec
      370MB/s   37%   188MB/s   22%   447MB/s   19%   797/sec
      --------- ----  --------- ----  --------- ----  ---------
      741MB/s   37%   376MB/s   22%   898MB/s   19%   786/sec

    24 Disk RAID-1, ZFS
      WRITE     CPU   RE-WRITE  CPU   READ      CPU   RND-SEEKS
      347MB/s   34%   193MB/s   22%   447MB/s   19%   907/sec
      347MB/s   34%   192MB/s   23%   446MB/s   19%   933/sec
      --------- ----  --------- ----  --------- ----  ---------
      694MB/s   34%   386MB/s   22%   894MB/s   19%   920/sec

    28 Disk RAID-0, ZFS
    32 Disk RAID-0, ZFS
    36 Disk RAID-0, ZFS

    More details: here is the exact unit: http://www.supermicro.com/products/chassis/4U/847/SC847E1-R1400U.cfm
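
    To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in C (assuming the 12 Gbit figure means four 3 Gbit/s SAS lanes and roughly 20% 8b/10b encoding overhead; both are assumptions, not specs from the vendor page):

      #include <stdio.h>

      int main(void) {
          /* figures from the question: 24 disks at ~115 MB/s sequential each */
          const int    disks         = 24;
          const double per_disk_mbs  = 115.0;
          const double aggregate_mbs = disks * per_disk_mbs;    /* 2760 MB/s */

          /* one SFF-8087 cable carries 4 SAS lanes; 12 Gbit total
             implies 3 Gbit/s per lane (SAS-1) */
          const double link_mbs    = 4 * 3.0 * 1000.0 / 8.0;    /* 1500 MB/s raw */
          const double link_usable = link_mbs * 0.8;            /* ~1200 MB/s after 8b/10b */

          printf("disks can source : %.0f MB/s (%.2f Gbit/s)\n",
                 aggregate_mbs, aggregate_mbs * 8.0 / 1000.0);
          printf("link can carry   : %.0f MB/s raw, ~%.0f MB/s usable\n",
                 link_mbs, link_usable);
          return 0;
      }

    Under those assumptions the single SFF-8087 link, not the disks, is the ceiling (roughly 1.2-1.5 GB/s against 2.76 GB/s of raw disk bandwidth), which is consistent with the bonnie++ read numbers above flattening out in the 800-900 MB/s range past 12 disks.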


  • Search a string in a file and write the matched lines to another file in Java

    - by Geeta
    For searching a string in a file and writing the lines with the matched string to another file, it takes 15-20 minutes for a single zip file of 70 MB (compressed). Is there any way to minimise it? My source code:

    Getting the zip file entries:

      zipFile = new ZipFile(source_file_name);
      entries = zipFile.entries();
      while (entries.hasMoreElements()) {
          ZipEntry entry = (ZipEntry) entries.nextElement();
          if (entry.isDirectory()) {
              continue;
          }
          searchString(Thread.currentThread(), entry.getName(),
                       new BufferedInputStream(zipFile.getInputStream(entry)),
                       Out_File, search_string, stats);
      }
      zipFile.close();

    Searching the string:

      public void searchString(Thread CThread, String Source_File, BufferedInputStream in,
                               File outfile, String search, String stats) throws IOException {
          int count = 0;
          int countw = 0;
          int countl = 0;
          String s;
          String[] str;
          BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
          System.out.println(CThread.currentThread());
          while ((s = br2.readLine()) != null) {
              str = s.split(search);
              count = str.length - 1;
              countw += count;              // word count
              if (s.contains(search)) {
                  countl++;                 // line count
                  WriteFile(CThread, s, outfile.toString(), search);
              }
          }
          br2.close();
          in.close();
      }

    Writing the matched lines:

      // Note: this opens a fresh FileWriter on the output file for every
      // matched line (append mode) and never closes it, only flushes.
      public void WriteFile(Thread CThread, String line, String out, String search)
              throws IOException {
          BufferedWriter bufferedWriter = null;
          System.out.println("write thread" + CThread.currentThread());
          bufferedWriter = new BufferedWriter(new FileWriter(out, true));
          bufferedWriter.write(line);
          bufferedWriter.newLine();
          bufferedWriter.flush();
      }

    Please help me. It really takes 40 minutes for 10 files using threads, and 15-20 minutes for a single 70 MB compressed file. Any way to minimise the time?


  • best way to output a full precision double into a text file

    - by flevine100
    Hi, I need to use an existing text file to store some very precise values. When read back in, the numbers essentially need to be exactly equivalent to the ones that were originally written. Now, a normal person would use a binary file... but for a number of reasons, that's not possible in this case. So... do any of you have a good way of encoding a double as a string of characters (aside from just increasing the precision)? My first thought was to cast the double to a char[] and write out the chars. I don't think that's going to work, because some of the characters are not visible, produce sounds, or even terminate strings ('\0'... I'm talkin' to you!). Thoughts?
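
    One commonly suggested technique for this problem (a sketch, not necessarily the route the asker took): C99's "%a" conversion writes a double as an exact hexadecimal floating-point literal, and strtod reads it back bit-for-bit, so the text round trip is lossless without printing dozens of decimal digits:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(void) {
          double original = 0.1 + 0.2;   /* a value with no short exact decimal form */
          char buf[64];

          /* %a emits an exact hex float such as 0x1.3333333333334p-2 (C99) */
          snprintf(buf, sizeof buf, "%a", original);

          double restored = strtod(buf, NULL);
          printf("text form: %s, round-trip exact: %s\n", buf,
                 memcmp(&original, &restored, sizeof original) == 0 ? "yes" : "no");
          return 0;
      }

    If the file must stay human-readably decimal, printing with 17 significant digits ("%.17g") is also sufficient to round-trip any IEEE-754 double, at the cost of longer strings.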


  • File upload control - selected file is lost when a 2nd control is initiated

    - by Barry
    Our problem/question revolves around an upload control that loses the selected file (goes blank) when a postback control is used (in this case, the dropdown list posts). Any insight into what we are doing wrong or how we can fix this? Below is our code and a summary of the problem. Any help would be greatly appreciated. Thank you!

      <asp:updatepanel id="UpdatePanel1" runat="server">
        <ContentTemplate>
          <div class="row">
            <asp:DropDownList runat="server" AutoPostBack="true" ID="CategorySelection"
                OnSelectedIndexChanged="CategorySelection_IndexChanged" CssClass="drop-down-list" />
          </div>
          <div id="SubCategory" class="row" runat="server" visible="false">
            <asp:DropDownList runat="server" ID="SubCategorySelection" CssClass="drop-down-list" />
          </div>
          <div class="row">
            <asp:FileUpload runat="server" ID="FileUpload" CssClass="file-upload" />
          </div>
          <div class="row">
            <asp:Button ID="submit" runat="server" Text="Submit" CssClass="button" OnClick="submit_ButtonClick" />
          </div>
        </ContentTemplate>
        <Triggers>
          <asp:PostBackTrigger ControlID="submit" />
        </Triggers>
      </asp:updatepanel>

    In this form we have 2 DropDownLists, 1 FileUpload and 1 submit button. Every time the user selects a category, the subcategories are loaded (AutoPostBack="true"). The primary user flow works fine: the user selects a category and a subcategory, then selects a file to be uploaded (submitted). HOWEVER, if the user selects a file first and then selects a category, the screen does a partial refresh and the selected file disappears (the field goes blank). As a result, the user needs to select the file again. This causes an issue because the fact that the file is no longer selected can easily be overlooked. Seems straightforward --- but it's causing us a lot of grief. Any experts out there that can help? BIG thanks!


  • How to get proper alignment when printing to file

    - by user1067334
    I have this structure, the elements of which I need to write to a text file:

      struct Stage3ADisplay {
          int nSlot;
          char *Item;
          char *Type;
          int nIndex;
          unsigned char attributesMD[17];  /* the last character is \0 */
          unsigned char contentsMD[17];    /* only for regular files -
                                              the last character is \0 */
      };

      buffer = malloc(sizeof(Stage3ADisplayVar[nIterator]->nSlot)
                    + sizeof(Stage3ADisplayVar[nIterator]->Item)
                    + sizeof(Stage3ADisplayVar[nIterator]->Type)
                    + sizeof(Stage3ADisplayVar[nIterator]->nIndex)
                    + sizeof(Stage3ADisplayVar[nIterator]->attributesMD)
                    + sizeof(Stage3ADisplayVar[nIterator]->contentsMD) + 1);

      /* note: each %x below receives the array as a pointer, not the digest bytes */
      sprintf(buffer, "%d %s %s %d %x %x",
              Stage3ADisplayVar[nIterator]->nSlot,
              Stage3ADisplayVar[nIterator]->Item,
              Stage3ADisplayVar[nIterator]->Type,
              Stage3ADisplayVar[nIterator]->nIndex,
              Stage3ADisplayVar[nIterator]->attributesMD,
              Stage3ADisplayVar[nIterator]->contentsMD);

    How do I make sure the rows in the file are properly aligned? Thank you.
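
    One common fix (a sketch built on assumed field widths, not the asker's actual layout): give every conversion an explicit width so each column starts at the same offset, and print the digest arrays byte by byte, since handing an array to "%x" formats the pointer value rather than the bytes:

      #include <stdio.h>

      /* hypothetical row printer mirroring the struct fields above;
         assumes the 17-byte arrays hold a 16-byte digest plus '\0' */
      static void print_row(FILE *f, int slot, const char *item, const char *type,
                            int index, const unsigned char md[16]) {
          /* %-12s left-justifies in a 12-character column; %4d right-justifies */
          fprintf(f, "%4d %-12s %-8s %6d ", slot, item, type, index);
          for (int i = 0; i < 16; i++)
              fprintf(f, "%02x", md[i]);  /* two hex digits per digest byte */
          fputc('\n', f);
      }

      int main(void) {
          unsigned char md[16] = {0xde, 0xad, 0xbe, 0xef};
          print_row(stdout, 1, "report.txt", "regular", 42, md);
          print_row(stdout, 20, "a.out", "regular", 7, md);
          return 0;
      }

    Because every field has a fixed width, rows line up regardless of how long each item name is (up to the chosen width).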


  • C read part of file into cache

    - by Pete Jodo
    I have to write a program (for Linux) where there's an extremely large index file and I have to search and interpret the data from it. Now the catch is, I'm only allowed to have x bytes of the file cached at any time (determined by an argument), so I have to remove certain data from the cache if it's not what I'm looking for. If my understanding is correct, fopen (r) doesn't put anything in the cache; only when I call getc or fread (specifying a size) does it get cached. So my question is: let's say I use fread and read 100 bytes, but after checking, only 20 of the 100 bytes contain the data I need. How would I remove the useless 80 bytes from the cache (or overwrite them) in order to read more from the file?
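
    If "cache" here means the kernel page cache rather than stdio's internal buffer, POSIX provides a hint for discarding ranges you have finished with. A minimal sketch (assuming Linux, where POSIX_FADV_DONTNEED actually evicts clean cached pages; the file name is a stand-in):

      #define _POSIX_C_SOURCE 200112L
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          int fd = open("index.dat", O_RDONLY);
          if (fd < 0) { perror("open"); return 1; }

          char buf[100];
          ssize_t n = read(fd, buf, sizeof buf);  /* pages now sit in the page cache */
          if (n > 0) {
              /* after inspecting the data and keeping the 20 useful bytes,
                 ask the kernel to drop the cached range */
              posix_fadvise(fd, 0, n, POSIX_FADV_DONTNEED);
          }
          close(fd);
          return 0;
      }

    If the x-byte limit is meant to be enforced by the program itself, the simpler design is a fixed-size buffer of x bytes that the program refills and overwrites on its own; the kernel's cache sits outside the process's accounting either way.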


  • What kind of storage do people actually use for VMware ESX servers?

    - by Dirk Paessler
    VMware and many network evangelists try to tell you that sophisticated (= expensive) fiber SANs are the "only" storage option for VMware ESX and ESXi servers. Well, yes, of course. Using a SAN is fast, reliable and makes vMotion possible. Great. But: can all ESX/ESXi users really afford SANs? My theory is that less than 20% of all VMware ESX installations on this planet actually use fiber or iSCSI SANs. Most of these installations will be in larger companies who can afford them. I would predict that most VMware installations use "attached storage" (vmdks are stored on disks inside the server). Most of them run in SMEs, and there are so many of them! We run two ESX 3.5 servers with attached storage and two ESX 4 servers with an iSCSI SAN. And the "real life difference" between the two is barely noticeable :-) Do you know of any official statistics for this question? What do you use as your storage medium?


  • Does cloud storage replicate the data over many datacenters, and if so, do I benefit from content delivery?

    - by Berkay
    Let's assume that I want to use a cloud storage service from one of the cloud storage providers. I have X GB of structured and unstructured data, and I will use this data as the content of my interactive web page. I have many users, and they visit my web page from various countries. Now I have some doubts about this point. To be more specific, first: is my data stored in only one of the cloud storage provider's data centers, or is it replicated over many of them? Second, if so, how can I benefit from a content delivery network (matching and placing users' content in the nearest storage data centers)?


  • How do I replace a harddrive that is in a two-way mirror storage space on Windows 8?

    - by Jon
    I have a storage space in Windows 8 doing a two-way mirror across three hard drives. The sizes are 297 GB, 189 GB, and 70 GB. I would like to replace the 70 GB drive with a larger one. My thought was to remove that drive from the space via the Storage Spaces control panel, shut down, replace the drive with the bigger one, reboot, and add the new drive to the storage space. However, I can't find any option to remove a drive from a storage space in the control panel. Should I just shut down and swap out the small drive, or is there another process for safely replacing the old drive? (By the way, the old drive is still operational.)


  • split a large text file without losing data records

    - by Santosh
    I have a 140 MB text file which contains detailed information about books in a library. Each book's details appear in a standard format in the text file. I need to parse it and insert the data into a database. Parsing the text itself is not an issue; the problem is handling such a large file. So I decided to split the file into small files of around 2 MB each, but I can't manually split this large file into so many pieces. I tried the HJSplit tool, which does split the file, but that doesn't help either: it splits at arbitrary byte offsets, so half of one book's details ends up in one file and the rest in the next, and information is lost. How do I split the large file so that no record is broken? Is there any tool that can help in this situation?
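
    A record-aware splitter is straightforward to sketch in C, assuming each book record ends with a known delimiter line (the "==END==" marker below is hypothetical; substitute whatever the real format uses). The trick is to rotate output files only when the size limit has been reached and a record has just ended:

      #include <stdio.h>
      #include <string.h>

      int main(void) {
          FILE *in = fopen("library.txt", "r");       /* stand-in input name */
          if (!in) { perror("fopen"); return 1; }

          const long limit = 2L * 1024 * 1024;        /* ~2 MB per piece */
          char line[4096], name[64];
          long written = 0, part = 0;
          FILE *out = NULL;

          while (fgets(line, sizeof line, in)) {
              if (!out) {                             /* start the next piece */
                  snprintf(name, sizeof name, "part_%03ld.txt", ++part);
                  out = fopen(name, "w");
                  if (!out) { perror("fopen"); return 1; }
                  written = 0;
              }
              fputs(line, out);
              written += (long)strlen(line);
              /* close the piece only at a record boundary, never mid-record */
              if (written >= limit && strncmp(line, "==END==", 7) == 0) {
                  fclose(out);
                  out = NULL;
              }
          }
          if (out) fclose(out);
          fclose(in);
          return 0;
      }

    Each piece may run slightly over 2 MB (by at most one record), but no record is ever cut in half, which is exactly the property byte-oriented tools like HJSplit cannot guarantee.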


  • Any cloud storage service that lets us authenticate the file when we serve it to our visitors?

    - by TORr0t
    Let's say I want to restrict a file to my visitors. I mean, I have an xx.avi file to be streamed/downloaded, and the visitor paid me for the bandwidth and the size of the file. In Amazon S3, I can't control the file at all (there is a very basic access-control mechanism, which is not OK for me). The only way is for my server to proxy the file: it fetches the file from the Amazon S3 storage node and sends it to the requester after a PHP script approves the authentication. But this way I would double my bandwidth usage, and again there would be a latency problem, since my server needs to get the file from Amazon S3 first. So I was wondering if there is a better solution, or any cloud storage service that lets us control file access for our visitors. Thanks


  • What's a good structure to save and retrieve locations of images?

    - by Goot
    I have a Java EE application where I collect information about movies. In my backend I provide data like the name, description, genre and a random UUID. I also have lots of related files, which are stored on a file server, including screenshots, the DVD or Blu-ray cover, and video trailers. My current approach: when saving the files to the file server, I retrieve the movie's random UUID (which is the primary key, btw) and rename the files screenshot_[UUID]_1, screenshot_[UUID]_2, etc. Now, there are lots of other ways to handle this, like saving all filenames in a database, or creating a directory structure on the file server for every UUID and, e.g., returning all images in the "[uuid]/screenshots" folder via REST. I expect about 30k requests a day, so the service has to be pretty performant. What's the best way to solve this?


  • Azure storage - decimal point of a double ignored on save

    - by Fabio Milheiro
    I have a value that is correctly stored in a property of an object, but when I save the changes to the Azure storage database, the double value is stored with the decimal point ignored (7.1000000003 is saved as 711). Also, the property itself is changed to 711.0. How do I solve this problem? The field is already typed as double in both the class and the database table.


  • How does a portable thread-specific storage mechanism's naming scheme generate thread-relative unique keys?

    - by Hassan Syed
    A portable thread-specific storage reference/identity mechanism, of which boost/thread/tss.hpp is an instance, needs a way to generate unique keys for itself. Such a key is unique in the scope of a thread and is subsequently used to retrieve the object it references. This mechanism is used in code written in a thread-neutral manner. Since Boost is a portable example of this concept, how specifically does such a mechanism work?
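
    For reference, a minimal sketch of the POSIX primitive such mechanisms typically wrap on pthreads platforms (this illustrates the general mechanism, not Boost's exact internals): pthread_key_create hands out a key that is unique process-wide, and the value stored under it via pthread_setspecific is private to each thread:

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>

      static pthread_key_t key;   /* one process-wide key, unique per create */

      static void *worker(void *arg) {
          int *slot = malloc(sizeof *slot);   /* this thread's private object */
          *slot = (int)(long)arg;
          pthread_setspecific(key, slot);

          /* lookup through the same key sees only this thread's value */
          printf("thread %d sees %d\n",
                 (int)(long)arg, *(int *)pthread_getspecific(key));
          return NULL;
      }

      int main(void) {
          pthread_key_create(&key, free);  /* destructor runs once per thread */
          pthread_t a, b;
          pthread_create(&a, NULL, worker, (void *)1L);
          pthread_create(&b, NULL, worker, (void *)2L);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return 0;
      }

    Where no native key facility exists, a portable implementation can emulate one with a global table indexed by (thread id, key number), handing out key numbers from a monotonically increasing counter; that is one way such a naming scheme can generate thread-relative unique identities.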


  • Optimal xml storage engine

    - by nixau
    I'm considering an optimal open-source solution for storing XML documents and then querying them effectively. The amount of data will be small. As far as I understand, native XML databases might be a proper solution for my case; they obviously store XML documents in a highly efficient way. It would be great to learn from your experience. Any suggestions on a proper solution? Do you have any experience employing XML storage engines in your apps?


  • Image upload storage strategies

    - by MatW
    When a user uploads an image to my site, the image goes through this process:

      1. user uploads pic
      2. store pic metadata in db, giving the image a unique id
      3. async image processing (thumbnail creation, cropping, etc)
      4. all images are stored in the same uploads folder

    So far the site is pretty small, and there are only ~200,000 images in the uploads directory. I realise I'm nowhere near the physical limit of files within a directory, but this approach clearly won't scale, so I was wondering if anyone had any advice on upload/storage strategies for handling large volumes of image uploads.
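
    One widely used strategy, sketched here with illustrative assumptions (the hash and the two-level layout are not from the original setup): shard the flat uploads folder into a shallow directory tree derived from each image's unique id, so no single directory grows without bound:

      #include <stdio.h>

      /* djb2 string hash: stable, cheap, good enough for bucketing */
      static unsigned long hash_id(const char *s) {
          unsigned long h = 5381;
          while (*s) h = h * 33 + (unsigned char)*s++;
          return h;
      }

      int main(void) {
          const char *image_id = "8c1f2a9b";   /* hypothetical id from the db */
          unsigned long h = hash_id(image_id);

          /* two levels of 256 buckets => at most ~N/65536 files per directory */
          char path[256];
          snprintf(path, sizeof path, "uploads/%02lx/%02lx/%s.jpg",
                   (h >> 8) & 0xff, h & 0xff, image_id);
          printf("%s\n", path);   /* e.g. uploads/xx/yy/8c1f2a9b.jpg */
          return 0;
      }

    Because the path is a pure function of the id, no extra lookup table is needed: the metadata row already stored in the db is enough to reconstruct where the file lives.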


  • Simplest Azure Storage Manipulation possible

    - by Hurricanepkt
    I need to integrate some blob storage into an existing ASP.NET MVC site. My hope is to be able to just add some references and then do puts and gets, but I cannot find any simple example of how to do this (that hasn't been deprecated to the point where it no longer works). I have tried using StorageClient, but CreateCloudBlobClient() doesn't seem to work.

