Search Results

Search found 10417 results on 417 pages for 'large'.

Page 2 of 417

  • dictionary interface for large data sets

    - by Richard
    I have a set of key/value pairs (all text) that is too large to load into memory at once. I would like to interact with this data through a Python dictionary-like interface. Does such a module already exist? Reading keys should be efficient, and values should be compressed on disk to save space.

    Read the article

  • Fast Ruby HTTP library for large XML downloads

    - by Vlad Zloteanu
    I am consuming various XML-over-HTTP web services that return large XML files (> 2 MB). What would be the fastest Ruby HTTP library for reducing the download time? Required features: both GET and POST requests, and gzip/deflate downloads (Accept-Encoding: deflate, gzip) - very important. I am deciding between open-uri, Net::HTTP and curb, but other suggestions are welcome. P.S. To parse the response I am using a pull parser from Nokogiri, so I don't need an integrated solution like rest-client or hpricot.

    Read the article

  • Viewing a large PDF file in the iPhone SDK

    - by aRrAY
    I have followed a tutorial that teaches how to view a PDF file in a UIWebView. However, I found that if the file size is large, zooming the PDF lags, which doesn't happen in Mobile Safari. Does anyone know how to solve this?

    Read the article

  • How to export large data to Excel

    - by mavera
    I have a criteria page in my ASP.NET application. When the user clicks the report button, the results are first bound to a DataGrid on a new page, and that page is then exported to an Excel file by changing the response content type. This normally works, but when a large amount of data comes back, a System.OutOfMemoryException is thrown. Does anyone know a way to fix this problem, or another useful technique for the export?
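
    One way to sidestep the memory spike, sketched below, is to skip the DataGrid entirely and stream rows straight to the response as tab-separated text that Excel will open. This is only an illustration and not the asker's code: the method, the IDataReader parameter and the file name are assumptions.

      // Sketch: stream query results to the client as tab-separated text that
      // Excel opens, instead of binding everything to a DataGrid first.
      // Method and parameter names are illustrative placeholders.
      using System;
      using System.Data;
      using System.Web;

      static void StreamReportToExcel(HttpResponse response, IDataReader reader)
      {
          response.Clear();
          response.BufferOutput = false;   // don't hold the whole report in server memory
          response.ContentType = "application/vnd.ms-excel";
          response.AddHeader("Content-Disposition", "attachment; filename=report.xls");

          int rowCount = 0;
          while (reader.Read())
          {
              for (int i = 0; i < reader.FieldCount; i++)
              {
                  if (i > 0) response.Write('\t');
                  response.Write(Convert.ToString(reader[i]));
              }
              response.Write("\r\n");
              if (++rowCount % 1000 == 0)
                  response.Flush();        // push finished rows to the client
          }
          response.Flush();
      }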

    Read the article

  • Shortening large CSV on debian

    - by Unkwntech
    I have a very large CSV file and I need to write an app that will parse it, but testing against the full 6 GB file is painful. Is there a simple way to extract the first hundred or two hundred lines without loading the entire file into memory? The file resides on a Debian server.

    Read the article

  • Reading a large text file to memory in C++

    - by NoneType
    Is there a way to read a large text file (~60 MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently the open call appears to segfault while trying to read this file:

      #include <fstream>

      std::ifstream fis;
      fis.open("my_large_file.txt");   // segfaults here

    The file just consists of rows of the form number_1<tabspace>number_2, i.e. two numbers separated by a tab.

    Read the article

  • Viewing large XML files in Eclipse?

    - by Paul Wicks
    I'm working on a project involving some large XML files (from 50 MB to over 1 GB) and it would be nice if I could view them in Eclipse (a simple text view is fine) without Java running out of heap space. I've tried tweaking the amount of memory available to the JVM in eclipse.ini but haven't had much success. Any ideas?

    Read the article

  • How/where to run an algorithm on a large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm so it can run on a home computer or a single server (48 GB RAM), or is there a useful cloud service to push the data to, run the algorithm, and retrieve the results? The project is still at the research stage, so if a cloud service is the way to go I would prefer a provider that doesn't require much administration and setup; I just want to run the algorithm once and get the results without much administrative overhead.
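
    For scale, a graph of that size fits comfortably in 48 GB if it is held as plain arrays, so one option is an in-memory power iteration rather than a cloud service. A minimal sketch in C#, assuming the edge list has already been loaded into parallel arrays src and dst of zero-based node ids (those names are illustrative); dangling-node mass is ignored for brevity:

      // Minimal PageRank power iteration over an in-memory edge list.
      // src[e] -> dst[e] is one directed edge; node ids run from 0 to n-1.
      static double[] PageRank(int n, int[] src, int[] dst,
                               double damping = 0.85, int iterations = 50)
      {
          var outDegree = new int[n];
          foreach (int s in src) outDegree[s]++;

          var rank = new double[n];
          var next = new double[n];
          for (int i = 0; i < n; i++) rank[i] = 1.0 / n;

          for (int iter = 0; iter < iterations; iter++)
          {
              double baseRank = (1.0 - damping) / n;
              for (int i = 0; i < n; i++) next[i] = baseRank;

              // Distribute each node's rank along its outgoing edges.
              for (int e = 0; e < src.Length; e++)
                  next[dst[e]] += damping * rank[src[e]] / outDegree[src[e]];

              var tmp = rank; rank = next; next = tmp;   // swap buffers
          }
          return rank;
      }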

    Read the article

  • Managing large downloadable content on mobile devices

    - by larromba
    This is a general question about how best to manage large downloadable content on mobile devices. Let's consider a situation where a mobile app needs to download a number of very large content items, like HD videos, that are over 500 MB but under 2 GB, and assume the content delivery system should be scalable. Would it be fair to assume that: (1) a reputable cloud service would be needed - if so, what is a reliable and cost-effective cloud service for mobile devices, based on anyone's experience? (2) large content downloads should only be attempted over a Wi-Fi connection, so the end user doesn't incur large costs, e.g. when travelling; (3) downloads should carry on in the background if possible, as the user won't want to wait in the app for long periods; and (4) if the downloads don't finish, or the OS quits the app, all downloads should resume when the app is next activated? Are there any other pitfalls anyone has experienced when managing large content on mobile devices? Thanks.

    Read the article

  • How to create a large resumable download from a secured location in .NET

    - by Kelvin H
    I need to preface this by saying I'm not a .NET coder at all, but to get partial functionality I modified a TechNet chunkedfilefetch.aspx script that uses a chunked read/write streaming method of file transfer, which got me half-way there.

      iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _
          IO.FileAccess.Read, IO.FileShare.Read)
      dataToRead = iStream.Length

      Response.ContentType = "application/octet-stream"
      Response.AddHeader("Content-Length", file.Length.ToString())
      Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload)

      ' Read and send the file 16,000 bytes at a time. '
      While dataToRead > 0
          If Response.IsClientConnected Then
              length = iStream.Read(buffer, 0, 16000)
              Response.OutputStream.Write(buffer, 0, length)
              Response.Flush()
              ReDim buffer(16000)               ' Clear the buffer '
              dataToRead = dataToRead - length
          Else
              dataToRead = -1                   ' Prevent infinite loop if user disconnects '
          End If
      End While

    This works great on files up to 2 GB and is fully functioning now, with one problem: it doesn't allow resume. I took the original code, called it fetch.aspx, and pass an order number through the URL: fetch.aspx&ordernum=xxxxxxx. It then reads the filename/location from the database according to the order number and chunks the file out from a secured location NOT under the webroot. I need a way to make this resumable; by the nature of the internet and large files, people always get disconnected and would like to resume where they left off. But every article on resumable downloads I've read (e.g. http://www.devx.com/dotnet/Article/22533/1954 - a great article that works well) assumes the file is within the webroot, and I need to stream from a secured location. I'm not a .NET coder; at best I can do a bit of ColdFusion, so if anyone could help me modify a handler to do this I would really appreciate it. Requirements:
    - I have a working fetch.aspx script that functions well and uses the above code snippet as the base for the streamed download.
    - Download files are large (600 MB) and are stored in a secured location outside of the webroot.
    - Users click fetch.aspx to start the download, and would therefore click it again if it failed.
    - If the extension is .aspx and the file being sent is an AVI, clicking the link completely bypasses an IHttpHandler mapped to the .avi extension, so this confuses me.
    - From what I understand, the browser matches the ETag value and file modified date to determine that it is talking about the same file, and then a Range/Accept-Ranges exchange happens between the browser and IIS. Since this dialog happens with IIS, we need a handler to intercept and respond accordingly, but clicking the link sends the request to an ASPX file while the handler would need to be registered for AVI files - which also confuses me.
    - If there were a way to see the initial HTTP request headers (ETag, Range) inside the normal .aspx page, I could read those values and, if they exist, start chunking at that byte offset somehow - but I couldn't find a way to get at those request headers, since they seem to get lost at the IIS level.
    - The OrderNum passed in the URL is unique and could be used as the ETag: Response.AddHeader("ETag", request("ordernum"))
    - Files need to be resumable and chunked out due to their size.
    - File extensions are .avi, so a handler could be written around that.
    - IIS 6.0 web server.
    Any help would really be appreciated. I've been reading and reading and downloading code, but none of the examples I've found cover streaming the original file from outside of the webroot. Please help me get a handle on these HTTP handlers :)
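
    For what it's worth, the usual shape of a solution is to have whatever endpoint serves the bytes (the .aspx page itself or an IHttpHandler) parse the incoming Range header and reply with 206 Partial Content; the file can live anywhere the worker process can read, webroot or not. A rough C# sketch of that idea follows - it is an illustration, not a drop-in handler, and MapOrderToPath stands in for the existing database lookup by order number:

      // Rough sketch: serve a resumable download from outside the webroot by
      // honouring the Range request header. MapOrderToPath is a placeholder
      // for the database lookup keyed on the order number.
      using System;
      using System.IO;
      using System.Web;

      public void ProcessRequest(HttpContext context)
      {
          string path = MapOrderToPath(context.Request.QueryString["ordernum"]);  // hypothetical helper
          var info = new FileInfo(path);
          long start = 0, end = info.Length - 1;

          string range = context.Request.Headers["Range"];                // e.g. "bytes=1000-"
          if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
          {
              string[] parts = range.Substring(6).Split('-');
              start = long.Parse(parts[0]);
              if (parts.Length > 1 && parts[1].Length > 0) end = long.Parse(parts[1]);
              context.Response.StatusCode = 206;                          // Partial Content
              context.Response.AddHeader("Content-Range",
                  string.Format("bytes {0}-{1}/{2}", start, end, info.Length));
          }

          context.Response.ContentType = "application/octet-stream";
          context.Response.AddHeader("Accept-Ranges", "bytes");
          context.Response.AddHeader("ETag", context.Request.QueryString["ordernum"]);
          context.Response.AddHeader("Content-Length", (end - start + 1).ToString());

          using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
          {
              fs.Seek(start, SeekOrigin.Begin);
              var buffer = new byte[16000];
              long remaining = end - start + 1;
              while (remaining > 0 && context.Response.IsClientConnected)
              {
                  int read = fs.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                  if (read <= 0) break;
                  context.Response.OutputStream.Write(buffer, 0, read);
                  context.Response.Flush();
                  remaining -= read;
              }
          }
      }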

    Read the article

  • IOException reading a large file from a UNC path into a byte array using .NET

    - by Matt
    I am using the following code to attempt to read a large file (280 MB) into a byte array from a UNC path:

      public void ReadWholeArray(string fileName, byte[] data)
      {
          int offset = 0;
          int remaining = data.Length;
          log.Debug("ReadWholeArray");
          FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
          while (remaining > 0)
          {
              int read = stream.Read(data, offset, remaining);
              if (read <= 0)
                  throw new EndOfStreamException(
                      String.Format("End of stream reached with {0} bytes left to read", remaining));
              remaining -= read;
              offset += read;
          }
      }

    This is blowing up with the following error: System.IO.IOException: Insufficient system resources exist to complete the requested ... If I run this using a local path it works fine; in my test case the UNC path actually points to the local box. Any thoughts on what is going on here?
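
    One workaround worth trying - offered as a guess based on the symptom, not a confirmed fix - is to cap how much each individual Read call requests, so no single read asks the network redirector for hundreds of megabytes at once. A sketch:

      // Variation of ReadWholeArray that never requests more than 1 MB per Read call.
      // Capping the per-call size is a common workaround when very large single
      // reads over a UNC/SMB path fail with "insufficient system resources".
      public void ReadWholeArrayChunked(string fileName, byte[] data)
      {
          const int maxChunk = 1024 * 1024;   // 1 MB per read; tune as needed
          int offset = 0;
          int remaining = data.Length;

          using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
          {
              while (remaining > 0)
              {
                  int toRead = Math.Min(maxChunk, remaining);
                  int read = stream.Read(data, offset, toRead);
                  if (read <= 0)
                      throw new EndOfStreamException(
                          String.Format("End of stream reached with {0} bytes left to read", remaining));
                  remaining -= read;
                  offset += read;
              }
          }
      }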

    Read the article

  • Searching a large list of words in another large list

    - by Christian
    I have a list of 1,000,000 strings (protein names) with a maximum length of 256 characters; every string has an associated ID. I have another list of 4,000,000,000 strings (words taken from articles), also with a maximum length of 256, and every word likewise has an ID. I want to find all matches between the list of protein names and the list of article words. Which algorithm should I use? Should I use some prebuilt library? It would be good if the algorithm could run on a normal PC without special hardware.
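
    If only exact, whole-string matches are needed, one approach that fits on a normal PC is to hold the smaller list in a hash table and stream the larger one past it. A C# sketch, assuming purely for illustration that each file holds one id<TAB>text record per line; if protein names can span several article words, an Aho-Corasick automaton over the article text is the usual upgrade.

      // Exact-match join: keep the 1,000,000 protein names in memory and stream
      // the 4,000,000,000 words past them one line at a time.
      // Assumed (illustrative) input format: one "id<TAB>text" record per line.
      using System.Collections.Generic;
      using System.IO;

      static void FindMatches(string proteinFile, string wordFile, string outputFile)
      {
          var proteinIds = new Dictionary<string, string>(StringComparer.Ordinal);
          foreach (var line in File.ReadLines(proteinFile))
          {
              var parts = line.Split('\t');
              proteinIds[parts[1]] = parts[0];                    // text -> protein id
          }

          using (var writer = new StreamWriter(outputFile))
          {
              foreach (var line in File.ReadLines(wordFile))      // streamed, never fully in memory
              {
                  var parts = line.Split('\t');
                  string proteinId;
                  if (proteinIds.TryGetValue(parts[1], out proteinId))
                      writer.WriteLine(parts[0] + "\t" + proteinId);   // word id, matching protein id
              }
          }
      }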

    Read the article

  • Computing MD5SUM of large files in C#

    - by spkhaira
    I am using the following code to compute the MD5 sum of a file:

      byte[] b = System.IO.File.ReadAllBytes(file);
      string sum = BitConverter.ToString(new MD5CryptoServiceProvider().ComputeHash(b));

    This works fine normally, but if I encounter a large file (~1 GB) - e.g. an ISO image or a DVD VOB file - I get an out-of-memory exception. However, md5sum in Cygwin computes the hash for the same file in about 10 seconds. How can I get this to work for big files in my program? Thanks
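
    The usual fix, sketched below, is to hand ComputeHash the open stream instead of a byte array, so the hash is computed in small blocks and the whole file never has to sit in memory; this is a minimal illustration rather than drop-in code.

      // Hash the file as a stream: ComputeHash(Stream) reads it in small blocks,
      // so a 1 GB ISO never has to be loaded into memory in one piece.
      using System;
      using System.IO;
      using System.Security.Cryptography;

      static string Md5Sum(string file)
      {
          using (var md5 = MD5.Create())
          using (var stream = File.OpenRead(file))
          {
              byte[] hash = md5.ComputeHash(stream);
              return BitConverter.ToString(hash).Replace("-", string.Empty);
          }
      }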

    Read the article

  • What is the fastest way to create a checksum for large files in C#

    - by crono
    Hi, I have to sync large files across some machines. The files can be up to 6 GB in size. The sync will be done manually every few weeks. I can't take the filename into consideration because it can change at any time. My plan is to create checksums on the destination PC and on the source PC, and then copy every file whose checksum is not already present on the destination. My first attempt was something like this:

      using System.IO;
      using System.Security.Cryptography;

      private static string GetChecksum(string file)
      {
          using (FileStream stream = File.OpenRead(file))
          {
              SHA256Managed sha = new SHA256Managed();
              byte[] checksum = sha.ComputeHash(stream);
              return BitConverter.ToString(checksum).Replace("-", String.Empty);
          }
      }

    The problem was the runtime: with SHA-256 a 1.6 GB file took 20 minutes, with MD5 it took 6.15 minutes. Is there a better - faster - way to get the checksum (maybe with a better hash function)?
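
    Those timings suggest the bottleneck is how the file is read rather than the hash itself. A commonly suggested tweak - a sketch, not a guaranteed fix - is to read through a BufferedStream with a large buffer so the hash sees big sequential reads, and to prefer MD5 when cryptographic strength isn't required:

      // Same structure as GetChecksum, but read through a large BufferedStream so
      // the hash implementation gets big sequential reads instead of many small ones.
      private static string GetChecksumBuffered(string file)
      {
          using (var stream = new BufferedStream(File.OpenRead(file), 1024 * 1024))
          using (var md5 = MD5.Create())   // collision resistance is irrelevant for sync, so MD5 is fine here
          {
              byte[] checksum = md5.ComputeHash(stream);
              return BitConverter.ToString(checksum).Replace("-", string.Empty);
          }
      }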

    Read the article

  • Viewing large text file in a browser

    - by MeLight
    Hi, I need to write a text file viewer (not the directory tree, but the actual file contents) for use in a browser. It will be used to view large files. I want to give the user the ability to actually browse the file, i.e. previous-page and next-page buttons, where each page shows only a portion of the file. Two questions: (1) Is there any way to pass the file descriptor through POST (or something) so that on each page I can keep reading from an already open file instead of starting all over again (again, huge files)? (2) Is there a way to read the file backwards? That would be very useful for browsing back in a file. Any other implementation ideas are very welcome. Thanks
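
    Whatever the server language, the usual trick is to keep no open handle between requests at all: send the byte offset of the current page to the client and have it come back with the next request, then seek straight to it. A sketch of the idea in C# (only an illustration; the same pattern works with fseek in PHP or similar):

      // Return a fixed-size chunk of the file starting at a byte offset.
      // "Next page" is offset + chunkSize, "previous page" is offset - chunkSize,
      // so reading backwards is just seeking to a smaller offset; no file handle
      // ever has to survive between requests.
      static string ReadChunk(string path, long offset, int chunkSize)
      {
          using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
          {
              if (offset < 0) offset = 0;
              if (offset >= fs.Length) return string.Empty;

              fs.Seek(offset, SeekOrigin.Begin);
              var buffer = new byte[chunkSize];
              int read = fs.Read(buffer, 0, chunkSize);
              // Note: a chunk boundary may split a line (or a multi-byte character).
              return System.Text.Encoding.UTF8.GetString(buffer, 0, read);
          }
      }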

    Read the article

  • How to send large objects using boost::asio

    - by Max
    Good day. I'm receiving large objects over the network using boost::asio, and I have this code:

      for (int i = 1; i <= num_packets; i++)
          boost::asio::async_read(socket_,
              boost::asio::buffer(Obj + packet_size * (i - 1), packet_size),
              boost::bind(...));

    where Obj is a My_Class*. I'm in doubt whether that approach is even possible (because I have a pointer to an object here). Or would it be better to receive the object as packets of a fixed size in bytes? Thanks in advance.

    Read the article

  • ASP.NET MVC - separating large app

    - by marc_s
    I've been puzzled by what I consider a contradiction in terms: ASP.NET MVC claims to further and support the "separation of concerns" motto, which I find a great idea. However, it seems there's no way of separating controllers, models or views into their own assemblies, or separating areas into assemblies. With the fixed Controllers, Models and Views folders in your ASP.NET MVC project, you're actually creating a huge hodgepodge of things. Is that really separation of concerns? It seems like quite the contrary to me. So what I'm wondering is: how can I create an ASP.NET MVC solution that separates the controllers, the model, and the folders full of views into separate assemblies? How can I put areas of ASP.NET MVC 2 into separate assemblies? Or how else do you manage a large ASP.NET MVC app - one with several dozen or even over a hundred controllers, lots of model and viewmodel classes, and several hundred views?

    Read the article

  • What libraries are available for manipulating super large images in .Net

    - by tpower
    I have some really large files, for example a 320 MB TIFF at 14000 x 9000 pixels. The operations I need to perform are basically scaling the image to get smaller versions of it and breaking it into tiles. My code works fine with small files, using the .NET Bitmap objects, but I occasionally get Out of Memory exceptions for larger files. I've tried the FreeImage library's FreeImageBitmap but have the same problems. I'm using something like the following to scale the image:

      using (Graphics g = Graphics.FromImage((Image)result))
      {
          g.DrawImage(
              source,
              xOffset,
              yOffset,
              source.Width * scale,
              source.Height * scale
          );
      }

    Ideally I'd like a third-party library to do all the hard work, but if you have any tips or resources with more information I would appreciate it.
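
    For the tiling half of the problem, one way to keep allocations small once the source image has been decoded is to render each tile's source rectangle into its own small bitmap rather than building another full-size image; a sketch follows (the method name is illustrative). It doesn't help if the source won't decode at all, in which case a decoder that can downscale or read regions on the fly (for example WPF's BitmapImage with DecodePixelWidth, or a WIC/libtiff-based route) is the direction to look.

      // Cut one tile out of an already decoded large image without allocating a
      // second full-size bitmap: draw only that tile's source rectangle into a
      // small per-tile bitmap.
      using System.Drawing;

      static Bitmap ExtractTile(Image source, int tileX, int tileY, int tileSize)
      {
          var tile = new Bitmap(tileSize, tileSize);
          using (Graphics g = Graphics.FromImage(tile))
          {
              var destRect = new Rectangle(0, 0, tileSize, tileSize);
              var srcRect = new Rectangle(tileX * tileSize, tileY * tileSize, tileSize, tileSize);
              g.DrawImage(source, destRect, srcRect, GraphicsUnit.Pixel);
          }
          return tile;
      }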

    Read the article

  • Overcoming C limitations for large projects

    - by Francisco Garcia
    One aspect where C shows its age is the encapsulation of code. Many modern languages have classes, namespaces, packages... much more convenient ways to organize code than a simple #include. Since C is still the main language for many huge projects, how do you overcome its limitations? I suppose one main factor must be a lot of discipline. I would like to know what you do to handle large quantities of C code, and which authors or books you can recommend.

    Read the article

  • Creating an Extremely Large Index in Oracle

    - by Rudiger
    Can someone look at the linked reference and explain to me the precise statements to run? Oracle DBA's Guide: Creating a Large Index. Here's what I came up with:

      CREATE TEMPORARY TABLESPACE ts_tmp
        TEMPFILE 'E:\temp01.dbf' SIZE 10000M REUSE
        AUTOEXTEND ON
        EXTENT MANAGEMENT LOCAL;

      ALTER USER me TEMPORARY TABLESPACE ts_tmp;

      CREATE UNIQUE INDEX big_table_idx ON big_table (record_id);

      DROP TABLESPACE ts_tmp;

    Edit 1: After the index was created, I ran an explain plan for a simple query and got this error: ORA-00959: tablespace 'TS_TMP' does not exist. It seems it wasn't so temporary after all... :(

    Read the article

  • hosting accounts for large uploads

    - by Phil Jackson
    Hi, I'm wondering if anyone knows of any hosting providers (UK preferably) that deal mostly with accepting large file uploads. Most hosts only let you push something like 1.5 MB (taking into account the connection and the max execution time). What I am looking for is a host specifically for storing files on. I was going to add an upload script to my application which posts the file to the external host and then returns back (using headers, so the user doesn't even know they have left). Does anyone know of a host for this?

    Read the article

  • [PHP] Using cURL to download large XML files

    - by ndg
    I'm working with PHP and need to parse a number of fairly large XML files (50-75MB uncompressed). The issue, however, is that these XML files are stored remotely and will need to be downloaded before I can parse them. Having thought about the issue, I think using a system() call in PHP in order to initiate a cURL transfer is probably the best way to avoid timeouts and PHP memory limits. Has anyone done anything like this before? Specifically, what should I pass to cURL to download the remote file and ensure it's saved to a local folder of my choice?

    Read the article

  • Higher speed options for executing very large (20 GB) .sql file in MySQL

    - by Jonogan
    My firm was delivered a 20+ GB .sql file in response to a request for data from the gov't. I don't have many options for getting the data in a different format, so I need options for how to import it in a reasonable amount of time. I'm running it on a high-end server (Win 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It's been running for 14 hours and shows no signs of being near completion. Does anyone know of any higher-speed options for such a transaction? Or is this what I should expect given the large file size? Thanks

    Read the article
