Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.


  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and appends the compressed data to a file. Once all frames have been written, a journal which includes metadata for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!
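
    As a rough illustration of the usual pattern for this kind of file (map each frame's extent on demand using the offsets from the journal, rather than mapping the whole multi-gigabyte file), here is a minimal sketch. It uses Java NIO's memory-mapped files purely for illustration, since the question is about C++; the class and method names are invented, and in Java a single map() call is limited to 2 GB, which is another reason to map per frame.

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class FrameReader {
            private final FileChannel channel;

            public FrameReader(String path) throws Exception {
                channel = new RandomAccessFile(path, "r").getChannel();
            }

            // Map only the requested frame's extent; offset and length come from
            // the journal at the end of the file. The returned bytes are still
            // zlib-compressed and would be inflated by the caller.
            public byte[] readCompressedFrame(long offset, int length) throws Exception {
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
                byte[] out = new byte[length];
                buf.get(out);
                return out;
            }
        }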

  • How is 'processing credit card data' defined (PCI)?

    - by Chris
    If I have a web application and I receive credit card data transmitted via a POST request by a web browser over HTTPS, and I instantly open a socket (SSL) to a remote PCI-compliant card processor to forward the data and wait for a response, am I allowed to do that? Or is receiving the data with my application and forwarding it already a case of "processing credit card data"? If I create an iframe that is displayed in a client browser to enter CC data, and this iframe posts the data via HTTPS to the remote card processor (directly!), is this already a case of processing credit card data, even if my application code doesn't touch the entered data with any event handlers? I'm interested in the definition of "credit card data processing": when does an application start to be a CC data processing application? Can somebody maybe point me to the section in the PCI-DSS standard that clearly defines when you start to 'be a processing application'? Thanks,

  • Remove duplicates from object array data in Java

    - by zahir hussain
    Hi, I want to know how to remove duplicates from an object array. For example:

        Cat[] c = new Cat[10];
        c[0].data = "ji";  c[1].data = "pi";  c[2].data = "ji";  c[3].data = "lp";
        c[4].data = "ji";  c[5].data = "pi";  c[6].data = "jis"; c[7].data = "lp";
        c[8].data = "js";  c[9].data = "psi";

    I would like to remove the duplicate values from the object array. Thanks in advance.
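
    A common approach, sketched below on the assumption that each element is keyed by its data string (the names here are illustrative), is to keep the first occurrence of each distinct value in an order-preserving map; alternatively, override equals()/hashCode() on the class and use a LinkedHashSet.

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class Dedup {
            static class Cat {
                String data;
                Cat(String data) { this.data = data; }
            }

            // Keeps the first Cat seen for each distinct data value, preserving order.
            static List<Cat> removeDuplicates(Cat[] cats) {
                Map<String, Cat> seen = new LinkedHashMap<>();
                for (Cat c : cats) {
                    seen.putIfAbsent(c.data, c);
                }
                return new ArrayList<>(seen.values());
            }

            public static void main(String[] args) {
                Cat[] cats = {
                    new Cat("ji"), new Cat("pi"), new Cat("ji"), new Cat("lp"),
                    new Cat("ji"), new Cat("pi"), new Cat("jis"), new Cat("lp"),
                    new Cat("js"), new Cat("psi")
                };
                for (Cat c : removeDuplicates(cats)) {
                    System.out.println(c.data); // prints ji, pi, lp, jis, js, psi
                }
            }
        }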

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews

    - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
    - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
    - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
    - Hierarchical data in RDBMSs: the most comprehensive and well-organized set of links I've seen, but not much in the way of explanation.

    Options

    Ones I am aware of, with general features:

    - Adjacency List
      Columns: ID, ParentID. Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can be stored as a computed column), ancestry & descendants (a Bridge Hierarchy combined with a level column can solve this), and path (a Lineage Column can solve this). Use Common Table Expressions to traverse, in those databases that support them.

    - Nested Set (a.k.a. Modified Preorder Tree Traversal)
      First described by Joe Celko; covered in depth in his book Trees and Hierarchies in SQL for Smarties. Columns: Left, Right. Cheap level, ancestry, and descendants. Compared to the Adjacency List, moves, inserts, and deletes are more expensive. Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.

    - Nested Intervals
      A combination of Nested Set and Materialized Path where the left/right columns are floating-point decimals instead of integers and encode the path information.

    - Bridge Table (a.k.a. Closure Table; there are some good ideas about how to use triggers to maintain this approach)
      Columns: ancestor, descendant. Stands apart from the table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in any particular order). For complete knowledge of a hierarchy it needs to be combined with another option.

    - Flat Table
      A modification of the Adjacency List that adds a Level and a Rank (e.g. ordering) column to each record. Expensive moves and deletes. Cheap ancestry and descendants. Good use: threaded discussion (forums / blog comments).

    - Lineage Column (a.k.a. Materialized Path, Path Enumeration)
      Column: lineage (e.g. /parent/child/grandchild/etc...). There is a limit to how deep the hierarchy can be. Descendants are cheap (e.g. LEFT(lineage, #) = '/enumerated/path'); ancestry is tricky (database-specific queries).

    Database-Specific Notes

    - MySQL: use session variables for the Adjacency List.
    - Oracle: use CONNECT BY to traverse Adjacency Lists.
    - PostgreSQL: the ltree datatype for Materialized Path.
    - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.

  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Map markers (2000 - 7000) via JSON. I'm also using MarkerCluster. It works quickly on Chrome and FF, but IE6 takes a few minutes and just crashes the first time I try to zoom in. I'm not doing any more than just adding the markers to a map using jQuery & the GMap API. So I looked at the following URL of a regular Google Map: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine on IE6. Do you have any ideas why this works while the markers added via the API struggle? Thanks

  • Large ResultSet on PostgreSQL query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine. The table has around 30 sub-tables using PostgreSQL's partitioning capability. The query will return a large result set, something around 1.8 million rows. In my code I use Spring JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is not being called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed. I did this as the PostgreSQL manual recommends. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE. My environment:

    - PostgreSQL 8.3
    - Windows Server 2008 Enterprise 64-bit
    - JRE 1.6 64-bit
    - Spring 2.5.6
    - PostgreSQL JDBC Driver 8.3-603
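
    For what it's worth, the 8.3 driver only streams rows through a server-side cursor when three conditions hold (per the PostgreSQL JDBC documentation): autocommit is off, the statement is forward-only (the default), and a positive fetch size is set. A minimal raw-JDBC sketch of those conditions (the connection details and table name are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class StreamingQuery {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://remotehost/mydb", "user", "secret"); // placeholders

                // Condition 1: the driver ignores fetchSize while autocommit is on.
                conn.setAutoCommit(false);

                // Condition 2: the ResultSet must be TYPE_FORWARD_ONLY (the default).
                Statement stmt = conn.createStatement();

                // Condition 3: a positive fetch size; rows then arrive in batches of 1000.
                stmt.setFetchSize(1000);

                ResultSet rs = stmt.executeQuery("SELECT * FROM big_partitioned_table");
                long count = 0;
                while (rs.next()) {
                    count++; // process each row here instead of materializing the set
                }
                rs.close();
                stmt.close();
                conn.commit();
                conn.close();
                System.out.println("rows: " + count);
            }
        }

    With JdbcTemplate the equivalent knob is setFetchSize(...) on the template, but it only takes effect when the query runs inside a transaction, since that is what turns autocommit off; that would match the symptom of the callback never firing while the driver buffers the whole result.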

  • Best Zend Framework architecture for large reporting site?

    - by Andy
    I have a site of about 60 tabular report pages. I want to convert this to Zend. Each report has two states: an empty report, and a report filled in with data. Each report has its own set of input boxes and select drop-downs to narrow down searches. You click on submit and it retrieves the data; that's all each page does. Do I create 60 controllers, each with a default index action and a getData action? Nothing I have read online really describes how to architect a real site.

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?

        -- Option 1
        SELECT * FROM L INNER JOIN S ON L.id = S.id;

        -- Option 2
        SELECT * FROM S INNER JOIN L ON L.id = S.id;

    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an ASP.NET website that allows the user to download largish files - 30 MB to about 60 MB. Sometimes the download works fine, but often it fails at some varying point before the download finishes, with the message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile, but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event.

        private void DownloadFile(string fname, bool forceDownload)
        {
            string path = MapPath(fname);
            string name = Path.GetFileName(path);
            string ext = Path.GetExtension(path);
            string type = "";

            // set known types based on file extension
            if (ext != null)
            {
                switch (ext.ToLower())
                {
                    case ".mp3":
                        type = "audio/mpeg";
                        break;
                    case ".htm":
                    case ".html":
                        type = "text/HTML";
                        break;
                    case ".txt":
                        type = "text/plain";
                        break;
                    case ".doc":
                    case ".rtf":
                        type = "Application/msword";
                        break;
                }
            }

            if (forceDownload)
            {
                Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_"));
            }

            if (type != "")
            {
                Response.ContentType = type;
            }
            else
            {
                Response.ContentType = "application/x-msdownload";
            }

            System.IO.Stream iStream = null;

            // Buffer to read 10K bytes in chunk:
            byte[] buffer = new Byte[10000];

            // Length of the file:
            int length;

            // Total bytes to read:
            long dataToRead;

            try
            {
                // Open the file.
                iStream = new System.IO.FileStream(path, System.IO.FileMode.Open,
                    System.IO.FileAccess.Read, System.IO.FileShare.Read);

                // Total bytes to read:
                dataToRead = iStream.Length;

                //Response.ContentType = "application/octet-stream";
                //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

                // Read the bytes.
                while (dataToRead > 0)
                {
                    // Verify that the client is connected.
                    if (Response.IsClientConnected)
                    {
                        // Read the data in buffer.
                        length = iStream.Read(buffer, 0, 10000);

                        // Write the data to the current output stream.
                        Response.OutputStream.Write(buffer, 0, length);

                        // Flush the data to the HTML output.
                        Response.Flush();

                        buffer = new Byte[10000];
                        dataToRead = dataToRead - length;
                    }
                    else
                    {
                        // prevent infinite loop if user disconnects
                        dataToRead = -1;
                    }
                }
            }
            catch (Exception ex)
            {
                // Trap the error, if any.
                Response.Write("Error : " + ex.Message);
            }
            finally
            {
                if (iStream != null)
                {
                    // Close the file.
                    iStream.Close();
                }
                Response.Close();
            }
        }

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

    - User clicks a button.
    - Server retrieves a list of images in the form of a data table.
    - Each row in the table contains 8 data cells that in turn each contain one hyperlink.
    - Each request by the user can contain up to 50 rows (I can change this number if need be).
    - That means the table contains upwards of 800 individual DOM elements.

    My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4 - 6 seconds to complete. I'm looking for a way to speed up either of the above mentioned jQuery functions when dealing with massive DOM elements that need to be removed / replaced. I hope my explanation helps.

  • List of fundamental data structures - what am I missing?

    - by jboxer
    I've been studying my fundamental data structures a bunch recently, trying to make sure I've got them down cold. By "fundamental", I mean the real basic ones. Fancy ones like Red-Black Trees and Bloom Filters are clearly worth knowing, but they're usually either enhancements of fundamental ones (Red-Black Trees are binary search trees with special properties to keep them balanced) or they're only useful in very specific situations (Bloom Filters). So far, I'm "fluent" in the following data structures:

    - Arrays
    - Linked Lists
    - Stacks/Queues
    - Binary Search Trees
    - Heaps/Priority Queues
    - Hash Tables

    However, I feel like I'm missing something. Are there any fundamental ones that I'm forgetting about?

    EDIT: Added these after posting the question:

    - Strings (suggested by catchmeifyoutry)
    - Sets (suggested by Peter)
    - Graphs (suggested by Nick D and aJ)
    - B-Trees (suggested by tloach)

    I'm a little on-the-fence about whether these are too fancy or not, but I think they're different enough from the fundamental structures (and important enough) to be worth studying as fundamental.

  • Faster forking of large processes on Linux?

    - by timday
    What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process? My problem is that the forking process is ~500 MB big, and a simple benchmarking test achieves only about 50 forks/s from the process (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application. Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork? I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).

  • How do you filter an NSMutableArray that contains Core Data objects?

    - by James
    I have an array that is populated by Core Data as follows:

        NSMutableArray *mutableFetchResults = [CoreDataHelper getObjectsFromContext:@"Spot" :@"Name" :YES :managedObjectContext];

    It looks like this in the console:

        (entity: Spot; id: 0x4b7e580 ; data: { CityToProvince = 0x4b7dbd0 ; Description = "Friend"; Email = "[email protected]"; Age = 21; Name = "Adam"; Phone = "+44175240"; }),

    How can I filter the array to remove anyone who is over a certain age, or use values in the array to make calculations? Please help, I have been stuck on this for ages. Code would be gratefully appreciated.

  • Write large PDFs with Java sequentially

    - by Benjamin Muschko
    I am looking for a Java library that lets you write large PDFs sequentially with a minimum amount of memory. Most of the libraries I have looked at have to build up the document in memory first before you can actually write it. The problem I have to deal with is OutOfMemoryErrors. It would be great if I could flush the writer programmatically whenever needed, e.g. for each page. Does anyone have any recommendations? I need something with a license along the lines of the LGPL (so not the GPL or the Affero GPL that iText uses).
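
    One possibility (my suggestion, not something from the question): Apache PDFBox is Apache-licensed, and its 2.x releases let you spool document data to a temp file instead of the heap via MemoryUsageSetting, which keeps memory flat while pages are appended sequentially. A minimal sketch:

        import org.apache.pdfbox.io.MemoryUsageSetting;
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.pdmodel.PDPage;
        import org.apache.pdfbox.pdmodel.PDPageContentStream;
        import org.apache.pdfbox.pdmodel.font.PDType1Font;

        public class BigPdf {
            public static void main(String[] args) throws Exception {
                // Buffer page data in a temp file rather than in memory.
                try (PDDocument doc = new PDDocument(MemoryUsageSetting.setupTempFileOnly())) {
                    for (int i = 0; i < 10000; i++) {
                        PDPage page = new PDPage();
                        doc.addPage(page);
                        try (PDPageContentStream content = new PDPageContentStream(doc, page)) {
                            content.beginText();
                            content.setFont(PDType1Font.HELVETICA, 12);
                            content.newLineAtOffset(72, 720);
                            content.showText("Page " + (i + 1));
                            content.endText();
                        }
                    }
                    doc.save("big.pdf");
                }
            }
        }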

  • Selecting an item from a very large list

    - by Ben Fulton
    Suppose I have a list of a couple of thousand organizations and a user needs to be able to select one of them. The list is too large to populate in a dropdown at page load, and the user often knows what they want but it's not the first part of the organization name. That is, they know "Collections" but not that the precise name of the organization is "Department of Collections". So the user will need/want to type in some information. It's easy enough to use an autocompleting textbox of some kind, but I don't want to allow the user to type in random text - they have to choose one of the organizations explicitly. What's the best solution?

  • How to manipulate data in a View using ASP.NET MVC RC 2?

    - by Picflight
    I have a table [Users] with the following columns: [UserId] INT, [BirthDate] SmallDateTime, [Gender] Bit, [Active] Bit. Gender and Active are bit columns that hold either 0 or 1. I am displaying this data in a table on my View. For Gender I want to display 'Male' or 'Female'; how and where do I manipulate the 1's and 0's? Is it done in the repository where I fetch the data, or in the View? For the Active column I want to show a checkbox that will auto-postback on selection change and update the Active field in the database. How is this done without Ajax or jQuery?

  • How do I protect Dynamic data pages using ASP.NET Authentication?

    - by ProfK
    I have a site where most of my pages are arranged in business area folders, e.g. Activations, Outdoors, Branding. Each folder has a small web.config that protects the contents against access by people without a role for that business area. However, basic admin for most business areas is done via Dynamic Data pages. These are only basically protected by not appearing in the menu unless the user has the correct role, but they are still accessible directly via URL, because of the {table}/{Action} routing used by Dynamic Data. What can I do to protect these pages against direct access?

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs as follows:

        DocumentsService myService = new DocumentsService("");
        myService.setUserCredentials("[email protected]", password);
        DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB it results in an exception:

        An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll
        Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full

    How can I upload a large file to Google Docs? I am using Google API ver 2. Thanks

  • C#: Efficiently search a large string for occurences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are >= 4 KB. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bog-down is the keyword matching method. I've found a few possibilities, and all of them give similar efficiency:

    - http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm.
    - Regex, using an instance-level, compiled regex. Provides more functionality than I require, and not quite enough efficiency.
    - String.IndexOf. I would need to do a "smart" version of this for it to provide enough efficiency; looping through each keyword and calling IndexOf doesn't cut it.

    Does anyone know of any algorithms or methods that I can use to attain my goal?

  • How do you pronounce large hex numbers?

    - by warrenm
    This question might be subjective, but I'm hoping there's some consensus that I just don't know about. Short hex numbers are relatively easy to spell out (e.g., 0xC4A might be "cee-four-ay"). Hex numbers ending with a multiple of three zeros are likewise pretty easy (e.g., 0xC000 might be "cee-thousand"). But is there a concise way to pronounce 0xFFFF0000 or 0xCA000000? Magic numbers like 0xDEADBEEF are popular for their pronounceability, but I'm mostly asking about large-ish, round numbers that seem like they should have a more concise pronunciation.

  • Problem with large Canvas in Silverlight

    - by Fury
    Hi, I am developing (using Silverlight 3) an application that creates some kind of timeline and places objects on it. For this purpose I need a really large Canvas (up to 2,000,000 pixels wide) with long lines on it, but whenever I create a Canvas even 40,000 pixels wide it behaves very strangely, randomly disappearing. I have found a post with a description of exactly the same problem on the Silverlight forums, and another one here on Stack Overflow. It seems to be a known problem since Silverlight 2, but I can't find any good workaround. Does anybody know of such a workaround, or can anyone check whether it is still an issue in Silverlight 4? Thanks in advance.

  • How do I include extremely long literals in C++ source?

    - by BillyONeal
    Hello everyone :) I've got a bit of a problem. Essentially, I need to store a large list of whitelisted entries inside my program, and I'd like to include such a list directly - I don't want to have to distribute other libraries and such, and I don't want to embed the strings into a Win32 resource, for a bunch of reasons I don't want to go into right now. I simply included my big whitelist in my .cpp file, and was presented with this error:

        1>ServicesWhitelist.cpp(2807): fatal error C1091: compiler limit: string exceeds 65535 bytes in length

    The string itself is about twice the limit allowed by VC++. What's the best way to include such a large literal in a program?

  • Data Access Layer in ASP.Net: Where do I create the Connection?

    - by atticae
    If I want to create a 3-layer ASP.NET application (presentation layer, business layer, data access layer), where is the best place to create the Connection objects? So far I have used a helper class in my presentation layer to create an IDbCommand from the connection string in the web.config on each page, and passed it on to the DAL classes/methods. Now I am not so sure this part shouldn't also be included in the DAL somehow, because it obviously is part of the data access. The DAL is in a separately compiled project, so I don't have access to the web.config and cannot access the connection string (right?). What is the best practice here?

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database under source control. One of the problems we have is a big table of static data (containing translated language strings); it has close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?
