Search Results

Search found 10417 results on 417 pages for 'large'.

  • How large is a "buffer" in PostgreSQL

    - by Konrad Garus
    I am using the pg_buffercache module to find the hogs eating up my RAM cache. For example, when I run this query:

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    INNER JOIN pg_class c
        ON b.relfilenode = c.relfilenode
        AND b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY 2 DESC
    LIMIT 10;

    I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
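
    For reference, and assuming the stock 8 kB block size (you can confirm it with SHOW block_size): each entry in pg_buffercache is one shared buffer holding one block, so

    120 buffers × 8,192 bytes = 983,040 bytes, i.e. 960 kB.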

    Read the article

  • Large File Download - Connection With Server Reset

    - by daveywc
    I have an ASP.NET website that allows the user to download largish files - 30 MB to about 60 MB. Sometimes the download works fine, but often it fails at some varying point before the download finishes, with a message saying that the connection with the server was reset. Originally I was simply using Server.TransmitFile, but after reading up a bit I am now using the code posted below. I am also setting the Server.ScriptTimeout value to 3600 in the Page_Init event.

    private void DownloadFile(string fname, bool forceDownload)
    {
        string path = MapPath(fname);
        string name = Path.GetFileName(path);
        string ext = Path.GetExtension(path);
        string type = "";
        // set known types based on file extension
        if (ext != null)
        {
            switch (ext.ToLower())
            {
                case ".mp3":
                    type = "audio/mpeg";
                    break;
                case ".htm":
                case ".html":
                    type = "text/HTML";
                    break;
                case ".txt":
                    type = "text/plain";
                    break;
                case ".doc":
                case ".rtf":
                    type = "Application/msword";
                    break;
            }
        }
        if (forceDownload)
        {
            Response.AppendHeader("content-disposition", "attachment; filename=" + name.Replace(" ", "_"));
        }
        if (type != "")
        {
            Response.ContentType = type;
        }
        else
        {
            Response.ContentType = "application/x-msdownload";
        }

        System.IO.Stream iStream = null;
        // Buffer to read 10K bytes in chunk:
        byte[] buffer = new Byte[10000];
        // Length of the file:
        int length;
        // Total bytes to read:
        long dataToRead;
        try
        {
            // Open the file.
            iStream = new System.IO.FileStream(path, System.IO.FileMode.Open,
                System.IO.FileAccess.Read, System.IO.FileShare.Read);
            // Total bytes to read:
            dataToRead = iStream.Length;
            //Response.ContentType = "application/octet-stream";
            //Response.AddHeader("Content-Disposition", "attachment; filename=" + filename);

            // Read the bytes.
            while (dataToRead > 0)
            {
                // Verify that the client is connected.
                if (Response.IsClientConnected)
                {
                    // Read the data in buffer.
                    length = iStream.Read(buffer, 0, 10000);
                    // Write the data to the current output stream.
                    Response.OutputStream.Write(buffer, 0, length);
                    // Flush the data to the HTML output.
                    Response.Flush();
                    buffer = new Byte[10000];
                    dataToRead = dataToRead - length;
                }
                else
                {
                    // prevent infinite loop if user disconnects
                    dataToRead = -1;
                }
            }
        }
        catch (Exception ex)
        {
            // Trap the error, if any.
            Response.Write("Error : " + ex.Message);
        }
        finally
        {
            if (iStream != null)
            {
                // Close the file.
                iStream.Close();
            }
            Response.Close();
        }
    }

    Read the article

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
    Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

    - User clicks a button
    - Server retrieves a list of images in the form of a data-table
    - Each row in the table contains 8 data-cells that in turn each contain one hyperlink
    - Each request by the user can contain up to 50 rows (I can change this number if need be)
    - That means the table contains upwards of 800 individual DOM elements

    My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4-6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with massive numbers of DOM elements that need to be removed or replaced. I hope my explanation helps.

    Read the article

  • Compute weighted averages for large numbers

    - by Travis
    I'm trying to get the weighted average of a few numbers. Basically I have:

    Price - 134.42
    Quantity - 15236545

    There can be as few as one or two, or as many as fifty or sixty, pairs of prices and quantities. I need to figure out the weighted average of the price. Basically, the weighted average should give very little weight to pairs like

    Price - 100000000.00
    Quantity - 3

    and more to the pair above. The formula I currently have is:

    ((price)(quantity) + (price)(quantity) + ...) / totalQuantity

    So far I have this done:

    double optimalPrice = 0;
    int totalQuantity = 0;
    double rolling = 0;
    System.out.println(rolling);
    Iterator it = orders.entrySet().iterator();
    while (it.hasNext()) {
        System.out.println("inside");
        Map.Entry order = (Map.Entry) it.next();
        double price = (Double) order.getKey();
        int quantity = (Integer) order.getValue();
        System.out.println(price + " " + quantity);
        rolling += price * quantity;
        totalQuantity += quantity;
        System.out.println(rolling);
    }
    System.out.println(rolling);
    return totalQuantity / rolling;

    The problem is I very quickly max out the "rolling" variable. How can I actually get my weighted average? Thanks!
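
    This is not the poster's code - just a sketch of one way to keep the running total exact however large the price × quantity products get: accumulate in BigDecimal, and note that the final division should be the weighted sum over the total quantity (the return statement above has it the other way around).

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class WeightedAverage {

        // sum(price * quantity) / sum(quantity), accumulated in BigDecimal so the
        // intermediate sums can grow arbitrarily large without overflow or drift.
        static double weightedAveragePrice(Map<Double, Integer> orders) {
            BigDecimal weightedSum = BigDecimal.ZERO;
            BigDecimal totalQuantity = BigDecimal.ZERO;
            for (Map.Entry<Double, Integer> order : orders.entrySet()) {
                BigDecimal price = BigDecimal.valueOf(order.getKey());
                BigDecimal quantity = BigDecimal.valueOf(order.getValue().longValue());
                weightedSum = weightedSum.add(price.multiply(quantity));
                totalQuantity = totalQuantity.add(quantity);
            }
            // Weighted sum divided by total quantity - not the other way around.
            return weightedSum.divide(totalQuantity, MathContext.DECIMAL64).doubleValue();
        }

        public static void main(String[] args) {
            Map<Double, Integer> orders = new LinkedHashMap<>();
            orders.put(134.42, 15236545);
            orders.put(100000000.00, 3);
            System.out.println(weightedAveragePrice(orders)); // dominated by the large-quantity pair
        }
    }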

    Read the article

  • C# System.Diagnostics.Process redirecting Standard Out for large amounts of data

    - by Matt
    I am running an exe from a .NET app and trying to redirect standard out to a StreamReader. The problem is that when I do myprocess.exe > out.txt, out.txt is close to 14 MB. The command-line version is very fast, but when I run the process from my C# app it is excruciatingly slow, because I believe the default stream reader flushes every 4096 bytes. Is there a way to change the default stream reader for the Process object?

    Read the article

  • How to split a very large database on SQL Server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database, so that I don't have a single database file that is 90 GB. I want to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure whether it would solve all my problems. I wasn't able to find anything that would tell me whether or not it would migrate pre-existing data, or whether it only works for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken

    Read the article

  • How to use R-Tree for plotting large number of map markers on google maps

    - by Eeyore
    After searching SO and multiple articles I haven't found a solution to my problem. What I am trying to achieve is to load 20,000 markers on Google Maps. An R-Tree seems like a good approach, but it's only helpful when searching for points within the visible part of the map. When the map is zoomed out it will return all of the points and... crash the browser. There is also the problem of dragging the map and re-running the query at the end of the drag. I would like to know how I can use an R-Tree and still achieve all of the above.

    Read the article

  • Best practice to structure large html-based project

    - by AntonAL
    I develop a Rails-based website and enjoy using partials for some common "components". Recently, I faced a problem with CSS interference: styles for one component (described in CSS) override styles for other components. For example, one component has

    <ul class="items">

    and another component has it too, but that ul has a different meaning in the two components. On the other hand, I want to "inherit" some styles for one component from another. For example, let's say we have one component, called "post":

    <div class="post">
      <!-- post's stuff -->
      <ul class="items"> ... </ul>
    </div>

    And another component, called "new-post":

    <div class="new-post">
      <!-- post's stuff -->
      <ul class="items"> ... </ul>
      <!-- new-post's stuff -->
      <div class="tools">...</div>
    </div>

    Post and new-post have something in common ("post's stuff"), and I want to write CSS rules that handle both "post" and "new-post". New-post has "subcomponents", for example editing tools, which also contain:

    <ul class="items">

    This is where the CSS rules start to interfere: some rules targeted at ul.items (in post and new-post) also apply to the subcomponent of new-post called "tools". On the one hand, I want to inherit some styles; on the other hand, I want better encapsulation. What are the best practices to avoid this kind of problem?

    Read the article

  • Linear Recurrence for very large n

    - by Android Decoded
    I was trying to solve this problem on SPOJ (http://www.spoj.pl/problems/REC/):

    F(n) = a*F(n-1) + b

    where we have to find F(n) mod M, with 0 <= a, b, n <= 10^100 and 1 <= M <= 100000. I am trying to solve it with BigInteger in Java, but if I run a loop from 0 to n it gets TLE (time limit exceeded). How could I solve this problem? Can anyone give me a hint? Don't post the solution - I just want a hint on how to solve it efficiently.
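
    Not a spoiler, just one hint-sized building block sketched in Java (the values below are made up): java.math.BigInteger can already raise a number to a 100-digit exponent modulo M without any loop over n, which is usually the key to avoiding the TLE here.

    import java.math.BigInteger;

    public class ModPowHint {
        public static void main(String[] args) {
            BigInteger a = new BigInteger("123456789012345678901234567890");
            BigInteger n = BigInteger.TEN.pow(100).subtract(BigInteger.ONE); // a 100-digit exponent
            BigInteger m = BigInteger.valueOf(100000);
            // a^n mod m in O(log n) multiplications - no loop from 0 to n required.
            System.out.println(a.modPow(n, m));
        }
    }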

    Read the article

  • Flex - weird display behavior on large number of Canvas

    - by itarato
    Hi, I have a Flex app (SDK 3.5 - FP10) that does mindmap trees. Every node is a Canvas (I'm using Canvas-specific properties, so I needed it). It has a shadow effect, a background color and some small UI elements on it (like icons and text). It works perfectly until it goes over ~700 nodes (Canvases). Over that number it shows grey rectangles: http://yfrog.com/bhw2pj . If I turn off the DropShadowFilter effect for the Canvas, they are also gone, so I assume it's a DropShadowFilter problem: http://yfrog.com/2d9y8j . The effect is simple:

    private static var _nodeDropShadow:DropShadowFilter = new DropShadowFilter(1, 45, 0x888888, 1, 1, 1);
    _backgroundComp.filters = _nodeDropShadow;

    Is it possible that Flex can't handle that much? Thanks in advance

    Read the article

  • Does Python work in larger teams?

    - by Kugel
    I read this post last night and it got me thinking. I like Python and its "batteries", PyPI and such. But I've only done Python solo; I've never tried it in a team. Are the points that Ted mentions valid? If they are, how do teams cope with them? Does Python work in teams, or even large teams, or does it kill productivity? I personally see the problems he mentions when I come back to my old code. Even when working with other modules I sometimes need to peek inside. I would like to hear from people with experience of this.

    Read the article

  • Indexing large DB's with Lucene/PHP

    - by thebluefox
    Afternoon chaps, Trying to index a 1.7 million row table with the Zend port of Lucene. On small tests of a few thousand rows it's worked perfectly, but as soon as I try to up the rows to a few tens of thousands, it times out. Obviously, I could increase the time PHP allows the script to run, but seeing as 360 seconds gets me ~10,000 rows, I'd hate to think how many seconds it'd take to do 1.7 million. I've also tried making the script run a few thousand, refresh, and then run the next few thousand, but doing this clears the index each time. Any ideas guys? Thanks :)

    Read the article

  • Cache for large read only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations, etc.; it is never updated, only queried. The database contains 15,000 rows of coordinates and 48,000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which takes approx 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment - can you please advise whether this would be my best option?
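
    Not a verdict on memcached - just a minimal sketch (the class below is invented for illustration, not an existing API) of the simplest option when the data set is only ~63,000 rows and never changes: run the slow query once at startup and serve every later lookup from an in-memory map.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;
    import java.util.function.Supplier;

    // Read-only cache: load the rows once, index them by key, and answer all
    // later lookups from memory. ~63,000 coordinate rows fit easily in the heap.
    public class ReadOnlyCache<K, V> {

        private final Map<K, V> byKey = new ConcurrentHashMap<>();

        // loader: runs the (slow) database query once; keyFn: extracts the map key.
        public ReadOnlyCache(Supplier<Iterable<V>> loader, Function<V, K> keyFn) {
            for (V row : loader.get()) {
                byKey.put(keyFn.apply(row), row);
            }
        }

        public V get(K key) {
            return byKey.get(key);
        }

        public int size() {
            return byKey.size();
        }
    }

    A shared cache such as memcached (or Hibernate's second-level cache) mainly pays off once several application servers need to see the same cached data, or once the data set stops fitting comfortably in memory.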

    Read the article

  • Large tables of static data with DBGhost

    - by Paulo Manuel Santos
    We are thinking of restructuring our database development and deployment processes by using DBGhost; we want to move away from the central development database and bring the database into source control. One of the problems we have is a big table of static data (containing translated language strings); it has close to 200K rows. I know that our best solution is to move these strings into resource files, but until we implement that, will DBGhost be able to maintain all this static data and generate our development and deployment databases in a short time? And if not, is there a good alternative for filling up this table whenever we need to?

    Read the article

  • Objective C code to handle large amount of data processing in iPhone

    - by user167662
    I had the following code that takes in 14 MB or more of image data encoded as base64 strings and converts them to JPEG before writing them to files on the iPhone. It crashes my program, giving the following error:

    Program received signal: “0”. warning: check_safe_call: could not restore current frame

    I tweaked my program and it can process a few more images before the error appears again. My code is as follows:

    // parameters is an array where the fourth element contains a list of images as base64-encoded strings
    NSMutableArray *imageStrList = (NSMutableArray*) [parameters objectAtIndex:5];
    while (imageStrList.count != 0) {
        NSString *imgString = [imageStrList objectAtIndex:0];
        // Create a file name using my own Utility class
        NSString *fileName = [Utility generateFileNName];
        NSData *restoredImg = [NSData decodeWebSafeBase64ForString:imgString];
        UIImage *img = [UIImage imageWithData: restoredImg];
        NSData *imgJPEG = UIImageJPEGRepresentation(img, 0.4f);
        [imgJPEG writeToFile:fileName atomically:YES];
        [imageStrList removeObjectAtIndex:0];
    }

    I tried playing around with UIImageJPEGRepresentation and found that the lower the value, the more images it can process, but this should not be the way. I am wondering if there is any way to free up the memory of imageStrList immediately after processing each image, so that it can be used by the next one in line.

    Read the article

  • Migrating from Physical SQL (SQL2000) To VMWare machine (SQL2008) - Transferring Large DB

    - by alex
    We're in the middle of migrating from a Windows & SQL 2000 box to a virtualised Windows & SQL 2008 box. The VMware box is on a different site, with better hardware, connectivity etc... The old (current) physical machine is still in constant use. I've taken a backup of the DB on this machine, which is 21 GB, and transferring it to our virtual machine took around 7+ hours - which isn't ideal when we do the "actual" switchover. My question is: how should I handle the migration better? Could I set up our current machine to do log shipping to the VM to keep it up to date, then schedule downtime out of hours to do the switchover? Is there a better way?

    Read the article

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, which reverses the file in a scalable and efficient way?

    - The text file can be massive (gigabytes long)
    - A multiple-server/clustered environment can be assumed, to do this in a distributed manner
    - Open-source library suggestions are welcome

    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to treat the file as a string buffer in memory). Also, I am thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
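
    As a point of comparison for the single-machine side of this, here is a minimal sketch (plain RandomAccessFile rather than NIO or MapReduce) that reverses a file of any size with one fixed-size buffer by walking the file backwards in chunks. It reverses byte order; reversing line order instead would need a little extra bookkeeping at chunk boundaries.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class ReverseFile {

        public static void reverse(String inPath, String outPath) throws IOException {
            final int chunkSize = 8 * 1024 * 1024; // 8 MB working buffer, independent of file size
            try (RandomAccessFile in = new RandomAccessFile(inPath, "r");
                 RandomAccessFile out = new RandomAccessFile(outPath, "rw")) {
                long remaining = in.length();
                byte[] buffer = new byte[chunkSize];
                while (remaining > 0) {
                    int toRead = (int) Math.min(chunkSize, remaining);
                    long chunkStart = remaining - toRead;
                    in.seek(chunkStart);
                    in.readFully(buffer, 0, toRead);
                    // Reverse the chunk in place before appending it to the output.
                    for (int i = 0, j = toRead - 1; i < j; i++, j--) {
                        byte tmp = buffer[i];
                        buffer[i] = buffer[j];
                        buffer[j] = tmp;
                    }
                    out.write(buffer, 0, toRead);
                    remaining = chunkStart;
                }
            }
        }

        public static void main(String[] args) throws IOException {
            reverse(args[0], args[1]);
        }
    }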

    Read the article

  • Python 3.1 - Memory Error during sampling of a large list

    - by jimy
    The input list can be more than 1 million numbers. When I run the following code with smaller 'repeats', it's fine:

    def sample(x):
        length = 1000000
        new_array = random.sample((list(x)), length)
        return (new_array)

    def repeat_sample(x):
        i = 0
        repeats = 100
        list_of_samples = []
        for i in range(repeats):
            list_of_samples.append(sample(x))
        return (list_of_samples)

    repeat_sample(large_array)

    However, using high repeats such as the 100 above results in a MemoryError. The traceback is as follows:

    Traceback (most recent call last):
      File "C:\Python31\rnd.py", line 221, in <module>
        STORED_REPEAT_SAMPLE = repeat_sample(STORED_ARRAY)
      File "C:\Python31\rnd.py", line 129, in repeat_sample
        list_of_samples.append(sample(x))
      File "C:\Python31\rnd.py", line 121, in sample
        new_array = random.sample((list(x)),length)
      File "C:\Python31\lib\random.py", line 309, in sample
        result = [None] * k
    MemoryError

    I am assuming I'm running out of memory. I do not know how to get around this problem. Thank you for your time!

    Read the article

  • TSQL, select values from large many-to-many relationship

    - by eugeneK
    I have two tables, Publishers and Campaigns; both have similar many-to-many relationships with Countries, Regions, Languages and Categories. More info: Publisher2Categories has publisherID and categoryID, which are foreign keys to publisherID in Publishers and categoryID in Categories, which are identity columns. On the other side I have Campaigns2Categories with campaignID and categoryID columns, which are foreign keys to campaignID in Campaigns and categoryID in Categories, which again are identities. The same goes for the Regions, Languages and Countries relationships. I pass a certain publisherID to the query and want to get the campaignIDs of Campaigns that share at least one Region, Country, Language or Category value with that Publisher. Thanks

    Read the article

  • Best practice to modularise a large Grails app?

    - by Mulone
    Hi all, A Grails app I'm working on is becoming pretty big, and it would be good to refactor it into several modules so that we don't have to redeploy the whole thing every time. In your opinion, what is the best practice for splitting a Grails app into several modules? In particular I'd like to create a package of domain classes + relevant services and use it in the app as a module. Is this possible? Is it possible to do it with plugins? Cheers, Mulone

    Read the article

  • Database for managing large volumes of (system) metrics

    - by symcbean
    Hi, I'm looking at building a system for managing and reporting stats on web page performance. I'll be collecting a lot more stats than are available in the standard log formats (approx 20 metrics), but compared to most types of database applications, the base data structure will be very simple. My problem is that I'll be accumulating a lot of data - in the region of 100,000 records (i.e. sets of metrics) per hour. Of course, resources are very limited! So that it's possible to sensibly interact with the data, I'd need to consolidate each metric into one-minute bins, broken down by URL; then, for anything more than 1 day old, consolidate into 10-minute bins, and at 1 week, into hourly bins. At the front end, I want to provide a view (preferably as plots) of the last hour of data, with the facility for users to drill up/down through defined hierarchies of URLs (which do not always map directly to the hierarchy expressed in the path of the URL) and to view different time frames. Rather than coding all this myself and using a relational database, I was wondering if there were tools available that would facilitate both the management of the data and the reporting. I had a look at Mondrian; however, I can't see from the documentation whether it's possible to drop the more granular information while maintaining the consolidated views of the data. RRDtool looks promising in terms of managing the data consolidation, but seems rather limited in terms of querying the dataset as a multi-dimensional/relational database. What else should I be looking at?
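
    Purely to make the one-minute consolidation step concrete (not a substitute for RRDtool or Mondrian - the class and method names here are invented), a toy rollup that averages raw samples into per-URL minute bins; applying the same pattern again with 10-minute and hourly keys gives the coarser tiers.

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.HashMap;
    import java.util.Map;

    // Toy rollup: raw (url, timestamp, value) samples are averaged into
    // one-minute bins keyed by (url, minute).
    public class MinuteRollup {

        static final class Bin {
            double sum;
            long count;
            double average() { return count == 0 ? 0 : sum / count; }
        }

        private final Map<String, Bin> bins = new HashMap<>();

        public void add(String url, Instant timestamp, double value) {
            Instant minute = timestamp.truncatedTo(ChronoUnit.MINUTES);
            Bin bin = bins.computeIfAbsent(url + "|" + minute, k -> new Bin());
            bin.sum += value;
            bin.count++;
        }

        public double averageFor(String url, Instant anyTimeInMinute) {
            Instant minute = anyTimeInMinute.truncatedTo(ChronoUnit.MINUTES);
            Bin bin = bins.get(url + "|" + minute);
            return bin == null ? 0 : bin.average();
        }
    }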

    Read the article

  • Robust Large File Transfer with WCF

    - by Sharov
    I want to transfer big files (1 GB) over unreliable transport channels. When the connection is interrupted, I don't want to start the file transfer from the beginning. I can partially store the file in a temp table and record the last read position, so that when the connection is re-established I can request that uploading continue from that position. Is there any best practice for this kind of thing? I'm currently using the chunking channel.

    Read the article

  • Inserting a large number of dates

    - by Radhe
    How can I insert all dates in a year (or more) into a table using SQL? My dates table has the following structure:

    dates(date1 date);

    Suppose I want to insert all dates between "2009-01-01" and "2010-12-31" inclusive. Is there an SQL query for the above?

    Read the article
