Search Results

Search found 3618 results on 145 pages for 'huge'.

  • Python templates for huge HTML/XML

    - by newtover
    Hello, recently I needed to generate a huge HTML page containing a report with a several-thousand-row table, and, obviously, I did not want to build the whole HTML (or the underlying tree) in memory. As a result, I built the page with good old string interpolation, but I do not like the solution. So I wonder: are there Python templating engines that can yield the resulting page content in parts?
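
    For what it's worth, a minimal sketch of one engine that can do this: Jinja2 renders a template as a stream of chunks via Template.generate(), so the full page never exists in memory at once. The template text and the row generator below are illustrative assumptions, not part of the original question.

        from jinja2 import Template

        template = Template(
            "<table>\n"
            "{% for row in rows %}"
            "<tr><td>{{ row.id }}</td><td>{{ row.value }}</td></tr>\n"
            "{% endfor %}"
            "</table>\n"
        )

        def fetch_rows():
            # stand-in for a real database cursor or report source
            for i in range(5000):
                yield {"id": i, "value": "row %d" % i}

        # generate() yields rendered chunks instead of one big string
        with open("report.html", "w") as out:
            for chunk in template.generate(rows=fetch_rows()):
                out.write(chunk)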

  • Load huge sql dump to mySql

    - by jonathan
    Hi, I have a huge MySQL dump file generated from phpMyAdmin (150,000 lines of CREATE TABLE statements, INSERTs, ...). The MySQL query tool fails to open such a big file, so I can't insert all the records. Is there a way to do this? Thanks, John
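
    A common workaround, sketched here, is to skip GUI tools entirely and stream the dump into the mysql command-line client (the shell equivalent is mysql -u user -p mydatabase < dump.sql); the client reads the file incrementally, so size is not an issue. The user, database, and file names below are placeholders.

        import subprocess

        # Feed the dump to the mysql CLI on stdin; the file is streamed,
        # never loaded into memory at once.
        with open("dump.sql", "rb") as dump:
            subprocess.run(
                ["mysql", "-u", "user", "-p", "mydatabase"],
                stdin=dump,
                check=True,  # raise CalledProcessError if mysql fails
            )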

  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello, I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET. So far I have used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemoryException was thrown (I assume because I cannot load the whole table into memory). :) Now I am looking for a good way to perform this extract/transform/load without storing the whole table in memory. I would like to use a DataReader and read the single data records into a DataTable; once there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to have free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory. My code so far:

        Dim dt As New DataTable
        Dim count As Int32
        count = 0
        ' reads data records from the Oracle database table
        While rdr.Read()
            ' read n records and add them to a dataTable
            While count < 10000
                dt.Rows.Add(????)
                count = count + 1
            End While
            ' transform data in the dataTable, and insert it into the destination
            ' flush the dataTable after insertion
            count = 0
        End While

    Thank you very much for your response!

  • Saving a huge dataset into SQL table in a XML column - MS SQL, C#.Net

    - by NLV
    Hello all, I have a requirement where I have to save a DataSet, which has multiple tables in it, into a SQL table's XML column by converting it to XML. The problem is that in some cases the DataSet grows extremely huge and I get an OutOfMemoryException. What I basically do is convert the DataSet into an XML file, save it to the local disk, and then load the XML again and send it as a stored procedure parameter. But when I write the DataSet XML to disk the file is over 700 MB, and when I load it into an XmlDocument object in memory I get the OutOfMemoryException. How can I get the DataSet XML without saving it to a file and re-reading it again? Thank you, NLV

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service, trying to track down an OutOfMemoryException, and discovered that my stack size is huge and growing because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so when my app has 754 threads the stack size is 772 MB. The problem for me is that I don't know where these threads come from: initially my app has a very limited number of threads, and they keep growing with time. I have two suspicions - either these threads are created by WCF or by the database connections. My application uses both WCF and DataSets. Also, when I profile my app (in ANTS/dotTrace) I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel objects, and this number increases with time; I can see thousands of these objects created. So what I want to know is who is creating the threads (and which tools or profilers can discover this), and whether it is WCF that is creating them.

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287 GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to some point: before, I got a StackOverflowError, but now, after solving the first problem, I get a java.lang.OutOfMemoryError: Java heap space error. My code (partly taken from Justin Kramer's answer) looks like this:

        (defn process-pages [page]
          (let [title (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))

    I don't show the article-title, revision-user and revision-timestamp functions, because they simply take data from a specific place in the page or revision hash. Could anyone help me with this? I'm really new to Clojure and don't get the problem.

  • C# huge size 2-dim arrays

    - by 4eburek
    I need to declare square matrices in C# WinForms with more than 20000 items in a row. I read about the 2 GB .NET object size limit on 32-bit (and the same limit on a 64-bit OS). So as I understand it, the only answer is to use unsafe code or a separate library built with a C++ compiler. The problem is strange to me, because ushort[20000,20000] is smaller than 2 GB, but actually I cannot allocate even 700 MB of memory - my limit is 650 MB, and I don't understand why, since I have 32-bit WinXP with 3 GB of memory. I tried to use Marshal.AllocHGlobal(700<<20), but it throws an OutOfMemoryException; GC.GetTotalMemory returns 4.5 MB before the allocation attempt. I have found only that many people say to use unsafe code, but I cannot find an example of how to declare a 2-dim array on the heap (no stack can hold such a huge amount of data) and how to work with it using pointers. Is it pure C++ code inside unsafe{} brackets? Could you please provide a small example of working with matrices using pointers in unsafe code?

  • What is the best possible technology for pulling huge data from 4 remote servers

    - by Habib Ullah Bahar
    Hello, for one of our projects we need to pull huge real-time stock data from 4 remote servers across two countries. The trivial process here is to check the sources at a regular interval and save the updates to a database. But as this is real-time stock data for more than 1000 companies, I have to pull every second, which I think isn't good in terms of memory and bandwidth. Please give me suggestions on which technology/platform [we are flexible here: PHP, Python, Java or Perl - any one of them will be OK for us] we should choose, so this can be achieved easily and with good performance.
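
    Whichever platform is chosen, the load usually stays manageable if the four sources are polled concurrently and only the responses are written out. A minimal sketch of that shape in Python (the feed URLs and the save function are placeholder assumptions, not anything from the question):

        import asyncio
        import aiohttp

        SOURCES = [  # placeholder feed URLs
            "http://feed1.example.com/quotes",
            "http://feed2.example.com/quotes",
        ]

        def save_to_db(url, payload):
            pass  # stand-in: persist the update to the database

        async def poll(session, url):
            async with session.get(url) as resp:
                save_to_db(url, await resp.text())

        async def main():
            async with aiohttp.ClientSession() as session:
                while True:
                    # hit every source concurrently, once per second
                    await asyncio.gather(*(poll(session, u) for u in SOURCES))
                    await asyncio.sleep(1)

        asyncio.run(main())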

  • ASP.NET and SQL server with huge data sets

    - by Jake Petroules
    I am developing a web application in ASP.NET, and on one page I am using a ListView with paging. As a test I populated the table it draws from with 6 million rows. The table, and a schema-bound view based on it, have all the necessary indexes, and executing the query in SQL Server Management Studio with SELECT TOP 5 returned in under 1 second, as expected. But on the ASP.NET page, with the same query, it seems to be selecting all 6 million rows without any limit. Shouldn't the paging control limit the query to return only N rows rather than the entire data set? How can I use these ASP.NET controls to handle huge data sets with millions of records? Does SELECT [columns] FROM [tablename] quite literally mean that for the ListView - does it never inject a TOP <n>, doing all the pagination at the application level rather than the database level?

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. Now they have bought a new server: 4 × 6-core = 24 CPU cores, 48 GB RAM, 2 RAID controllers with 256 MB cache each and 8 SAS 15K HDDs on each, running a 64-bit OS. I'm wondering which would be the faster configuration: 1) FB 2.5 SuperServer with a huge buffer of 8192-byte pages × 3,500,000 pages = 29 GB, or 2) FB 2.5 Classic with a small buffer of 1000 pages. Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

  • Which can handle a huge surge of queries: SQL Server 2008 Fulltext or Lucene

    - by Luke101
    I am creating a widget that will be installed on several websites and blogs. The widget will analyse the remote webpage's title and content, then return relevant articles/links from my website. The amount of traffic we expect will be very high - roughly 500K queries a day, and up from there. I need the queries to be returned very quickly, so I need the candidate to be high-performance, similar to Google AdSense. The remote title can be from 5 to 50 words, and for the description we will use no more than 3000 words. Which of these two do you think can handle the load?

  • python fdb save huge data from database to file

    - by peter
    I have this script:

        SELECT = """
        select coalesce (p.ID,'') as id,
               coalesce (p.name,'') as name
        from TABLE as p
        """

        self.cur.execute(SELECT)
        for row in self.cur.itermap():
            xml += " <item>\n"
            xml += "  <id>" + id + "</id>\n"
            xml += "  <name>" + name + "</name>\n"
            xml += " </item>\n\n"

        # save xml to file here
        f = open...

    and I need to save data from a huge database to a file. There are tens of thousands of items (up to 40,000) in my database, and the script takes a very long time (an hour and more) to finish. How can I take the data I need from the database and save it to the file "at once", as quickly as possible? (I don't need XML output, because I can process the data later on my server; I just need to do it as quickly as possible. Any idea?) Many thanks!
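
    One likely culprit, sketched below: appending to a single ever-growing xml string is quadratic, because each += copies everything accumulated so far. Writing each row straight to the file keeps the work linear. The sketch assumes the same fdb-style cursor and columns as the script above.

        # Stream rows to the file instead of building one giant string.
        with open("output.xml", "w") as f:
            f.write("<items>\n")
            for row in cur.itermap():  # the cursor from the question
                f.write(" <item>\n")
                f.write("  <id>%s</id>\n" % row["id"])
                f.write("  <name>%s</name>\n" % row["name"])
                f.write(" </item>\n\n")
            f.write("</items>\n")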

  • storing huge amount of records into classic asp cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the Application cache object. But that's not the killer: before storing the recordset, the code converts the ADO stream to XML and then stores it. This conversion of the huge recordset to XML spikes the CPU and causes all kinds of havoc for users while it's happening. And unfortunately we do this XML conversion a lot when reading the cache, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously still need caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data that avoids the XML conversion to/from the cache on every read/update?

  • Books and resources for Java Performance tuning - when working with databases, huge lists

    - by Arvind
    Hi all, I am relatively new to working on huge applications in Java. I am working on a Java web service which is pretty heavily used by various clients. The service basically queries the database (Hibernate) and then works with a lot of Lists (there are adapters to convert the list returned from the DB to the interface which the service publishes), and I am seeing a lot of issues with the service, like high CPU usage or high heap usage. While I can troubleshoot the performance issues using a profiler, I want to actually learn what I need to take care of when I write the code in the first place - like what kind of List to use, or things like using StringBuilder instead of String, etc. Is there any book or blog I can refer to which will help me while I write new services? Also, my application is multithreaded - each service call from a client is a new thread - and I want to know some best practices around that area as well. I did search the web, but I found many tips which are no longer relevant in the latest Java 6 releases, so I wanted to know what kind of resources would help a developer starting out now on Java for heavily used applications. Arvind

  • jsf and huge web site

    - by darko petreski
    Hi all, I need to build a very big web site. The site should contain a public part with a search engine, user registration forms, a cart and all the other usual stuff. The private part should contain an administrative web application with a lot of data grids, complex edit forms, security checks, services and so on. I also need very good support for HTML mailing and PDF export, and web services are required as well. I am considering using JSF. Can you recommend books or other material where I can find all the information for building what I need? Is Seam a good option for this? I have looked at their site and the "Seam in production" page; the sites listed there are far from professional and enterprise-grade. Is there any book that explains how to build a complete web site with JSF technology - from login, security, sending mail, PDF conversion and time-dependent background processes to everything else that big sites need? I do not have time to read 20 books. Every PHP book explains all of this, but I have not seen a single Java book that, at least from a high-level view, explains all of it. Regards

  • Collision detection of huge number of circles

    - by Tomek Tarczynski
    What is the best way to check collisions among a huge number of circles? It's very easy to detect a collision between two circles, but if we check every combination it is O(n^2), which is definitely not an optimal solution. We can assume that a circle object has the following properties: coordinates, radius, velocity and direction. Velocity is constant, but direction can change. I've come up with two solutions, but maybe there are better ones (a sketch of the first follows below).

        Solution 1: Divide the whole space into overlapping squares and check for collisions only between circles that are in the same square. The squares need to overlap so there won't be a problem when a circle moves from one square to another.

        Solution 2: At the beginning, calculate the distance between every pair of circles. If the distance is small, store the pair in a list and check it for collision on every update. If the distance is big, store the update number after which a collision could first occur (it can be calculated, because we know the distance and the velocities); this goes into some kind of priority queue. After the precalculated number of updates, check the distance again and repeat the procedure - put the pair on the list or back into the priority queue.
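
    A minimal sketch of the grid idea in Solution 1, in Python for brevity: a common variant hashes each circle into every cell its bounding box touches, so the cells themselves need no overlap, and only circles sharing a cell are tested pairwise. The cell size is an assumption (roughly twice the largest radius works well):

        from collections import defaultdict
        from itertools import combinations
        import math

        CELL = 32.0  # cell size; ~2x the largest radius is a reasonable default

        def build_grid(circles):
            """Map each grid cell to the circles whose bounding boxes touch it."""
            grid = defaultdict(list)
            for c in circles:  # c = (x, y, r)
                x, y, r = c
                for i in range(int((x - r) // CELL), int((x + r) // CELL) + 1):
                    for j in range(int((y - r) // CELL), int((y + r) // CELL) + 1):
                        grid[(i, j)].append(c)
            return grid

        def collisions(circles):
            hits = set()
            for cell in build_grid(circles).values():
                for a, b in combinations(cell, 2):  # only pairs sharing a cell
                    if math.hypot(a[0] - b[0], a[1] - b[1]) <= a[2] + b[2]:
                        hits.add((a, b))  # a pair seen in several cells is deduped
            return hits

        print(collisions([(0, 0, 5), (6, 0, 5), (100, 100, 5)]))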

  • finding ALL cycles in a huge sparse matrix

    - by Andy
    Hi there, first of all I'm quite a Java beginner, so I'm not sure if this is even possible! Basically I have a huge (3+ million node) data source of relational data (i.e. A is friends with B+C+D, B is friends with D+G+Z (but not A - i.e. unmutual), etc.) and I want to find every cycle within this (not necessarily connected) directed graph. I've found this thread (http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402) which pointed me to Donald Johnson's (elementary) cycle-finding algorithm which, superficially at least, looks like it'll do what I'm after (I'm going to try when I'm back at work on Tuesday - thought it wouldn't hurt to ask in the meanwhile!). I had a quick scan through the code of the Java implementation of Johnson's algorithm (in that thread) and it looks like a matrix of relations is the first step, so I guess my questions are:

        a) Is Java capable of handling a 3+million × 3+million matrix? (I was planning on representing A-friends-with-B by a binary sparse matrix.)
        b) Do I need to find every connected subgraph as my first problem, or will cycle-finding algorithms handle disjoint data?
        c) Is this actually an appropriate solution for the problem? My understanding of "elementary" cycles is that in the graph below, rather than picking out A-B-C-D-E-F it'll pick out A-B-F, B-C-D etc., but that's not the end of the world given the task.

              E
             / \
            D---F
           / \ / \
          C---B---A

        d) If necessary, I can simplify the problem by enforcing mutuality in relations - i.e. A-friends-with-B <== B-friends-with-A - and if really necessary I can maybe cut down the data size, but realistically it is always going to be around the 1 mil mark.
        z) Is this a P or NP task?! Am I biting off more than I can chew?

    Thanks all, any help appreciated! Andy
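
    On question (a), a sketch can make the sizes concrete: a dense 3M × 3M bit matrix is about 9 × 10^12 cells - over a terabyte even at one bit per cell - while an adjacency list stores only the edges that actually exist. A hypothetical friends file with one "source target" pair per line could be loaded like this (shown in Python rather than Java, just for the shape of the structure):

        from collections import defaultdict

        def load_graph(path):
            """Directed adjacency list: node -> set of nodes it points to."""
            graph = defaultdict(set)
            with open(path) as f:
                for line in f:  # one "source target" pair per line (assumed format)
                    src, dst = line.split()
                    graph[src].add(dst)
            return graph

        # memory grows with the number of edges, not |V|^2
        g = load_graph("friends.txt")
        print(sum(len(v) for v in g.values()), "edges")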

  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem: I need to store a huge amount of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combination of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (N×N×N×N) of double-precision floats (8 bytes). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so this is no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
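
    One middle ground between all-on-disk and all-in-RAM, sketched here, is a memory-mapped array: the OS pages data in and out on demand, and the 4D array can be indexed as if it were in memory. In Python/NumPy this is numpy.memmap; N = 250 is only an example size (250^4 doubles ≈ 31 GB):

        import numpy as np

        N = 250  # example size: 250**4 doubles ~= 31 GB on disk

        # Create (or reopen with mode="r+") a disk-backed 4D array; only the
        # pages actually touched are read into RAM.
        data = np.memmap("array4d.dat", dtype=np.float64, mode="w+",
                         shape=(N, N, N, N))

        data[0, 1, 2, 3] = 42.0   # random access works like a normal ndarray
        block = data[10]          # a 3D slice, loaded lazily
        data.flush()              # push dirty pages back to disk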

  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut-and-paste and replace operations on some columns. The data is pretty well organized; the only problem is I cannot play with it in Excel because of the size (2000 rows, 550000 columns). Here is some part of the data:

        ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
        D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
        D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
        D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to:

        1. remove the 4th, 5th, 6th, 7th, 8th and 9th columns;
        2. find every _ character from column 10 onwards and replace it with a space character;
        3. replace every ? with zero (0);
        4. replace every comma with a tab;
        5. remove the first row (the one that has the column names);
        6. replace every 0 with 1, every 1 with 2 and every ? with 0 in the 2nd column;
        7. replace F with 2, M with 1 and ? with 0 in the 3rd column;

    so that in the resulting file the output reads:

        D0024949 1 2 A A A A G G G G
        D0024302 1 2 A A G G A G 0 0
        D0023151 1 2 A A G G G G G G

    (both input and output should read one line per row, no extra blank rows). Is there a memory-efficient way of doing this with Java (and I need code to do it), or a usable tool for playing with such large data so that I can easily apply Excel-like functionality?
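
    A sketch of the streaming approach (in Python for brevity; the same one-line-at-a-time structure carries over directly to Java's BufferedReader/BufferedWriter). Memory use stays flat because only one line is held at a time; the numbered comments refer to the operations above, and the file names are placeholders:

        recode_affection = {"0": "1", "1": "2", "?": "0"}   # 6. for column 2
        recode_sex = {"F": "2", "M": "1", "?": "0"}         # 7. for column 3

        with open("input.csv") as src, open("output.txt", "w") as dst:
            next(src)  # 5. drop the header row
            for line in src:
                cols = line.rstrip("\n").split(",")
                cols[3:9] = []  # 1. remove columns 4-9
                cols[1] = recode_affection.get(cols[1], cols[1])  # 6.
                cols[2] = recode_sex.get(cols[2], cols[2])        # 7.
                # 2./3. from original column 10 (now index 3): _ -> space, ? -> 0
                cols[3:] = [c.replace("_", " ").replace("?", "0")
                            for c in cols[3:]]
                dst.write("\t".join(cols) + "\n")  # 4. comma -> tab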

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, a lot of times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do random reads on such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with memory-mapped files: I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
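
    One point worth illustrating: memory-mapping does not read the whole 800 MB into RAM - the OS faults in only the pages actually touched, which suits uniformly distributed 32-bit reads well. A sketch of the idea (in Python; Java's FileChannel.map with a MappedByteBuffer behaves the same way; the file name and offsets here are placeholders):

        import mmap
        import struct

        with open("table.bin", "rb") as f:
            # length 0 maps the whole file, but lazily: pages load on first access
            mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

            def read_u32(byte_offset):
                """Random 32-bit read at an arbitrary offset (little-endian assumed)."""
                return struct.unpack_from("<I", mm, byte_offset)[0]

            print(read_u32(0))
            print(read_u32(1024))
            mm.close()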

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes more. Looking at the Task Manager, the memory usage of IE jumps from 50 MB to 400 MB, and sometimes to 1.5 GB. If the process doesn't take that much, the layout is rendered properly and the memory falls back to 50 MB; otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem, or does anybody have a solution to this tricky issue?

  • How to transform huge xml files in java?

    - by fx42
    As the title says, I have a huge XML file (GBs):

        <root>
          <keep>
            <stuff> ... </stuff>
            <morestuff> ... </morestuff>
          </keep>
          <discard>
            <stuff> ... </stuff>
            <morestuff> ... </morestuff>
          </discard>
        </root>

    and I'd like to transform it into a much smaller one which retains only a few of the elements. My parser should do the following:

        1. Parse through the file until a relevant element starts.
        2. Copy the whole relevant element (with children) to the output file.
        3. Go to step 1.

    Step 1 is easy with SAX and impossible for DOM parsers. Step 2 is annoying with SAX, but easy with a DOM parser or XSLT. So: is there a neat way to combine a SAX parser and a DOM parser to do the task?
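
    This parse-until-relevant / copy-the-subtree combination is what streaming pull parsers are designed for; in Java, one route is a StAX reader wrapped in a StAXSource and fed to a Transformer once per relevant element. A compact sketch of the same technique using Python's iterparse, which frees each subtree after deciding to keep or discard it (the tag names match the example above; treat it as a shape, not a drop-in):

        import xml.etree.ElementTree as ET

        KEEP = {"keep"}  # tags of the subtrees to copy through (assumption)

        with open("small.xml", "wb") as out:
            out.write(b"<root>")
            # iterparse streams the document; only one subtree is in memory at a time
            for event, elem in ET.iterparse("huge.xml", events=("end",)):
                if elem.tag in KEEP:
                    out.write(ET.tostring(elem))  # copy the element with children
                    elem.clear()                  # release the subtree's memory
                elif elem.tag == "discard":
                    elem.clear()                  # drop unwanted subtrees as they end
            out.write(b"</root>")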

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1 GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory, but I thought for a file of 500 MB I should still be able to do it in memory, since I have 1 GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1 GB of RAM I'm not able to read the 500 MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile = open("links.csv", "r")
        edges = []
        count = 0
        # count the total number of lines in the file
        for line in infile:
            count = count + 1

        total = count
        print "Total number of lines: ", total

        infile.seek(0)
        count = 0
        for line in infile:
            edge = tuple(map(int, line.strip().split(",")))
            edges.append(edge)
            count = count + 1
            # for every million lines print memory consumption
            if count % 1000000 == 0:
                print "Position: ", edge
                print "Read ", float(count) / float(total) * 100, "%."
                mem = sys.getsizeof(edges)
                for edge in edges:
                    mem = mem + sys.getsizeof(edge)
                    for node in edge:
                        mem = mem + sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500 MB file, Python consumes 500 MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory-efficient. Is there a better way to do it, so that I can read my 500 MB file into my 1 GB of memory?
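
    For scale: in CPython 2.x each small int object costs roughly 24 bytes and each 2-tuple adds dozens more, so ~30 million rows overshoot 1 GB even though the raw numbers need only ~245 MB (30.6M rows × 2 × 4 bytes). A sketch of one compact alternative, storing the data as a flat int32 NumPy array instead of ~30 million tuple and int objects (NumPy is an added dependency here, and loadtxt is not the fastest parser):

        import numpy as np

        # two int32 columns: 8 bytes per row instead of ~100+ for tuples of ints
        data = np.loadtxt("links.csv", dtype=np.int32, delimiter=",")
        data = data[data[:, 1].argsort()]  # sort rows by the second column
        print data[:10]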

  • Getting huge lags when downloading files via Service

    - by Copa
    I have a Service which receives URLs to download. The Service then downloads these URLs and saves them to a file on the SD card. When I put more than 2 items in the download queue, my device becomes unusable - it nearly freezes, with huge lags and so on. Any idea? Source code:

        private static boolean downloading = false;
        private static ArrayList<DownloadItem> downloads = new ArrayList<DownloadItem>();

        /**
         * called once when the service started
         */
        public static void start() {
            Thread thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    while (true) {
                        if (downloads.size() > 0 && !downloading) {
                            downloading = true;
                            DownloadItem item = downloads.get(0);
                            downloadSingleFile(item.getUrl(), item.getFile());
                        }
                    }
                }
            });
            thread.start();
        }

        public static void addDownload(DownloadItem item) {
            downloads.add(item);
        }

        private static void downloadSuccessfullFinished() {
            if (downloads.size() > 0)
                downloads.get(0).setDownloaded(true);
            downloadFinished();
        }

        private static void downloadFinished() {
            // remove the first entry; it has been downloaded
            if (downloads.size() > 0)
                downloads.remove(0);
            downloading = false;
        }

        private static void downloadSingleFile(String url, File output) {
            final int maxBufferSize = 4096;
            HttpResponse response = null;
            try {
                response = new DefaultHttpClient().execute(new HttpGet(url));
                if (response != null && response.getStatusLine().getStatusCode() == 200) {
                    // request is ok
                    InputStream is = response.getEntity().getContent();
                    BufferedInputStream bis = new BufferedInputStream(is);
                    RandomAccessFile raf = new RandomAccessFile(output, "rw");
                    ByteArrayBuffer baf = new ByteArrayBuffer(maxBufferSize);
                    long current = 0;
                    long i = 0;
                    // read and write 4096 bytes each time
                    while ((current = bis.read()) != -1) {
                        baf.append((byte) current);
                        if (++i == maxBufferSize) {
                            raf.write(baf.toByteArray());
                            baf = new ByteArrayBuffer(maxBufferSize);
                            i = 0;
                        }
                    }
                    if (i > 0) // write the last bytes to the file
                        raf.write(baf.toByteArray());
                    baf.clear();
                    raf.close();
                    bis.close();
                    is.close();
                    // download finished; start the next download
                    downloadSuccessfullFinished();
                    return;
                }
            } catch (Exception e) {
                // not successfully downloaded
                downloadFinished();
                return;
            }
            // not successfully downloaded
            downloadFinished();
            return;
        }

  • Editing a 9gb .sql file

    - by CERIQ
    Hi. I've got a "slightly" large SQL script saved as a text file. It totals in at 8.92 GB, so it's a bit of a beast. I've got to do some search-and-replaces in this file (specifically, change all NOT NULL to NULL, so all fields are nullable) and then execute the darned thing. Does anyone have any suggestions for a text editor that would be capable of this? The other way I can see to solve the problem is to write a program that reads a chunk, does a replace on the stuff I need, and then saves it to a new file, but I'd rather use some standard way of doing this. It also does not solve the problem of opening the beast up in SQL Server Management Studio to execute it... Any ideas? Thanks, Eric
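
    If no editor cooperates, the chunk-reading program is only a few lines when done per line, and memory use stays constant regardless of file size. A minimal sketch in Python (file names are placeholders), with the caveat that a bare string replace will also hit any literal "NOT NULL" occurring inside data; the result can then be executed with the sqlcmd command-line tool instead of opening it in Management Studio:

        # Stream the script line by line; only one line is ever in memory.
        with open("script.sql") as src, open("script_nullable.sql", "w") as dst:
            for line in src:
                dst.write(line.replace("NOT NULL", "NULL"))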
