Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 327/555

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to a point. Earlier I got a StackOverflowError, but now, after solving that first problem, I get a java.lang.OutOfMemoryError: Java heap space error. My code (partly taken from Justin Kramer's answer) looks like this:

        (defn process-pages [page]
          (let [title     (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))

    I don't show the article-title, revision-user and revision-timestamp functions, because they simply take data from a specific place in the page or revision hash. Could anyone help me with this? I'm really new to Clojure and don't get the problem.
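
    A minimal sketch of one way to attack this, assuming the poster's article-title, revision-user and revision-timestamp helpers stay unchanged: both map and for are lazy, so nothing in the code above forces the work page by page, and realized pages can end up retained; replacing them with doseq processes each page eagerly and lets it be collected, while a single open writer avoids re-opening the CSV for every revision.

        ;; Hedged sketch, not the poster's actual fix: eager doseq instead of
        ;; lazy map/for, one shared writer instead of repeated spit calls.
        (require '[clojure.data.xml :as data.xml]
                 '[clojure.java.io :as io])

        (defn process-pages [writer page]
          (let [title     (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (doseq [revision revisions]
              (.write writer (str "\"" (revision-timestamp revision) "\";\""
                                  (revision-user revision) "\";\"" title "\"\n")))))

        (defn open-file [file-name]
          (with-open [rdr (io/reader file-name)
                      out (io/writer "files/data.csv" :append true)]
            (doseq [page (filter #(= :page (:tag %))
                                 (:content (data.xml/parse rdr :coalescing false)))]
              (process-pages out page))))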


  • Rails CSV import, adding to a related table

    - by Jack
    Hi, I have a CSV importing system on my app (used locally only) which parses the CSV file line by line and adds the data to the database table. This is based on a tutorial here.

        require 'csv'

        def csv_import
          @parsed_file = CSV::Reader.parse(params[:dump][:file])
          n = 0
          @parsed_file.each_with_index do |row, i|
            next if i == 0 # ignore the first row
            course = Course.new
            course.title       = row[0]
            course.unit_code   = row[1]
            course.course_type = row[2]
            course.value       = row[3]
            course.pass_mark   = row[4]
            if course.save
              n = n + 1
              GC.start if n % 50 == 0
            end
            flash.now[:message] = "CSV Import Successful, #{n} new courses added to the database."
          end
          redirect_to(courses_url)
        end

    This is all in the courses controller and works fine. There is a relationship where courses HABTM years and years HABTM courses. In the CSV file (effectively in row[5] to row[8]) are the year_ids. Is there a way that I can add this within the method above? I am confused as to how to loop over the 4 items and add them to the courses_years table. Thank you Jack
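
    A hedged sketch of one possible addition, assuming columns 5 to 8 hold Year ids (possibly blank) and the Year model exists as described; it slots into the loop just before the existing if course.save block, so that saving the course also writes the courses_years join rows.

        # Sketch only - column positions and blank handling are assumptions.
        year_ids = row[5..8].to_a.reject { |id| id.nil? || id.to_s.strip.empty? }
        course.years = Year.find(year_ids) unless year_ids.empty?

        if course.save
          n = n + 1
          GC.start if n % 50 == 0
        end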


  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two DBs when one of the two is changed (normally I would change the local copy). Right now what I'm doing is using mysqldump to dump the modified tables and then import them into the remote DB, or do it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised? Thanks nico PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics, but couldn't really find an answer.
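
    A hedged example of scripting the structure-only part of this with mysqldump, with all host names and credentials made up: --no-data exports only the CREATE statements, which can then be replayed against the remote server. Note that by default the dump also contains DROP TABLE statements, so replaying it drops and recreates the remote tables (losing their contents); it mirrors the manual workflow described above rather than computing a diff.

        # Sketch only - database names, hosts and users are placeholders.
        mysqldump --no-data -u localuser -p local_db > schema.sql
        mysql -h remote.example.com -u remoteuser -p remote_db < schema.sql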


  • Is the first persistence of an Entity Data Model in EF 4.0 slower due to the connection cost?

    - by Scott Davies
    Hi, I've got a console app written that persists an object graph via Entity Framework 4.0. I loop through this to dump the execution times for each persistence. The first persistence is always the largest. Is this due to EF making the initial connection to the database and/or JIT'ing? Here's a sample of the output:

        Persisted graph in 3318 milliseconds.
        Persisted graph in 25 milliseconds.
        Persisted graph in 26 milliseconds.
        Persisted graph in 22 milliseconds.

    Thanks, Scott


  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by users into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ... // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
                                            Object command, BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

    "java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms."

    Now, apparently Blob cannot be used, as it is not a good idea anyway; what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks
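
    A rough sketch of the streaming alternative, with the caveats from the quote above still applying, and assuming a Hibernate version whose Session exposes getLobHelper() (3.6 or later): the upload's InputStream is handed to the driver as a java.sql.Blob instead of being materialised as a byte[] first, and the Picture field is assumed to be changed from byte[] to Blob for this sketch.

        // Hedged sketch only - Picture is the poster's entity, here assumed to
        // declare "@Lob private java.sql.Blob data;" instead of byte[] data.
        import java.io.IOException;
        import java.sql.Blob;
        import org.hibernate.Session;
        import org.springframework.web.multipart.MultipartFile;

        public class PictureStreamingSaver {
            public void save(Session session, MultipartFile file) throws IOException {
                // Wrap the upload stream as a Blob without reading it into memory.
                Blob blob = session.getLobHelper()
                                   .createBlob(file.getInputStream(), file.getSize());
                Picture p = new Picture();
                p.setData(blob);          // field type changed to Blob for this sketch
                session.saveOrUpdate(p);  // within an open transaction, as before
            }
        }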


  • T-SQL query with date range

    - by Moo
    Hi, I have a fairly weird 'bug' with a simple query, and I vaguely remember reading the reason for it somewhere a long time ago but would love someone to refresh my memory. The table is a basic ID, Datetime table. The query is:

        select ID, Datetime
        from Table
        where Datetime <= '2010-03-31 23:59:59'

    The problem is that the query results include rows where the Datetime is '2010-04-01 00:00:00' - the next day. Which it shouldn't. Anyone? Cheers Moo
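
    For what it's worth, a hedged guess at the usual explanation, assuming the column (or the literal's conversion) is smalldatetime: smalldatetime has minute precision, so '2010-03-31 23:59:59' is rounded up to '2010-04-01 00:00:00' before the comparison runs. An exclusive upper bound sidesteps the rounding entirely:

        -- Sketch of the usual workaround: compare against the start of the
        -- next day with < instead of an end-of-day literal with <=.
        select ID, Datetime
        from Table
        where Datetime < '2010-04-01'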


  • Built-in Analyzer in Xcode 3.1.4

    - by Mustafa
    Hi all, I wonder if the built-in analyzer in Xcode 3.1.4 makes it redundant to use the LLVM/Clang Static Analyzer separately? Please refer to the original article here: Finding memory leaks with the LLVM/Clang Static Analyzer. Thanks.


  • Can managed and unmanaged C++/MFC be mixed in one dll?

    - by Walter Williams
    Previously we had software in MFC (VC6), VB6 and C# applications that needed to call the same engine written in C++ (and MFC). The engine required C++ for speed. At the time we decided to use COM as the interface because all three could use it with the least issues in marshalling, etc. Our MFC application is now deprecated and we have recently decided to dump VB6, so what we've got left is C#. We can just leave the COM engine as-is, but it would be nice to get away from COM registration, etc., and have a managed interface to work with. COM registration occasionally causes support issues if there is something wrong with the person's machine. Is it possible to have a dll with the existing unmanaged C++/MFC, and a .NET front end interface?
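
    If the question is whether a single DLL can contain both, the usual pattern (sketched below with made-up names) is a mixed-mode C++/CLI assembly compiled with /clr: the existing native C++/MFC engine stays as it is, and a managed ref class wraps it so the C# code can reference the DLL directly, with no COM registration involved.

        // Hedged illustration - NativeEngine stands in for the existing engine.
        #pragma unmanaged
        class NativeEngine {                     // existing unmanaged C++ code
        public:
            int Run(int input) { return input * 2; }
        };

        #pragma managed
        public ref class EngineWrapper {         // what the C# code sees
        public:
            EngineWrapper() : engine(new NativeEngine()) {}
            ~EngineWrapper() { delete engine; }  // surfaces as IDisposable.Dispose in .NET
            int Run(int input) { return engine->Run(input); }
        private:
            NativeEngine* engine;                // native pointer members are allowed in ref classes
        };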


  • How can the user change the JRE parameter values after the exe is generated with Launch4j?

    - by Wing C. Chen
    Is it possible to change the JRE parameter values after the exe file is generated through Launch4j? The ideal scenario is like this: the default parameter values are applied when the program is started. However, when the user wants to change some JRE parameter values, he goes to a .ini file, MyProgram.ini for example, changes the values there, and the new values will be applied the next time the program is started. I think Eclipse uses the same approach for its memory and some other parameter settings.
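
    If memory serves (worth verifying against the Launch4j documentation for the version in use), Launch4j already supports roughly this: extra JVM options can be read from an ini file named after the executable with an .l4j.ini suffix, placed next to the exe, one option per line - much like eclipse.ini. A hypothetical MyProgram.l4j.ini might look like:

        -Xms128m
        -Xmx512m
        -Dmyprogram.config=custom.properties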


  • What does the LAME text do in an MP3 file?

    - by Dims
    I see here http://en.wikipedia.org/wiki/MP3 that an MP3 file consists of MP3 headers interleaved with MP3 data. An MP3 header consists of a few bytes. But here is my MP3 file dump with the ID3 tag cut off. The header is highlighted in blue. You can see that the "LAME3.96" text is highlighted in green. What is it doing there? Is this a part of the MP3 elementary stream? Or is this part of some headers I didn't identify?


  • Filtering Attributes with Weka

    - by hrzafer
    Hi everyone! I have a simple question about filtering attributes in WEKA. Let's say I have 500 attributes, 30 classes and 100 samples for each class, which equals 3000 rows and 500 columns. This causes time and memory problems, as you can guess. How do I filter attributes that occur only once or twice (or n times) in the 3000 rows? And is it a good idea? Thank you


  • Summary of the last decade of garbage collection?

    - by Ben Karel
    I've been reading through the Jones & Lins book on garbage collection, which was published in 1996. Obviously, the computing world has changed dramatically since then: multicore, out-of-order chips with large caches, and even larger main memory in desktops. The world has also more-or-less settled on the x86 and ARM architectures for most consumer-facing systems. How has the field of garbage collection changed since the seminal book was published?


  • Determine compile options from load module - IBM Enterprise COBOL

    - by NealB
    How can I determine the compile options used to compile an IBM Enterprise COBOL program by looking at the load module? When a dump is issued they are listed as follows:

        Compile Options for PROGXX:
          ADV, ARITH(COMPAT), AWO, NOCICS, CODEPAGE(01140), DATA(31), NODATEPROC,
          NODBCS, NODLL, NODYNAM, NOEXPORTALL, NOFASTSRT, INTDATE(LILIAN),
          NUMPROC(NOPFD), NOOPTIMIZE, OUTDD(SYSOUT), PGMNAME(COMPAT), RENT, RMODE(AN
          NOSQL, SQLCCSID, SSRANGE, NOTEST, NOTHREAD, TRUNC(OPT), XMLPARSE(XMLSS),
          YEARWINDOW(1900), ZWB

    so I presume they must be tucked away somewhere in the load module. I want to scan a load library, checking that each load module was compiled with some specific options, to ensure compliance with shop standards (e.g. SSRANGE). Any ideas would be appreciated.


  • Good Postgres graphical client for Windows

    - by alex
    The name pretty much says it all. Right now I'm using Squirrel - it crashes frequently and suffers from memory problems (I've tried increasing the heap size). I don't need anything particularly fancy or full-featured - just something that won't take up 2.4 GB of RAM to store a 1.5 million line, 8 column result set.


  • BigInteger.pow(BigInteger)?

    - by PeterW
    I'm playing with numbers in Java, and want to see how big a number I can make. It is my understanding that BigInteger can hold a number of arbitrary size, so long as my computer has enough memory to hold such a number, correct? My problem is that BigInteger.pow accepts only an int, not another BigInteger, which means I can only use a number up to 2,147,483,647 as the exponent. Is it possible to use the BigInteger class as such? BigInteger.pow(BigInteger) Thanks.
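
    There is no such overload in the JDK - only pow(int) and, for modular arithmetic, modPow(BigInteger, BigInteger) - but as an illustration of what it would have to do, here is a small square-and-multiply sketch that walks the exponent's bits. For genuinely huge exponents the result would not fit in memory regardless, which is presumably why the API stops at int.

        // Hedged sketch, not a JDK method: BigInteger exponentiation by squaring.
        import java.math.BigInteger;

        public final class BigPow {
            public static BigInteger pow(BigInteger base, BigInteger exponent) {
                if (exponent.signum() < 0) {
                    throw new ArithmeticException("negative exponent");
                }
                BigInteger result = BigInteger.ONE;
                BigInteger square = base;
                for (int i = 0; i < exponent.bitLength(); i++) {
                    if (exponent.testBit(i)) {
                        result = result.multiply(square); // this exponent bit is set: fold it in
                    }
                    square = square.multiply(square);     // base^(2^(i+1)) for the next bit
                }
                return result;
            }
        }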


  • How do you drop in substitute JRE classes?

    - by evilfred
    Hi, java.util.zip has well-known problems with native memory usage, so I'm trying to use a drop-in replacement called "jazzlib". Unfortunately, as is typical for SourceForge projects, there is no documentation. If I add the jar to my classpath then Java freaks out and gives me "prohibited package name" errors because it replaced java.util.zip. How do I tell Java that this is what I want it to do?
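
    One hedged note, to be checked against the jazzlib build actually in use: classes in java.* packages are rejected when loaded from the ordinary classpath, so a build of jazzlib that keeps the java.util.zip package names would have to be prepended to the boot class path instead (the flag below exists on Java 8 and earlier); builds that use jazzlib's own net.sf.jazzlib package avoid the error entirely, at the cost of changing imports.

        # Sketch only - jar and class names are placeholders.
        java -Xbootclasspath/p:jazzlib.jar -cp myapp.jar com.example.Main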


  • Keeping a database structure up to date in a project where code is on subversion?

    - by Bruno De Barros
    I have been working with Subversion for a while now, and it's been incredible for the management of my projects, and even helps with deployment to several different servers, but there is one thing that still annoys me. Whenever I make any changes to the database structure, I need to update every server manually and keep track of any changes I made; and because some of my servers run branches of the project (modifications that are still being worked on, or were made for different purposes), it's a bit awkward. Until now, I've been using a "database.sql" file, which is a dump of the database structure for a specific revision. But it just seems like such a bad way to manage this. And I was wondering, how does everyone else manage their MySQL databases when they're working on a project and using Subversion?


  • What's holding up my PHP script?

    - by gAMBOOKa
    We've got a PHP crawler running on our web server. When the crawler is running, there are no CPU, memory or network bandwidth spikes. Everything is normal. But our website (also PHP), hosted on the same server, stops responding. Basically the crawler blocks any other PHP script from running. What could be the problem? EDIT: fsockopen is being used to download files in the crawler!


  • Why should I use EJB?

    - by Nitesh Panchal
    Hello, as we all know, EJBs in 3.0 or 3.1 are simple POJOs. You just need to add annotations here and there and a simple class gets converted into an EJB. So, now my question is: why should I use EJBs at all? Can I not do without them? In .NET I created a class library and got things done. I never felt the need for anything like EJB. Simple classes were enough. Then why do people in Java put so much stress on EJB? What is the difference between a simple POJO and an EJB in terms of execution and memory? Further, which functions should I write in an EJB and which should I write in a simple class? Should I dump every function into EJBs only, or is there some kind of strategy? Does EJB provide anything special?
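
    For concreteness, a small illustration rather than a recommendation, with the class and entity names made up: the POJO below only becomes an EJB because of the @Stateless annotation, and what that buys over a plain class is container-managed services - instance pooling, dependency injection, and a transaction opened and committed around each business method by default - with security, remoting and timers available through further annotations.

        // Hedged sketch - OrderService and Order are hypothetical names.
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless
        public class OrderService {
            @PersistenceContext
            private EntityManager em;          // injected by the container

            // Runs in a container-managed transaction (REQUIRED by default).
            public void placeOrder(Order order) {
                em.persist(order);
            }
        }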


  • How can I translate my programmatic WCF configuration into app.config

    - by ofer
    Hi, I have a self-hosted WCF server with hard-coded configuration. The server worked fine until I tried to implement some new functionality. The new setting did not work (urrr....) and I find it hard to locate where the problems are in my code. Instead of digging inside the code, I thought about a different approach: is there any way to dump that hard-coded WCF configuration (the entire configuration) into an app.config-like text file after all configurations are loaded? This would give me an easy global view of the entire settings. By the way, does anyone know a way to do the translation in the opposite direction, config to code? Any advice will be welcomed! ofer


  • Lightweight publish/subscribe framework in Java

    - by mdma
    Is there a good lightweight framework for Java that provides the publish/subscribe pattern? Some ideal features:
      - Support for generics
      - Registration of multiple subscribers to a publisher
      - API primarily interfaces, with some useful implementations
      - Purely in-memory; persistence and transaction guarantees not required
    I know about JMS but that is overkill for my need. The published/subscribed data are the results of scans of a file system, with scan results being fed to another component for processing, which are then processed before being fed to another and so on.
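
    As a strawman for what such an API might look like (not a specific library - Guava's EventBus is one ready-made option in this space), here is a bare-bones in-memory sketch with generics and multiple subscribers per publisher; all names below are made up.

        // Hedged sketch of the pattern, not a framework recommendation.
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        interface Subscriber<T> {
            void onMessage(T message);
        }

        final class Publisher<T> {
            private final List<Subscriber<T>> subscribers = new CopyOnWriteArrayList<>();

            void subscribe(Subscriber<T> subscriber) {
                subscribers.add(subscriber);
            }

            void publish(T message) {
                for (Subscriber<T> subscriber : subscribers) {
                    subscriber.onMessage(message);   // deliver to every registered subscriber
                }
            }
        }

        // Usage in the scanning scenario: the file-system scanner publishes,
        // the processing component subscribes.
        //   Publisher<ScanResult> scans = new Publisher<>();
        //   scans.subscribe(result -> process(result));
        //   scans.publish(new ScanResult(...));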

