Search Results

Search found 18409 results on 737 pages for 'large projects'.

Page 65/737

  • importing a large txt file in MySQL?

    - by Taz
    Hi, I am loading text data into MySQL using the following command:

        mysql> LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\instance_types_en.nt\\Copy of instance_types_en.txt' INTO TABLE dbpediaentities.resources FIELDS TERMINATED BY ' ' LINES TERMINATED BY '\r\n';

    The data looks like this (there is actually a newline after each '.'): <a> <b> <c> . <a> <b> <c> . <a> <b> <c> . <a> <b> <c> . <a> <b> <c> . <a> <b> <c> . The table has an auto-increment ID field and then text fields for all three values. The file size is about 750MB. The problems are: 1. the data appears to land in the first text field, and 2. only 2MB of data is imported.
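
    A hedged reading of the symptoms, not a verified fix: if the file was produced on a Unix system its lines end in '\n', so terminating lines on '\r\n' makes MySQL treat the whole file as one enormous row, which would explain both the data landing in the first text field and the import stopping early. A sketch with that one change, using hypothetical column names (subject, predicate, object) so the auto-increment ID column is skipped:

        LOAD DATA LOCAL INFILE 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\instance_types_en.nt\\Copy of instance_types_en.txt'
        INTO TABLE dbpediaentities.resources
        FIELDS TERMINATED BY ' '
        LINES TERMINATED BY '\n'        -- try '\n' if '\r\n' imports only a fraction of the file
        (subject, predicate, object);   -- hypothetical names for the three text fields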


  • Make process crash on large memory allocation

    - by Pieter
    I'm trying to find a significant memory leak (15MB at a time, with allocations like this in multiple places). I checked the most obvious places, and then used AQTime, but I still can't pinpoint it. Now I see 2 options left: 1) Use SetProcessWorkingSetSize. I've tried this, but my process happily keeps on running when using up more than 150MB:

        DWORD MemorySize = 150*1024*1024;
        SetProcessWorkingSetSize( GetCurrentProcess(), MemorySize/2, MemorySize*2 );

    2) Put a breakpoint on any allocation of more than 1MB at a time. How should I do this: overload operator new with an 'if size > 1MB' check inside?
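
    A sketch of option 2, assuming a Windows/MSVC build (the question already uses DWORD and SetProcessWorkingSetSize): replace the global operator new and break into the debugger whenever a single allocation exceeds a threshold, so the call stack of the 15MB allocations becomes visible.

        #include <cstdlib>
        #include <intrin.h>   // __debugbreak, an MSVC intrinsic
        #include <new>

        static const std::size_t kBreakThreshold = 1024 * 1024;   // 1 MB

        void* operator new(std::size_t size)
        {
            if (size > kBreakThreshold)
                __debugbreak();   // the debugger stops here with the full call stack

            if (void* p = std::malloc(size))
                return p;
            throw std::bad_alloc();
        }

        void operator delete(void* p) throw()
        {
            std::free(p);
        }

    operator new[] and operator delete[] would need the same treatment to catch array allocations.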


  • Using Large Lists

    - by cam
    In an Outlook AddIn I'm working on, I use a list to grab all the messages in the current folder, then process them, then save them. First, I create a list of all messages, then I create another list from the list of messages, then finally I create a third list of messages that need to be moved. Essentially, they are all copies of each other, and I made it this way to keep things organized. Would it increase performance if I used only one list? I thought lists were just references to the actual items.
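
    For what it's worth, a small standalone sketch (not Outlook-specific) of why the three-list approach is usually cheap: List<T> stores references, so copying a list duplicates pointers, not the messages themselves.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Message { public bool NeedsMove; }

        class Program
        {
            static void Main()
            {
                var all = new List<Message> { new Message(), new Message { NeedsMove = true } };
                var processed = new List<Message>(all);                   // new list, same objects
                var toMove = processed.Where(m => m.NeedsMove).ToList();  // still the same objects

                Console.WriteLine(ReferenceEquals(all[1], toMove[0]));    // True: nothing was copied
            }
        }

    Each extra list costs one reference per element, so the main price of three lists is bookkeeping, not memory.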


  • C++ a class with an array of structs, without knowing how large an array I need

    - by Dominic Bou-Samra
    New to C++, and for that matter OO programming. I have a class with fields like firstname, age, school etc. I need to be able to store other information, like for instance where they have travelled and what year it was. I cannot declare another class specifically to hold travelDestination and year, so I think a struct might be best. This is just an example:

        struct travel {
            string travelDest;
            string year;
        };

    The issue is people are likely to have travelled different amounts. I was thinking of just having an array of travel structs to hold the data. But how do I create a fixed-size array to hold them without knowing how big it needs to be? Perhaps I am going about this the completely wrong way, so any suggestions as to a better way would be appreciated.
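
    A sketch of the usual C++ answer: skip the fixed-size array and hold the structs in a std::vector, which grows as trips are added (the class name and members here are illustrative).

        #include <string>
        #include <vector>

        struct travel {
            std::string travelDest;
            std::string year;
        };

        class Person {
        public:
            void addTravel(const std::string& dest, const std::string& year) {
                travels.push_back(travel{dest, year});
            }
        private:
            std::string firstname;
            int age = 0;
            std::vector<travel> travels;   // one entry per trip, however many there turn out to be
        };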


  • Recommendations on Trimming Large Amounts of Text from a DOM Object

    - by aronchick
    I'm doing some in-browser editing, and I have some content that's on the order of 20k characters long in a <pre>. So it looks something like:

        <pre>
        Text 1
        Text 2
        Text 3
        Text 4
        [...]
        Text 20,000
        </pre>

    I'd like to use jQuery to trim it down when someone hits a button to chop, but I'm having trouble doing it without overloading the browser. Assume I know that the characters to trim are at 16,510 - 17,888. I was using:

        jQuery('#textsection').html(jQuery('textarea').html().substr(range.start));

    But browsers seem to enjoy crashing when I do this. Alternatives?
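
    A sketch of one alternative, assuming the <pre> holds a single text node: splice the node's character data directly instead of re-parsing ~20k characters of HTML (the element id and offsets are taken from the question).

        // Remove characters 16,510 - 17,888 without an HTML round-trip.
        var pre = document.getElementById('textsection');
        var node = pre.firstChild;   // assumes exactly one text node inside the <pre>
        node.data = node.data.slice(0, 16510) + node.data.slice(17888);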


  • ZipArchive memory problems on iPhone for large archive

    - by Mithin
    Hi, I am trying to compress multiple files into a single zip archive, and I am running into low-memory warnings. Since the complete zip file is loaded into memory, I guess that's the problem. Is there a way I can manage the compression/decompression better using ZipArchive, so that not all the data is in memory at once? Thanks!


  • Efficiently finding the shortest path in large graphs

    - by Björn Lindqvist
    I'm looking for a way to find, in real time, the shortest path between nodes in a huge graph. It has hundreds of thousands of vertices and millions of edges. I know this question has been asked before, and I guess the answer is to use a breadth-first search, but I'm more interested in knowing what software you can use to implement it. For example, it would be totally perfect if a library (with Python bindings!) already existed for performing BFS in undirected graphs.
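
    As a baseline, a minimal pure-Python BFS over an adjacency dict; at hundreds of thousands of vertices, a library with a C core and Python bindings (igraph, for instance) would likely be the practical choice.

        from collections import deque

        def shortest_path(graph, start, goal):
            """BFS shortest path; graph is {node: [neighbor, ...]} (undirected if symmetric)."""
            parents = {start: None}
            queue = deque([start])
            while queue:
                node = queue.popleft()
                if node == goal:
                    path = []                      # walk parent pointers back to the start
                    while node is not None:
                        path.append(node)
                        node = parents[node]
                    return path[::-1]
                for neighbor in graph[node]:
                    if neighbor not in parents:
                        parents[neighbor] = node
                        queue.append(neighbor)
            return None                            # no path exists

        # shortest_path({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}, 'a', 'c') -> ['a', 'b', 'c']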


  • What would you recommend for a large-scale Java data grid technology: Terracotta, GigaSpaces, Coherence?

    - by cliff.meyers
    I've been reading up on so-called "data grid" solutions for the Java platform, including Terracotta, GigaSpaces and Coherence. I was wondering if anyone has real-world experience working with any of these tools and could share it. I'm also really curious to know what scale of deployment people have worked with: are we talking 2-4 node clusters, or have you worked with anything significantly larger than that? I'm attracted to Terracotta because of its "drop-in" support for Hibernate and Spring, both of which we use heavily. I also like the idea of how it decorates bytecode based on configuration and doesn't require you to program against a "grid API." I'm not aware of any advantages to tools which take the approach of an explicit API, but would love to hear about them if they do in fact exist. :) I've also spent time reading about memcached, but am more interested in hearing feedback on these three specific solutions. I would be curious to hear how they measure up against memcached in the event someone has used both.


  • Standard Practice for Continuous Integration of Maven Multi-module projects

    - by James Kingsbery
    I checked around, and couldn't find a good answer to this: we have a multi-module Maven project which we want to continuously integrate. We thought of two strategies for handling this:

    1. Have our continuous integration server (TeamCity, in this case, but I've used others before and they seem to have the same issue) point to the aggregator POM file, and just build everything
    2. Have our continuous integration server point at each individual module

    Is there a standard, preferred practice for this? I've checked Stack Overflow, Google, the Continuous Integration book, and did not find anything, but maybe I missed it.
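
    For reference, a minimal sketch of the aggregator POM that option 1 points the CI server at (the module names are hypothetical); building it builds every module in dependency order.

        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>parent</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>pom</packaging>

          <!-- one build of this POM compiles, tests and packages every module -->
          <modules>
            <module>core</module>
            <module>web</module>
          </modules>
        </project>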


  • Learning Spring MVC for web projects

    - by John
    I have looked at Spring MVC a few times briefly and got the basic ideas. However, whenever I look closely, it seems to require that you already know a whole load of 'core Spring'. The book I have, for instance, has a few hundred pages before it gets onto Spring MVC, which seems a lot to wade through. I'm used to being able to jump in, but there's so much bean-related stuff and XML, it just looks like a mass of data to consume. Does it simplify if you put the time in, or is Spring just a much bigger framework than I thought? Is it possible to learn this side of it in isolation?


  • Using svn with xampp for php projects

    - by idrish
    I have heard a lot about version control and would like to work with it. I read some tutorials about it, however I am not quite sure how SVN works with XAMPP. I have installed SVN and TortoiseSVN and made the necessary changes in XAMPP. For instance, I copied the two required modules to c:/xampp/apache/modules and also made changes to the Apache conf file. Here are the changes made in c:/xampp/apache/conf/httpd.conf:

        # Configure Subversion repository
        <Location /svn>
          DAV svn
          SVNPath “C:\svn”
          AuthType Basic
          AuthName “Subversion repository”
          AuthUserFile “c:\svn_conf\passwd”
          Require valid-user
        </Location>

    I created the repository at c:/svn and also created the password file. However, when I visit http://localhost/svn I get a 404 page not found error. Where am I going wrong? What am I missing? Any pointers? Thanks in advance.
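
    A hedged guess from the symptoms alone: the SVNPath, AuthName and AuthUserFile values above are wrapped in curly quotes, which Apache does not parse, so they are worth retyping as straight ASCII quotes. Beyond that, a 404 at /svn often means the Subversion modules never actually loaded; httpd.conf should also contain LoadModule lines along these lines:

        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so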


  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when I get to array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, or alter it, or just find out what it is?
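
    A sketch of a lower-overhead variant, assuming the arrays hold scalar values (which array_flip requires): in_array() is a linear scan, making the manual diff O(n*m), while flipping the second array into a hash gives O(1) lookups. Raising memory_limit in php.ini is also worth checking, since scripts that exceed it can die with nothing on screen when display_errors is off.

        <?php
        function fast_array_diff(array $a, array $b) {
            $lookup = array_flip($b);          // value => index hash for O(1) membership tests
            $result = array();
            foreach ($a as $key => $value) {
                if (!isset($lookup[$value])) {
                    $result[$key] = $value;    // keep entries of $a that are absent from $b
                }
            }
            return $result;
        }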


  • Large Scale VHDL techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ, I don't know, something along these lines. Probably all only working with 16-bit operands. I might even cut it down and use only a single operand and an accumulator, with some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates, and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry-lookahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming the port maps, or processes.)
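
    A sketch of one organising convention, with hypothetical names: shared widths and opcodes live in a single package that every entity imports, and each instantiation gets a descriptive label, which is where naming pays off, since those labels are what simulators display in the design hierarchy.

        library ieee;
        use ieee.std_logic_1164.all;

        package cpu_pkg is
          constant WORD_WIDTH : natural := 16;
          subtype word_t is std_logic_vector(WORD_WIDTH - 1 downto 0);

          -- opcodes shared by the decoder and the ALU
          constant OP_ADD : std_logic_vector(3 downto 0) := "0000";
          constant OP_NEG : std_logic_vector(3 downto 0) := "0001";
        end package cpu_pkg;

        -- and inside an architecture, a labelled instantiation:
        --   alu0 : entity work.alu port map (a => reg_a, b => reg_b, op => op, y => result);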


  • Best language for scripting large scale file management

    - by Dan
    The National Park Service's Natural Sounds Program collects multiple terabytes of data each year measuring soundscapes. In your opinion, what is the best available scripting language for managing massive numbers of files and file types? We would like to easily design and run efficient, user-friendly scripts to search for and retrieve/create copies of files that may be located in different directories according to a single static hierarchy. The OS will most likely be Windows. Thanks!
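
    If Python were the pick, a minimal sketch of the kind of script described, walking a tree and copying matching files while preserving the layout (the paths and extension are hypothetical placeholders):

        import os
        import shutil

        SOURCE_ROOT = r"D:\soundscapes"   # hypothetical locations
        DEST_ROOT = r"D:\retrieved"

        def collect(extension):
            """Copy every file ending in `extension` into DEST_ROOT, keeping the directory layout."""
            for dirpath, _dirnames, filenames in os.walk(SOURCE_ROOT):
                for name in filenames:
                    if name.lower().endswith(extension):
                        rel = os.path.relpath(dirpath, SOURCE_ROOT)
                        target_dir = os.path.join(DEST_ROOT, rel)
                        os.makedirs(target_dir, exist_ok=True)
                        shutil.copy2(os.path.join(dirpath, name), os.path.join(target_dir, name))

        collect(".wav")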


  • How to open a large text file in C#

    - by desmati
    I have a text file that contains about 100000 articles. The structure of the file is:

        BEGIN OF FILE
        .Document ID 42944-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 1
        .Document ID 42945-YEAR:5
        .Date 03\08\11
        .Cat political
        Article Content 2
        END OF FILE

    I want to open this file in C# and process it line by line. I tried this code:

        String[] FileLines = File.ReadAllText(TB_SourceFile.Text).Split(Environment.NewLine.ToCharArray());

    But it says: Exception of type 'System.OutOfMemoryException' was thrown. The question is: how can I open this file and read it line by line? File size: 564 MB (591,886,626 bytes). File encoding: UTF-8. The file contains Unicode characters.
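
    A sketch of the streaming alternative: a StreamReader reads one line at a time, so memory stays flat no matter how large the file is (the path here is a hypothetical stand-in for TB_SourceFile.Text).

        using System;
        using System.IO;
        using System.Text;

        class LargeFileReader
        {
            static void Main()
            {
                using (var reader = new StreamReader(@"C:\data\articles.txt", Encoding.UTF8))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // process one article line at a time here
                    }
                }
            }
        }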


  • Efficient paging with large tables in sql 2008

    - by Kumar
    This is for tables with 1,000,000 rows, and possibly many, many more! I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
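
    For concreteness, a sketch of the ROW_NUMBER() pattern those articles describe, with placeholder table and column names; on SQL Server 2008 an index matching the ORDER BY column is what keeps the cost reasonable.

        -- fetch "page" rows 50,001-50,050
        WITH Numbered AS (
            SELECT  OrderID, CustomerID,
                    ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNum
            FROM    dbo.Orders
        )
        SELECT OrderID, CustomerID
        FROM   Numbered
        WHERE  RowNum BETWEEN 50001 AND 50050;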


  • What is the largest file size we can transfer through an AIR application?

    - by Naveen kumar
    Hi all, I'm trying to transfer large files (1GB+) using UDP (in packets) through an AIR application. I'm transferring a byteArray by taking chunks of packets from a FileStream. But it gives 'Error #1000: The system is out of memory' at the sender side after a certain number of packets have been sent, and by this time the downloaded file size at the server side is 256 MB. I tried with other files, but after downloading 256MB the sender gives the same error. Is it because of the file stream size? How can I solve this problem so that I can transfer files of GB size?


  • How do you handle large repeated UI elements with jQuery

    - by jpoz
    Howdy, here's the situation: you have a very complex UI element that is repeated in a list. Each has a menu on it, buttons, it hides and shows subelements, buttons for switching its state, etc., etc. The elements are populated via JSON, so you have to construct the elements and the functionality on the fly. What's the best way to accomplish this with jQuery? Where would you save the reusable template for the DOM structure? How would you add the behavior on: $().live? .livequery? onclick? Manually after every JSON get? I guess I just see a lot of people doing different things. What's your experience with performance? Any insight would be much appreciated. Thanks, JPoz
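
    A sketch of the delegation approach, with hypothetical markup: items from the JSON are stamped out of one template string, and a single handler bound to the container covers every current and future item, so nothing needs rebinding after each fetch.

        // assumes a container <div id="item-list"> and items built from this template
        var template = '<div class="item"><span class="title"></span>' +
                       '<button class="toggle">Toggle</button></div>';

        function render(items) {
            var $list = $('#item-list').empty();
            $.each(items, function (i, item) {
                $(template).find('.title').text(item.title).end().appendTo($list);
            });
        }

        // bound once; .delegate exists from jQuery 1.4.2 (.on('click', '.toggle', ...) in 1.7+)
        $('#item-list').delegate('.toggle', 'click', function () {
            $(this).closest('.item').toggleClass('open');
        });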


  • Supporting more than one codebase in ANSI-C

    - by Ilker Murat Karakas
    I am working on a project with an associated ANSI C code base (let me call this the 'main' codebase). I am now confronted with a typical problem (stated below), which I believe I could solve much more easily if I had an object-oriented language at hand. The problem is this: I will have to support more than one codebase, i.e. I will have to start supporting a parallel codebase (maybe even more in the future). The new (i.e. parallel) codebases will initially be identical to the old (i.e. 'main') codebase. As we are talking about the C language, I have so far been thinking of adding '#ifdef' statements to the code and writing the branch-specific code inside those '#ifdef' blocks. Hoping that I have made the problem clear (enough!), I would like to hear thoughts on clever patterns that would help me handle this problem elegantly in ANSI C. Cheers
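
    For illustration, a minimal sketch of the #ifdef approach (PRODUCT_A is a hypothetical macro passed on the compiler command line, e.g. cc -DPRODUCT_A): keeping each branch behind one function boundary stops the conditional blocks from spreading through the shared code.

        #include <stdio.h>

        #ifdef PRODUCT_A
        static const char *product_name(void) { return "Product A"; }
        #else
        static const char *product_name(void) { return "Product B"; }
        #endif

        int main(void)
        {
            printf("Built for: %s\n", product_name());
            return 0;
        }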


  • Web projects get auto checked out on build

    - by chugh97
    I am using VS2008 with TFS 2008, and I have a web application project which gets automatically checked out on build. How can this be avoided? I don't want to change my source control setting that checks files out automatically on edit. When I check the file in, it says the files are identical, no changes... Any pointers?


  • Moving Git Projects between computers

    - by 01
    I have a project that I use in two places (I don't use a git server). When I copy the project to the second place, I have to check in all the files even though they have not changed; git shows me, for example:

        @@ -1,8 +1,8 @@
        -#Sat Mar 06 19:39:27 CET 2010
        -eclipse.preferences.version=1
        -org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
        -org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.6
        -org.eclipse.jdt.core.compiler.compliance=1.6
        -org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
        -org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
        -org.eclipse.jdt.core.compiler.source=1.6
        +#Sat Mar 06 19:39:27 CET 2010
        +eclipse.preferences.version=1
        +org.eclipse.jdt.core.compiler.codegen.inlineJsrBytecode=enabled
        +org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.6
        +org.eclipse.jdt.core.compiler.compliance=1.6
        +org.eclipse.jdt.core.compiler.problem.assertIdentifier=error
        +org.eclipse.jdt.core.compiler.problem.enumIdentifier=error
        +org.eclipse.jdt.core.compiler.source=1.6

    I ran git config --global core.autocrlf false at both places, but it doesn't help with this problem.

