Search Results

Search found 18409 results on 737 pages for 'large projects'.

Page 70/737

  • Serving large generated files using Google App Engine?

    - by John Carter
    Hiya! Presently I have a GAE app that does some offline processing (backing up a user's data) and generates a file somewhere in the neighbourhood of 10-100 MB. I'm not sure of the best way to serve this file to the user. The two options I'm considering are: adding some code to the offline processing that 'spoofs' a form upload to the blobstore, then going through the normal blobstore process to serve the file; or having the offline processing store the file somewhere off GAE and serving it from there. Is there a much better approach I'm overlooking? I'm guessing this is functionality that isn't well suited to GAE. I had thought of storing the file in the datastore as db.Text or db.Blob, but there I run into the 1 MB limit. Any input would be appreciated.
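    For context, App Engine's Files API (experimental at the time, later deprecated) allowed writing blobs programmatically, with no form-upload spoofing required. A minimal sketch of that route in the API's Java flavor, assuming the generated backup is already in memory as backupBytes (a placeholder name); the Python files API of the same era was analogous:

        import com.google.appengine.api.blobstore.BlobKey;
        import com.google.appengine.api.files.AppEngineFile;
        import com.google.appengine.api.files.FileService;
        import com.google.appengine.api.files.FileServiceFactory;
        import com.google.appengine.api.files.FileWriteChannel;
        import java.nio.ByteBuffer;

        public class BackupWriter {
            // Writes generated bytes straight into the blobstore and returns
            // the BlobKey that a blobstore download handler can serve from.
            public static BlobKey writeBackup(byte[] backupBytes) throws Exception {
                FileService fileService = FileServiceFactory.getFileService();
                AppEngineFile file = fileService.createNewBlobFile("application/octet-stream");
                // lock = true because we finalize the file when done
                FileWriteChannel channel = fileService.openWriteChannel(file, true);
                channel.write(ByteBuffer.wrap(backupBytes));
                channel.closeFinally(); // finalizing makes the blob servable
                return fileService.getBlobKey(file);
            }
        }

    The blobstore is a good fit here because blob serving bypasses the 1 MB datastore entity limit entirely.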

    Read the article

  • Visual Studio Solution: static or shared projects?

    - by goodrone
    When a whole project (solution) consists of multiple subprojects (.vcproj), what is the preferable way to tie them together: as static libraries or as shared libraries? Assuming those subprojects aren't used elsewhere, the shared-library approach shouldn't decrease memory usage or load time.

    Read the article

  • How to autodoc .NET Google Code projects?

    - by Remus Rusanu
    I know how to generate HTML documentation using Sandcastle and similar tools. But if I want to host the project on Google Code, how can I easily publish the documentation straight to the project's wiki pages? I can see the SVN repository has a wiki folder, which I assume maps to the project's wiki pages, and I guess I could add a build step to build the documentation from the autodoc tags. But is there a tool that generates wiki-compatible output from the code documentation tags?

    Read the article

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger-than-normal dataset (2M+ records). However, Rails seems to take a very long time performing operations on such a dataset; something like Dataset.all.each do |data| ... end takes a very long time to complete (I assume because it can't fit all the items into memory at once, right?). Does anyone have strategies for handling this situation? I know raw SQL would probably speed up the process, but I'm looking to stay in the Rails environment, since I can do much more complicated things to the data there than with SQL statements alone.

    Read the article

  • Visually [diagrams/movies] display events and states of a system using old application logs to trace bugs

    - by singh
    Hi. In my application many processes run, each with multiple threads. Whenever a bug comes in, I have to analyze thousands of lines of logs to find the exact root cause. I'm thinking of an idea; please tell me whether it is feasible and good enough. Each log line has a process name, thread id, and timestamp along with the message. If I could capture what happens over time and display it as a movie, users could view it like a normal video: play it forward, step backward, and so on. It would help us reach root causes fast and make bug tracing easy. Please point me to a C++ library I could use to draw such diagrams and display them as a movie.

    Read the article

  • Tools for viewing logs of unlimited size

    - by jkff
    It's no secret that application logs can grow well beyond the limits of naive log viewers, and the desired viewer functionality (say, filtering the log on a condition, highlighting particular message types, splitting it into sublogs based on a field value, merging several logs along a time axis, bookmarking, etc.) is beyond the abilities of large-file text viewers. I wonder: do decent specialized applications exist (I haven't found any)? And what functionality might one expect from such an application? (I'm asking because my student is writing such an application, and the functionality above is already implemented to a usable extent.)

    Read the article

  • Visual Studio 2010: TlbImp generates .NET 4.0 interops in 2.0 projects

    - by DJScrib
    In a C# project, we add a reference to a COM object via Add References, and the IDE auto-generates the interop assembly. That is all fine and good, but we are building against .NET 3.5 SP1 (i.e., CLR 2.0), and the generated interops target the 4.0 CLR, making them incompatible. Is there a way to prevent this? I assume the fallback is to have our build script invoke tlbimp.exe with the /references parameter pointing at the v2.0 mscorlib? Anyhow, I'm hoping there's a flag somewhere to allow this.

    Read the article

  • How to use C++ source in two projects

    - by joels
    I'm not sure I'm going about this the right way. I'm writing some C++ classes to use in two apps: they will be compiled into a Cocoa app now and later compiled for use with FastCGI. Should I create a dynamic library?

    Read the article

  • VS: Separating headers from source files?

    - by jco
    I know this is completely subjective, but I'm curious: do you use separate filters for headers and source files in your Visual Studio solutions? Visual Studio creates "Header Files" and "Source Files" filters by default. To me, this dichotomy causes more annoyance than anything else. What's your take on this?

    Read the article

  • How to manage large numbers of delegates and user callbacks in a C# async HTTP library

    - by Tyler
    I'm writing a .NET library in C# for communicating with XBMC over its JSON-RPC interface via HTTP. I coded and released a preliminary version, but everything in it is synchronous. I then recoded the library to be asynchronous for my own purposes, since I was building an XBMC remote for WP7. I now want to release the new async library, but want to make sure it's nice and tidy before I do. Because of the async design, a user initiates a request, supplies a callback method matching my delegate, and handles the response once it's received. The problem is that internally I track a RequestState object for the lifetime of each request; it holds the HTTP request/response as well as the user callback as member variables. That would be fine if only one type of object came back, but depending on what the user calls, they may get back a list of songs, a list of movies, and so on. My current implementation uses a single delegate, ResponseDataRecieved, whose single parameter is a plain Object. Since only I have used the library, I know which methods return what, and when I handle a response I cast that object to the type I know it really is (a list of songs, a list of movies, etc.). A third party shouldn't have to do that; the delegate signature should carry the correct type of object. But then I need a delegate for every type of response data that can be returned, and the specific problem is how to handle that gracefully internally. Do I create a bunch of different RequestState objects, each with a member variable for a different delegate type? That doesn't "feel" right. I just don't know how to do this gracefully and cleanly.
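    For what it's worth, the usual way out of the one-delegate-per-type problem is to make the callback and the request state generic over the payload type; in C# that means a generic delegate such as Action<T> plus a RequestState<T>. A sketch of the pattern (shown in Java here; the placeholder Song type stands in for the library's real models):

        import java.util.Arrays;
        import java.util.List;

        // One callback type, parameterized by the response payload.
        interface ResponseCallback<T> {
            void onResponse(T data);
        }

        // One request-state class covers every request; no per-type variants.
        class RequestState<T> {
            private final ResponseCallback<T> callback;

            RequestState(ResponseCallback<T> callback) {
                this.callback = callback;
            }

            // Invoked internally once the HTTP response has been parsed.
            void complete(T parsed) {
                callback.onResponse(parsed);
            }
        }

        class Demo {
            static class Song { } // hypothetical payload type

            public static void main(String[] args) {
                RequestState<List<Song>> state = new RequestState<>(
                        songs -> System.out.println("got " + songs.size() + " songs"));
                state.complete(Arrays.asList(new Song())); // simulate a finished request
            }
        }

    The caller's callback signature then carries the real type, and no casts leak out of the library.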

    Read the article

  • Browsers (IE and Firefox) freeze when copying large amounts of text

    - by Matt
    I have a web application (a Java servlet) that delivers data to users as a text printout in a browser: text marked up with HTML so it displays the way we want. The text appears in different colors, though most of it is black. One typical mode of operation is:

    1. The user submits a form to request data.
    2. The servlet delivers an HTML page to the browser.
    3. The user presses CTRL+A to select all the text.
    4. The user presses CTRL+C to copy it.
    5. The user switches to a text editor and presses CTRL+V to paste it.

    In the failing case, step 2 loads all the data successfully; we wait for it to complete, and we can scroll to the end of what the browser loaded and see the end of the data. However, the browser freezes at step 3 (Firefox) or step 4 (IE). Because step 2 finishes, I think this is a browser/memory issue rather than a problem with the web application. If I run smaller queries (several of which together return the same data as the one large query) and copy/paste that text, the file I save it into ends up being about 8 MB. If I instead save the browser's displayed HTML to a file via File > Save As, it works fine and the file is about 22 MB. We've tried this on two different work computers (both running Windows XP, with at least 2 GB of RAM and many GB of free disk space) using Firefox and IE. We also tried a home computer on a home network (thinking our IT security software might be the cause), running Windows 7 with IE, and still hit the problem. While it happens, the browser uses 50% of the CPU; Firefox's memory usage grows to about 1 GB, while IE's stays in the several hundred MBs. We once let it run for half an hour, and it did not complete. I'll most likely modify the web app to offer a plain text file for download, and I imagine that will get users what they need. But in the meantime, because I'm curious, and because I don't like my application freezing people's browsers: does anyone have ideas about the freeze? I understand that sometimes you simply hit a memory limit, but 22 MB sounds like an amount I should be able to copy to the clipboard.

    Read the article

  • Sending large data: getJSON or proxy?

    - by numerical25
    Hey guys, I was told that the only trick for sending data to an external (i.e., cross-domain) server is to use getJSON. My problem is that the data I'm sending exceeds getJSON's limit, since it travels in the URL of a GET request. I'm tracking mouse movements on a screen for analytics. One option is to send a little data at a time, probably on every mouse move, but that seems like it would slow things down. I could also set up a proxy server. My question is which would be better: setting up a proxy server, or just sending bits of information via JavaScript/jQuery? What do the professionals use (Google and other companies that build mash-ups sending a lot of data to cross-domain sites)? I need to know the best practices. Thanks! Also, the data is in JSON.

    Read the article

  • Handling very large lists of objects without paging?

    - by user246114
    Hi, I have a class that can contain many small elements in a list. It looks like:

        public class Farm {
            private ArrayList<Horse> mHorses;
        }

    I'm just wondering what will happen if mHorses grows to something crazy like 15,000 elements. I'm assuming that writing and reading that to and from the datastore would be crazy, because I'd get killed in the serialization process. It's important that I can get the entire array in one shot without paging, and each Horse element has only two string properties, so they are pretty lightweight:

        public class Horse {
            private String mId;
            private String mName;
        }

    I don't need these horses indexed at all. Does it sound reasonable to store the horse list as a raw Text field and force my clients to do the deserialization? Something like:

        public class Farm {
            private Text mHorsesSerialized;
        }

    Then whenever the client receives a Farm instance, it takes the raw string of horses and splits it to reinstantiate the list, something like:

        // GWT client, perhaps
        Farm farm = rpcCall.getMyFarm();
        String horsesSerialized = farm.getHorses();
        String[] horseBlocks = horsesSerialized.split(",");
        for (int i = 0; i < horseBlocks.length; i++) {
            // ... continue deserializing the individual objects ...
        }

    Hopefully it would then be quick to read a Farm instance from the datastore, with the serialization penalty paid by the client. Thanks
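    If the Text-field route is taken, one caveat with split(","): it breaks as soon as a name contains the delimiter, and each horse carries two fields to separate. A hedged sketch of a sturdier round-trip (the ASCII record/unit separators are an arbitrary choice, and it assumes Horse gains a two-argument constructor and getters; if ids or names can contain control characters, escape them first):

        import java.util.ArrayList;
        import java.util.List;

        class HorseCodec {
            private static final String REC = "\u001e";  // separates horses
            private static final String UNIT = "\u001f"; // separates fields

            static String serialize(List<Horse> horses) {
                StringBuilder sb = new StringBuilder();
                for (Horse h : horses) {
                    if (sb.length() > 0) sb.append(REC);
                    sb.append(h.getId()).append(UNIT).append(h.getName());
                }
                return sb.toString();
            }

            static List<Horse> deserialize(String blob) {
                List<Horse> horses = new ArrayList<Horse>();
                if (blob.isEmpty()) return horses;
                for (String rec : blob.split(REC, -1)) {
                    String[] fields = rec.split(UNIT, -1);
                    horses.add(new Horse(fields[0], fields[1])); // id, name
                }
                return horses;
            }
        }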

    Read the article

  • Problems opening a large CSV file

    - by John Tyler
    I have a CSV file that is 100 MB in size. I need to parse some data out of it into a new format. I tried PHP but keep running into memory issues: after around the first 150 rows or so, the script dies. That's even on localhost, with everything I can think of done to tune the PHP settings, including memory_limit and max_execution_time. Before I continue, I'd like to know whether Python will die on me too, or whether I'll have to use C++. Can someone name good CSV libraries for these programming languages? The file is quoted CSV. I mean, sheesh, I can't even open this text file in OpenOffice without it dying on me. (Then again, Java sucks as badly as PHP.)

    Read the article

  • Best way to load a large file into an ArrayList in Java

    - by user1730833
    I have a file about 300 MB in size. I want to read its contents line by line and add them to an ArrayList. So I create an ArrayList a1 and read the file using a BufferedReader, but when I add the lines from the file to the ArrayList I get an error: Exception in thread "main" java.lang.OutOfMemoryError: Java heap space. Please tell me what the solution should be.
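    If every line truly must end up in memory, raise the heap (e.g. java -Xmx2g) and note that Java strings are UTF-16 internally, so 300 MB of file text occupies roughly double that in char data, plus per-object overhead. But if the lines can be processed one at a time, streaming keeps memory flat; a minimal sketch:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class LineProcessor {
            public static void main(String[] args) throws IOException {
                long count = 0;
                // Memory use stays bounded by one line, not by the whole file.
                try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        // process(line); // placeholder for the real per-line work
                        count++;
                    }
                }
                System.out.println("processed " + count + " lines");
            }
        }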

    Read the article

  • VS 2008 - Procedure to ship C#/WPF solution to ensure compatibility

    - by Bill
    I am attempting to collaborate on a C#/WPF project with another developer remotely via e-mail, and although the code compiles perfectly when it leaves me, my collaborator has not been able to compile it on his side. We are both using VS 2008 (version 9). This is my first time working with someone else on an application, and I was hoping someone could advise me whether there are any steps to take to ensure compatibility between the two of us. Additionally, is there a recommended procedure for preparing the solution for shipment (i.e., just zip up the solution folder? export the application? etc.)? Thanks very much.

    Read the article

  • utl_file.FCLOSE() is slow with large files

    - by Dan
    We are using UTL_FILE in Oracle 10g to copy a BLOB from a table row to a file on the file system, and when we call utl_file.fclose() it takes a long time. It's a 10 MB file, not very big, and the close takes just over a minute to complete. Does anyone know why this would be so slow? Thanks

    Read the article

  • Enumerating large (20-digit) [probable] prime numbers

    - by Paul Baker
    Given A on the order of 10^20, I'd like to quickly obtain a list of the first few prime numbers greater than A. OK, my needs aren't quite that exact: it's all right if a composite number occasionally ends up on the list. What's the fastest way to enumerate the (probable) primes greater than A? Is there a quicker way than stepping through all the integers greater than A (skipping obvious multiples of, say, 2 and 3) and running a primality test on each? If not, and testing each integer is the only method, which primality test should I use?
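    At 20 digits this is squarely in range of standard Miller-Rabin testing; in Java, for instance, BigInteger has it built in, and nextProbablePrime() already skips obvious composites while stepping. A sketch, assuming the occasional false positive is acceptable (the documented composite probability per result is below 2^-100):

        import java.math.BigInteger;

        public class PrimesAbove {
            public static void main(String[] args) {
                BigInteger a = new BigInteger("100000000000000000000"); // 10^20
                BigInteger p = a;
                for (int i = 0; i < 5; i++) {
                    p = p.nextProbablePrime();
                    System.out.println(p);
                }
            }
        }

    By the prime number theorem, prime gaps near 10^20 average about ln(10^20) ≈ 46, so only a few dozen candidates typically need testing per prime found.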

    Read the article

  • Representing a very large array of bits in little memory

    - by user614624
    Hello, I would like to represent a structure containing 250 M states (1 bit each) in as little memory as possible (100 KB maximum). The operations on it are set and get. I could not say whether it is dense or sparse; that may vary. The language I want to use is C. I looked at other threads here to find something suitable. A probabilistic structure like a Bloom filter, for example, would not fit because of the possible false answers. Any suggestions, please?
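    The arithmetic is worth stating up front: 250,000,000 bits packed flat is 250,000,000 / 8 ≈ 31.25 MB, so fitting 100 KB requires roughly a 320:1 ratio, which no exact structure can deliver for arbitrary contents; it is only possible if the bit vector turns out to be very sparse, very dense, or otherwise compressible (e.g. run-length or interval coded). For reference, the flat packed baseline every such scheme is measured against, sketched in Java here (the shift-and-mask translates directly to a uint64_t array in C):

        public class PackedBits {
            private final long[] words;

            PackedBits(long nbits) {
                words = new long[(int) ((nbits + 63) / 64)];
            }

            void set(long i, boolean v) {
                if (v) words[(int) (i >>> 6)] |=  (1L << (i & 63));
                else   words[(int) (i >>> 6)] &= ~(1L << (i & 63));
            }

            boolean get(long i) {
                return (words[(int) (i >>> 6)] & (1L << (i & 63))) != 0;
            }

            public static void main(String[] args) {
                PackedBits bits = new PackedBits(250_000_000L); // ~31.25 MB of longs
                bits.set(123_456_789L, true);
                System.out.println(bits.get(123_456_789L)); // true
            }
        }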

    Read the article

  • Firefox does not load large images

    - by Pradeep
    I am stuck with what looks like a bug in Firefox: it fails to load big images (mine is 8 MB) from the server, while the same image loads fine in IE. I am still looking for a way to get rid of the problem. I changed the server (IIS) settings to allow bigger file sizes, and I tried the jQuery "load" event on the image with all the options listed at http://api.jquery.com/load-event/, but nothing has worked so far. If anyone has come across a similar problem and a way to resolve it, it would be nice to hear from you. Please note: high-resolution images are part of the requirement. Code:

        <style>
        img {
            background-color: #FFFFFF;
            background-image: url(http://eremurus.hyd:8080/QMS/plugin/imagepanner/loader.gif);
            background-repeat: no-repeat;
            background-position: center center;
        }
        </style>
        <script src="../plugin/jquery-ui-1.8.7.custom/js/jquery-1.4.4.min.js" type="text/javascript"></script>
        <script>
        jQuery(document).ready(function($) {
            // var _url = "http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg";
            // set up the node / element
            _im = $("#main");
            // _im.bind("load", function() { $(this).fadeIn(); });
            // set the src attribute now, after insertion to the DOM
            // _im.attr('src', _url);
            $("#main").one("load", function() {
                alert('loaded');
            }).each(function() {
                if (this.complete) {
                    $(this).trigger("load");
                }
            });
        });
        </script>
        </head>
        <body>
        <div id="target"><img id="main" src="http://eremurus.hyd:8080/QMS/plugin/imagepanner/floorPlan.jpg"></div>
        </body>
        </html>

    Read the article

  • Deleting a large number of rows from a table

    - by Azeem
    We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we delete based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential-constraint checking, etc.). I know that in the LUW world we have ALTER TABLE ... NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
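    Short of unlogged operations, the workhorse pattern is deleting in committed chunks so no single transaction holds millions of rows of log. The fullselect DELETE below is valid on DB2 LUW; whether DB2 v8 for z/OS accepts it is exactly the kind of thing to verify first, so treat this JDBC loop as a sketch of the approach rather than z/OS-tested code (table, column, and cutoff values are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Timestamp;

        public class ChunkedDelete {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(args[0])) {
                    con.setAutoCommit(false);
                    // Delete at most 10000 matching rows per transaction;
                    // each commit releases locks and log space.
                    String sql = "DELETE FROM (SELECT * FROM my_table"
                               + " WHERE ts_col < ? FETCH FIRST 10000 ROWS ONLY)";
                    try (PreparedStatement ps = con.prepareStatement(sql)) {
                        // placeholder cutoff timestamp
                        ps.setTimestamp(1, Timestamp.valueOf("2010-01-01 00:00:00"));
                        int deleted;
                        do {
                            deleted = ps.executeUpdate();
                            con.commit();
                        } while (deleted > 0);
                    }
                }
            }
        }

    On z/OS, alternatives worth weighing are unloading the rows to keep and running LOAD REPLACE, or a cursor-based positioned DELETE committed every N rows.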

    Read the article

  • How do I correlate build configurations in dependent vcproj files with different names?

    - by Tim
    I have a solution file that requires a third-party library (open source). The containing solution uses the typical configuration names "Debug" and "Release". The third-party project has debug and release configs for both DLL and static-lib builds, and their names are not "Debug" and "Release". How do I tell the solution to build the dependency first, and how do I map each configuration to the corresponding dependency configuration? I.e., MyProject:Debug should build either 3rdParty:debug_shared or 3rdParty:debug_static.

    Read the article
