Search Results

Search found 12715 results on 509 pages for 'memory profiling'.

Page 312 of 509

  • Saving objects in servlet session and java.io.NotSerializableException

    - by EugeneP
    SEVERE: IOException while loading persisted sessions: java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException. That means this object cannot be persisted to disk. Does it imply that it's not safe to keep objects that do not implement Serializable in the session? I haven't heard that there are limitations on storing non-serializable objects in a Session object. It simply means that Tomcat will always keep them in memory, right?
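    One way to make a session attribute survive Tomcat's session persistence is to implement Serializable and mark the fields that cannot be serialized as transient. A minimal sketch, assuming a hypothetical CartState attribute class (the names and the JDBC Connection field are illustrative, not from the question):

        import java.io.Serializable;
        import java.sql.Connection;

        public class CartState implements Serializable {
            private static final long serialVersionUID = 1L;

            private String userId;                    // serializable state is persisted
            private transient Connection connection;  // skipped by serialization; must be
                                                      // re-acquired after the session reloads

            public CartState(String userId) {
                this.userId = userId;
            }

            public String getUserId() {
                return userId;
            }
        }

    Attributes that stay non-serializable can trigger exactly the WriteAbortedException above whenever Tomcat persists or restores the session.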


  • Find out how much storage a row is taking up in the database

    - by Vaccano
    Is there a way to find out how much space (on disk) a row in my database takes up? I would love to see it for SQL Server CE, but failing that, SQL Server 2008 works (I am storing roughly the same data in both). The reason I ask is that I have an Image column in my SQL Server CE db (it is a varbinary(max) in the SQL 2008 db), and I need to know how many rows I can store before I max out the memory on my device.
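    On the SQL Server 2008 side, one rough approach is to sum DATALENGTH over the columns of interest. A sketch, assuming a hypothetical Documents table with an Id key and an ImageData varbinary(max) column; fixed-width columns and per-row overhead add a handful of bytes on top of this:

        -- Approximate per-row footprint of the variable-length data
        SELECT Id,
               DATALENGTH(ImageData) AS ImageBytes
        FROM Documents
        ORDER BY ImageBytes DESC;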


  • I need a 3DS loader for OpenGL

    - by Shaza
    Hey, I have an OpenGL project in C++, and I need to load about ten 3DS objects into my scene with their textures on. Unfortunately, the loader I'm using now is leaking memory; I noticed that when my scene froze after the project had been running for a minute. Can you suggest a 3DS loader that is effective at loading a large number of 3DS objects?


  • How to create a newline in a Rebol block?

    - by Rebol Tutorial
    Let's say I have a config.txt which contains:

        "param11" "param12"
        "param21" "param22"

    I load it into memory with config: load %config.txt and I can save it back with save %config.txt config. So far so good. Now the problem occurs when I want to add "param31" "param32" on a new line. I have tried:

        append config reduce [newline "param31" "param32"]
        save %config.txt config

    But that doesn't give the expected result:

        "param11" "param12"
        "param21" "param22"
        "param31" "param32"

    but this instead:

        "param11" "param12"
        "param21" "param22" #"^/" "param31" "param32"

    So how do I do it?
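    A sketch of one possible fix, assuming Rebol's new-line native, which sets the invisible line-break marker on a block position instead of inserting a #"^/" character value:

        config: load %config.txt
        append config ["param31" "param32"]
        new-line skip tail config -2 true  ; mark a line break before the appended pair
        save %config.txt config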


  • Do I have to call release on an Objective-C retain class variable when setting it to a new object?

    - by Andrew Arrow
    Say I have:

        @property (nonatomic, retain) NSString *foo;

    in some class, and I call:

        myclass.foo = [NSString stringWithString:@"string1"];
        myclass.foo = [NSString stringWithString:@"string2"];

    Should I have called [myclass.foo release] before setting it to "string2" to avoid a memory leak? Or is the fact that nothing points to the first "string1" object anymore good enough? And in the dealloc method, [foo release] will be called.
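    For reference, a sketch of what the synthesized setter for a retain property does under manual reference counting; the old value is released for you, so assigning through the property (rather than to the ivar directly) does not leak the previous string:

        - (void)setFoo:(NSString *)newFoo {
            if (foo != newFoo) {
                [foo release];          // let go of the previous value
                foo = [newFoo retain];  // take ownership of the new one
            }
        }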


  • Setting fetchBatchSize to be the same as fetchLimit

    - by user1730622
    What does it mean to have fetchBatchSize be the same as fetchLimit, say with both set to 5? My understanding is that with the fetchLimit, only 5 records will be in the fetch result set, and that additionally, with the fetchBatchSize, only the IDs of the records will be read into memory at first; the full records won't be retrieved until they are accessed. Is that a correct understanding?
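    A sketch of the setup being described, assuming this is Core Data's NSFetchRequest (the Employee entity name and the context variable are placeholders):

        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Employee"
                                     inManagedObjectContext:context];
        request.fetchLimit = 5;      // at most 5 objects in the result set
        request.fetchBatchSize = 5;  // full rows are faulted in 5 at a time, on access
        NSError *error = nil;
        NSArray *results = [context executeFetchRequest:request error:&error];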


  • Using JAXB to generate XML directly to an OutputStream

    - by sonu
    Hi, I have a 500 MB CSV file that I need to convert into an XML file. I am using JAXB to create the XML file. It works fine for small amounts of data, but for large amounts, around 300 MB, it throws an out-of-memory exception. Can anyone tell me how I can create each element and write it to the file without building the whole tree with JAXB? Thanks, Sonu
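    One widely used streaming pattern, sketched below with a placeholder Row type and placeholder file names: wrap an XMLStreamWriter, write the root element by hand, and marshal one record at a time with JAXB_FRAGMENT so the full tree is never built:

        import java.io.BufferedReader;
        import java.io.FileOutputStream;
        import java.io.FileReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class CsvToXml {
            @XmlRootElement(name = "row")
            public static class Row {
                public String value;  // stand-in for the real CSV fields
            }

            public static void main(String[] args) throws Exception {
                Marshaller m = JAXBContext.newInstance(Row.class).createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE);  // no per-record XML declaration

                try (BufferedReader in = new BufferedReader(new FileReader("input.csv"));
                     FileOutputStream out = new FileOutputStream("output.xml")) {
                    XMLStreamWriter xsw =
                        XMLOutputFactory.newInstance().createXMLStreamWriter(out, "UTF-8");
                    xsw.writeStartDocument("UTF-8", "1.0");
                    xsw.writeStartElement("rows");
                    String line;
                    while ((line = in.readLine()) != null) {  // one CSV line at a time
                        Row row = new Row();
                        row.value = line;
                        m.marshal(row, xsw);                  // marshal a single element
                    }
                    xsw.writeEndElement();
                    xsw.writeEndDocument();
                    xsw.close();
                }
            }
        }

    Only one record is ever held in memory, so the input size stops mattering.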


  • SQL Server 2008 - Query takes forever to finish even though work is actually done

    - by Brian
    Running the following simple query in SSMS:

        UPDATE tblEntityAddress SET strPostCode = REPLACE(strPostCode, ' ', '')

    The update to the data (at least in memory) is complete in under a minute; I verified this by performing another query with transaction isolation level read uncommitted. The update query, however, continues to run for another 30 minutes. What is the issue here? Is this caused by a delay in writing to disk? TIA
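    One common mitigation worth sketching (an assumption about the cause, not a diagnosis): run the update in batches so each transaction commits and flushes its log incrementally, and skip rows that contain no spaces to begin with. The batch size of 10000 is arbitrary:

        WHILE 1 = 1
        BEGIN
            UPDATE TOP (10000) tblEntityAddress
            SET strPostCode = REPLACE(strPostCode, ' ', '')
            WHERE strPostCode LIKE '% %';    -- only rows that actually contain a space

            IF @@ROWCOUNT = 0 BREAK;         -- nothing left to change
        END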


  • Can the .NET MethodInfo cache be cleared or disabled?

    - by Anton
    Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again. I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata. Is there any way to clear or disable this MemberInfo cache?
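    As far as I know there is no supported way to clear or switch off that cache, but it lives per AppDomain, so one workaround (a sketch assuming classic .NET Framework, where AppDomain.Unload exists; the plugin path is a placeholder) is to scan in a throwaway domain and unload it afterwards:

        using System;
        using System.Reflection;

        public class Scanner : MarshalByRefObject
        {
            public int CountMethods(string assemblyPath)
            {
                Assembly asm = Assembly.LoadFrom(assemblyPath);
                int count = 0;
                foreach (Type t in asm.GetTypes())
                    count += t.GetMethods().Length;  // MemberInfo cache fills in this domain only
                return count;
            }
        }

        public class Program
        {
            public static void Main()
            {
                AppDomain domain = AppDomain.CreateDomain("scan");
                try
                {
                    var scanner = (Scanner)domain.CreateInstanceAndUnwrap(
                        typeof(Scanner).Assembly.FullName, typeof(Scanner).FullName);
                    Console.WriteLine(scanner.CountMethods(@"C:\plugins\MyPlugin.dll"));
                }
                finally
                {
                    AppDomain.Unload(domain);  // the domain's reflection caches go with it
                }
            }
        }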


  • Optimizing a "set in a string list" to a "set as a matrix" operation

    - by Eric Fournier
    I have a set of strings which contain space-separated elements. I want to build a matrix which will tell me which elements were part of which strings. For example:

        ""
        "A B C"
        "D"
        "B D"

    should give something like:

          A B C D
        1
        2 1 1 1
        3       1
        4   1   1

    Now I've got a solution, but it runs slow as molasses, and I've run out of ideas on how to make it faster:

        reverseIn <- function(vector, value) {
          return(value %in% vector)
        }

        buildCategoryMatrix <- function(valueVector) {
          allClasses <- c()
          for(classVec in unique(valueVector)) {
            allClasses <- unique(c(allClasses, strsplit(classVec, " ", fixed=TRUE)[[1]]))
          }
          resMatrix <- matrix(ncol=0, nrow=length(valueVector))
          splitValues <- strsplit(valueVector, " ", fixed=TRUE)
          for(cat in allClasses) {
            if(cat=="") {
              catIsPart <- (valueVector == "")
            } else {
              catIsPart <- sapply(splitValues, reverseIn, cat)
            }
            resMatrix <- cbind(resMatrix, catIsPart)
          }
          colnames(resMatrix) <- allClasses
          return(resMatrix)
        }

    Profiling the function gives me this:

        $by.self
                 self.time self.pct total.time total.pct
        "match"      31.20    34.74      31.24     34.79
        "FUN"        30.26    33.70      74.30     82.74
        "lapply"     13.56    15.10      87.86     97.84
        "%in%"       12.92    14.39      44.10     49.11

    So my actual questions would be:
    - Where is the 33% spent in "FUN" coming from?
    - Would there be any way to speed up the %in% call?

    I tried turning the strings into factors prior to going into the loop so that I'd be matching numbers instead of strings, but that actually makes R crash. I've also tried partial matrix assignment (i.e., resMatrix[i,x] <- 1, where i is the number of the string and x is the vector of factors). No dice there either, as it seems to keep on running infinitely.
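    For comparison, a vectorized sketch of one possible alternative: split every string once, then build the whole incidence matrix with a single table() call over (row index, element) pairs:

        strings <- c("", "A B C", "D", "B D")
        parts   <- strsplit(strings, " ", fixed = TRUE)
        parts[strings == ""] <- list("")   # keep a category for the empty string
        # lengths() needs R >= 3.2; use sapply(parts, length) otherwise
        idx <- rep(seq_along(parts), lengths(parts))
        # rows = input strings, columns = elements
        inc <- table(factor(idx, levels = seq_along(strings)), unlist(parts))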


  • How to use a length indicator in a C++ program

    - by cj
    I want to write a program in C++ that reads a file where each field has a number before it indicating how long the field is. The problem is that I read every record into an object of a class; how do I make the attributes of the class dynamically sized? For example, if the field is "john", it should be read into a 4-char array. I don't want to make every array 1000 elements, as minimizing memory usage is very important.
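    A sketch of one way to do this: std::string sizes itself to the data, so a length-prefixed field needs no fixed-size arrays. This assumes a format like 4john5smith, with a decimal length immediately before each field, and a hypothetical records.txt input:

        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            std::ifstream in("records.txt");
            std::size_t len = 0;
            while (in >> len) {                // read the length indicator
                std::string field(len, '\0');  // allocate exactly len characters
                if (!in.read(&field[0], static_cast<std::streamsize>(len)))
                    break;                     // truncated field: stop
                std::cout << field << '\n';
            }
        }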


  • Visual Studio Solution: static or shared projects?

    - by goodrone
    When a whole project (solution) consists of multiple subprojects (.vcproj), what is the preferable way to tie them together: as static libraries or as shared libraries? Assuming that those subprojects are not used elsewhere, the shared-library approach shouldn't decrease memory usage or load time.


  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100 bp, and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

                  0     100   200   300   400   500   600
        windows:  |-----|-----|-----|-----|-----|-----|
        mylist:   |-|    |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ## window for each 100-bp segment
        windows <- numeric(6)

        ## second track
        mylist = vector("list")
        mylist[[1]] = c(1,20)
        mylist[[2]] = c(120,320)

        ## do the intersection
        for(i in 1:length(mylist)){
          st <- floor(mylist[[i]][1]/100)+1
          sp <- floor(mylist[[i]][2]/100)+1
          for(j in st:sp){
            b <- max((j-1)*100, mylist[[i]][1])
            e <- min(j*100, mylist[[i]][2])
            windows[j] <- windows[j] + e - b + 1
          }
        }
        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?
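    One possible loop-free direction, a sketch under the same inputs: clip every interval to every window with pmin/pmax and sum the clipped lengths:

        starts <- sapply(mylist, `[`, 1)
        ends   <- sapply(mylist, `[`, 2)
        win_lo <- (seq_len(6) - 1) * 100   # window starts: 0, 100, ..., 500
        win_hi <- win_lo + 100             # window ends

        # overlap of every interval with one window, clipped to zero
        overlap <- function(lo, hi) pmax(pmin(ends, hi) - pmax(starts, lo) + 1, 0)

        windows <- mapply(function(lo, hi) sum(overlap(lo, hi)), win_lo, win_hi)
        windows  # 20 81 101 21 0 0

    This still iterates over the windows, but the per-interval arithmetic, usually the large dimension, is fully vectorized.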


  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic (variable-length) arrays with g++/Intel on a 64-bit x86 Linux platform?

        int function(int N) {
            double array[N];
            ...
        }

    Specifically:
    - overhead compared to allocating the array beforehand (assuming the function is called multiple times)
    - overhead compared to using new
    - overhead compared to using malloc

    The range of N may be from 1 KB to 16 KB roughly; stack overrun is not a problem.
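    For concreteness, a sketch of the three variants being compared; a VLA typically compiles to a single stack-pointer adjustment, while new and malloc go through the allocator (VLAs in C++ are a g++ extension):

        #include <cstdlib>

        void with_vla(int n) {
            double a[n];                 // g++ extension in C++; one stack-pointer bump
            a[0] = 0.0;
        }

        void with_new(int n) {
            double *a = new double[n];   // heap allocation through operator new
            a[0] = 0.0;
            delete[] a;
        }

        void with_malloc(int n) {
            double *a = static_cast<double *>(std::malloc(n * sizeof(double)));
            a[0] = 0.0;
            std::free(a);
        }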


  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (files added, removed, or renamed within the iterated directory) which modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example:

        filea.txt
        fileb.txt
        filec.txt

    The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt when yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were somehow able to maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: Adjective. 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
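    On newer Pythons there is a lazy built-in; a sketch assuming Python 3.6+, where os.scandir yields directory entries incrementally instead of materializing a list (the OS still makes no promise about renames that happen mid-iteration; the path and the process() handler are placeholders):

        import os

        def iter_filenames(path):
            # os.scandir yields entries lazily instead of building the whole list
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_file():
                        yield entry.name

        for name in iter_filenames("/var/incoming"):
            process(name)  # hypothetical per-file handler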

