Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 396/670

  • Do I have to call release on an objective-c retain class variable when setting it to a new object?

    - by Andrew Arrow
    Say I have @property (nonatomic,retain) NSString *foo; in some class, and I call: myclass.foo = [NSString stringWithString:@"string1"]; myclass.foo = [NSString stringWithString:@"string2"]; Should I have called [myclass.foo release] before setting it to "string2" to avoid a memory leak? Or is the fact that nothing points to the first "string1" object anymore good enough? And in the dealloc method, [foo release] will be called.
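
    For reference, a minimal sketch of what a synthesized retain setter does under manual reference counting; this is not the literal compiler-generated code, but it behaves equivalently:

        - (void)setFoo:(NSString *)newFoo {
            if (newFoo != foo) {
                [foo release];          // the previous value ("string1") is released here
                foo = [newFoo retain];  // the new value ("string2") is retained
            }
        }

        // So plain property assignment is enough:
        myclass.foo = [NSString stringWithString:@"string1"];
        myclass.foo = [NSString stringWithString:@"string2"];  // the setter releases "string1"
        // ...and [foo release] in dealloc covers whatever value is held last.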

    Read the article

  • When should I use SATA 6gb/s?

    - by Gili
    I purchased a Barracuda hard drive (model ST3000DM001) that supports a maximum read transfer rate of 210 MB/s and SATA 1.5/3/6 Gb/s. My motherboard has a limited number of 6 Gb/s ports, so I'd like to reserve them for when they're really necessary. When does a hard drive benefit from a SATA 6 Gb/s port? Doesn't it require a transfer speed of at least 375 MB/s to surpass the limit of SATA 3 Gb/s? Are there any other benefits of SATA 6 Gb/s vs 3 Gb/s ports?

    Read the article

  • SQL Server 2008 - Query takes forever to finish even though work is actually done

    - by Brian
    Running the following simple query in SSMS: UPDATE tblEntityAddress SET strPostCode = REPLACE(strPostCode, ' ', '') The update to the data (at least in memory) is complete in under a minute; I verified this by running another query under transaction isolation level read uncommitted. The update query, however, continues to run for another 30 minutes. What is the issue here? Is this caused by a delay in writing to disk? TIA
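
    One common way to keep such an update from running as a single long transaction is to batch it; a sketch for SQL Server follows, with a 10,000-row batch size chosen arbitrarily for illustration:

        -- Commit in small batches so locks and log space are released regularly,
        -- and skip rows that contain no space to begin with.
        WHILE 1 = 1
        BEGIN
            UPDATE TOP (10000) tblEntityAddress
            SET strPostCode = REPLACE(strPostCode, ' ', '')
            WHERE strPostCode LIKE '% %';

            IF @@ROWCOUNT = 0 BREAK;
        END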

    Read the article

  • Jaxb to generate the XML directly to the OutputStream

    - by sonu
    Hi, I have a 500 MB CSV file that I need to convert into an XML file. I am using JAXB to create the XML, and it works fine for small amounts of data, but for larger amounts (around 300 MB) it throws an out-of-memory exception. Can anyone tell me how I can create each element and write it to the file without building the whole tree with JAXB? Thanks, Sonu
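
    A hedged sketch of one streaming approach: marshal one record at a time onto an XMLStreamWriter instead of building the whole document tree. The Record class, file name, and dummy loop below stand in for the real CSV rows and are assumptions for illustration.

        import java.io.FileOutputStream;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.Marshaller;
        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.namespace.QName;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.XMLStreamWriter;

        public class StreamingJaxbWriter {

            // Hypothetical type mapped from one CSV row.
            @XmlAccessorType(XmlAccessType.FIELD)
            public static class Record {
                public String name;
                public String value;
            }

            public static void main(String[] args) throws Exception {
                JAXBContext ctx = JAXBContext.newInstance(Record.class);
                Marshaller m = ctx.createMarshaller();
                m.setProperty(Marshaller.JAXB_FRAGMENT, Boolean.TRUE); // no XML declaration per element

                XMLOutputFactory xof = XMLOutputFactory.newInstance();
                try (FileOutputStream out = new FileOutputStream("records.xml")) {
                    XMLStreamWriter xsw = xof.createXMLStreamWriter(out, "UTF-8");
                    xsw.writeStartDocument("UTF-8", "1.0");
                    xsw.writeStartElement("records"); // root element written by hand

                    // In the real program this loop would read the CSV line by line;
                    // two dummy records stand in for that stream here.
                    for (int i = 0; i < 2; i++) {
                        Record r = new Record();
                        r.name = "row" + i;
                        r.value = "value" + i;
                        // Marshal each record straight onto the stream; no full tree is kept in memory.
                        m.marshal(new JAXBElement<>(new QName("record"), Record.class, r), xsw);
                    }

                    xsw.writeEndElement();
                    xsw.writeEndDocument();
                    xsw.close();
                }
            }
        }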

    Read the article

  • fetchBatchSize to be same as fetchLimit

    - by user1730622
    What does it mean to set fetchBatchSize to the same value as fetchLimit, say with both set to 5? My understanding is that with the fetchLimit only 5 records will be in the fetch result set, and that additionally, with the fetchBatchSize, only the IDs/identities of the records will be read into memory, with the full records not retrieved until they are accessed. Is that a correct understanding?
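
    For illustration, a minimal Objective-C sketch of setting both values on an NSFetchRequest; the entity name "Record" and the surrounding managed object context are assumptions:

        NSError *error = nil;
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Record"];
        request.fetchLimit = 5;       // at most 5 objects appear in the result set
        request.fetchBatchSize = 5;   // full row data is faulted in from the store 5 objects at a time
        // `context` is assumed to be an existing NSManagedObjectContext.
        NSArray *results = [context executeFetchRequest:request error:&error];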

    Read the article

  • Can the .NET MethodInfo cache be cleared or disabled?

    - by Anton
    Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again. I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata. Is there any way to clear or disable this MemberInfo cache?

    Read the article

  • How can I get at (and delete) emails in the "Conflicts" folder in Outlook 2007

    - by Verbeia
    I use Outlook 2007 at work. It is synced to a BlackBerry and an iPad. This works fine on the surface, but I have noticed a lot of emails building up in a Conflicts folder that I can't see. There is also a "Sync Issues" folder that I can't see, containing log files from sync issues. None of this should be a problem, except that emails that were previously flagged but have since been deleted still turn up in the Conflicts folder, and thus still turn up in the To-Do bar. (They also occupy space on the server and count toward my mailbox limit.) Is there a way to get at either the Conflicts folder itself, or to construct a search that returns all the emails in that folder, so I can get rid of them? I can certainly delete them if I search by title or whatever, but it's annoying.

    Read the article

  • Input multiple file names in windows open file dialog box

    - by goodiet
    Windows 7 lets you select multiple files to open at once by using the Ctrl or Shift key. The "File Name" input field at the bottom of the dialog box then auto-populates with something like: "aaa.txt" "bbb.txt" "ccc.txt" "ddd.txt" I have 14,000 files in a folder and I only need a range of them (approximately 500). When I use the Shift key to select a range of files, the "File Name" field auto-populates with all 500 file names. However, Windows cuts me off at the 260th character when I try to paste a pre-generated string into the "File Name" field. Is there a way to bypass the 260-character limit so it will accept my entire string with 500 file names?

    Read the article

  • Password protected traffic meter

    - by UncleBob
    Hi. I have a small problem for which I haven't found a solution yet. I live in Bosnia and share the Internet connection with the landlady, and as is usual in Bosnia, we do not have a flat rate but a 15 GB traffic limit. That would actually be more than enough if the landlady's son weren't watching videos all the time, so the bills are turning out rather expensive. I have already installed a traffic monitoring program, but he apparently turns it off as soon as he comes close to his limit and then denies that he consumed any more. I therefore need at least a metering program that is password-protected and/or logs when it has been turned off. Even better would be a program that simply cuts his access when he exceeds his share, i.e. a mixture of traffic meter and parental control. Can someone help me out here?

    Read the article

  • Suhosin per-URL exceptions?

    - by STATUS_ACCESS_DENIED
    I am using SimpleID as my OpenID provider, and it turns out that if I log on via pages like those on StackExchange, one of the parameters of the GET request gets dropped by Suhosin. The name of the variable is s, and I presume it's responsible for the "return to URL" part after login. None of this is a problem as long as I am already logged into SimpleID from before. However, as soon as the site on which I want to log in via OpenID ends up at the SimpleID login screen, the redirect back to the site I came from no longer works because of the dropped variable. Is there a way to configure, on a per-virtual-host or per-URL basis, that the length limit is relaxed for GET requests whose parameter s exceeds the (globally) set limit? I'm using Apache 2.2, so I was wondering whether a mechanism similar to setting PHP ini variables from within the server configuration exists for Suhosin.
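
    Suhosin is configured through PHP ini directives, so one possibility, sketched below, is to raise the relevant limits for just that vhost or directory. Whether these particular directives can be overridden at this level depends on the build (check the changeable mode in the Suhosin docs), and the host name, path, and values are placeholders:

        <VirtualHost *:80>
            ServerName openid.example.org
            DocumentRoot /var/www/simpleid

            <Directory /var/www/simpleid>
                # Raise the Suhosin GET limits for this site only (values are examples).
                php_admin_value suhosin.get.max_value_length 2048
                php_admin_value suhosin.get.max_name_length 1024
            </Directory>
        </VirtualHost>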

    Read the article

  • Password History Storage and Variability Comparison

    - by z3ke
    I believe this situation is similar to many others out there, so maybe some of you can shed some light. Supposedly, when making password changes through MS Exchange every 90 days, you cannot use any simple variation of one of your old passwords, up to whatever history limit the admins set for the system. My question: if your previous passwords are only stored as hashes, how can they check for the "just changed one letter" case? Wouldn't they have to have access to the old plain-text passwords in order to make those comparisons? The only other thing I can think of is that upon original creation of a password, they also store all other one-character permutations of it, so that those can be banned later.

    Read the article

  • How to use length indicator in a C++ program

    - by cj
    I want to write a program in C++ that reads a file where each field is preceded by a number indicating how long the field is. The problem is that I read every record into an object of a class; how do I make the attributes of that class dynamically sized? For example, if the field is "john", it should be read into a 4-character array. I don't want to allocate an array of 1000 elements, as minimal memory usage is very important.
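
    A minimal sketch of one way to do this, assuming a "<length> <data>" layout for each field: std::string resizes itself to the length indicator, so no fixed 1000-element buffer is needed. The file name and field format are assumptions for illustration.

        // length_fields.cpp - read length-prefixed fields into dynamically sized strings.
        #include <fstream>
        #include <iostream>
        #include <string>

        // Reads one "<length> <data>" field, e.g. "4 john", from the stream.
        bool readField(std::istream &in, std::string &out) {
            std::size_t len = 0;
            if (!(in >> len)) return false;   // read the length indicator
            in.get();                         // skip the single separator character
            out.resize(len);                  // allocate exactly `len` characters
            in.read(&out[0], static_cast<std::streamsize>(len));
            return static_cast<std::size_t>(in.gcount()) == len;
        }

        int main() {
            std::ifstream file("records.txt");
            std::string field;
            while (readField(file, field)) {
                std::cout << field << '\n';
            }
        }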

    Read the article

  • Jail user to home directory while still allowing permission to create and delete files/folders

    - by Sevenupcan
    I'm trying to give a client SFTP access to the root directory of their site on my server (Ubuntu 10.10) so they can manage their website themselves. While I have been successful in jailing a user to a directory and giving them SFTP access, they are only allowed to create and delete files in subdirectories (the directories they own). This means that I must give them access to the parent of their site's root directory. How can I limit them to the root of their site (for example public_html) while still allowing them to create and delete files? All the tutorials I have read say that root must own the user's home directory, which prevents the user from writing inside that directory. I'm relatively new to managing my own server, so any advice would be gratefully received. Many thanks.
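
    A commonly used layout, sketched here for /etc/ssh/sshd_config with OpenSSH's internal-sftp: the chroot target stays root-owned (as OpenSSH requires), and the writable web root is a subdirectory owned by the client. The user name and paths below are placeholders.

        # Sketch for /etc/ssh/sshd_config (user name and paths are placeholders).
        Subsystem sftp internal-sftp

        # The chroot target must be owned by root and not group/world-writable.
        Match User clientuser
            ChrootDirectory /home/clientuser
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

        # Inside the jail, /home/clientuser/public_html is owned by clientuser,
        # so the client can create and delete files and folders there.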

    Read the article

  • C99 variable length automatic array performance

    - by aaa
    Is there significant CPU/memory overhead associated with using automatic (variable-length) arrays with g++/Intel on a 64-bit x86 Linux platform? int function(int N) { double array[N]; ... } Specifically: overhead compared to allocating the array beforehand (assuming the function is called multiple times); overhead compared to using new; overhead compared to using malloc. The range of N is roughly 1 KB to 16 KB, and stack overrun is not a problem.
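
    For concreteness, a small sketch of the three alternatives being compared; the fill-and-sum work and the size are placeholders so the buffers are actually used:

        #include <cstdio>
        #include <cstdlib>

        // Placeholder work so the compiler cannot discard the buffer.
        static double fill_and_sum(double *p, int n) {
            double s = 0.0;
            for (int i = 0; i < n; ++i) { p[i] = i; s += p[i]; }
            return s;
        }

        static double with_vla(int n) {
            double array[n];               // C99 VLA; g++ and ICC accept this as an extension
            return fill_and_sum(array, n); // allocation is roughly one stack-pointer adjustment
        }

        static double with_new(int n) {
            double *array = new double[n]; // heap allocation through operator new
            double s = fill_and_sum(array, n);
            delete[] array;
            return s;
        }

        static double with_malloc(int n) {
            double *array = static_cast<double *>(std::malloc(n * sizeof(double)));
            double s = fill_and_sum(array, n);
            std::free(array);
            return s;
        }

        int main() {
            int n = 2048;                  // 16 KB of doubles, the upper end of the question's range
            std::printf("%f %f %f\n", with_vla(n), with_new(n), with_malloc(n));
        }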

    Read the article

  • I can't open a Word file because it's too large

    - by Jane
    I was creating a file with MS Word 2007 in which I included a number of images. I didn't compress them as I was putting them into the file. I managed to save the file, but have not been able to reopen it since, as it says that I have exceeded the 32 MB limit. I am working on an old MacBook (OS X 10.4.11). I have tried to open the file in both OpenOffice and LibreOffice, but it just causes those programs to crash. Is there any way of reducing the file size without opening the document?

    Read the article

  • Visual Studio Solution: static or shared projects?

    - by goodrone
    When a whole project (solution) consists of multiple subprojects (.vcproj), what is the preferable way to tie them together: as static libraries or as shared libraries? Assuming those subprojects are not used elsewhere, the shared-library approach shouldn't decrease memory usage or load time.

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (adding, removing, or renaming files within the iterated directory) which modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt fileb.txt filec.txt. The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt when yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead was able to somehow maintain an index dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist). I've edited the paragraph in question above.
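
    As a hedged sketch for newer Python (3.6+), os.scandir yields directory entries lazily rather than materializing the whole listing the way os.listdir does; the source and destination paths and the per-file processing are placeholders for the process-B use case:

        import os
        import shutil

        SRC = "/data/incoming"    # hypothetical location process A writes to
        DST = "/data/processed"   # hypothetical location process B moves files to

        def iter_files(path):
            """Yield one filename at a time. Whether entries added, removed, or
            renamed during iteration show up is unspecified (it follows the
            platform's readdir() behaviour), but the listing itself stays lazy."""
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_file():
                        yield entry.name

        for name in iter_files(SRC):
            # ... do the per-filename processing here ...
            shutil.move(os.path.join(SRC, name), os.path.join(DST, name))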

    Read the article

  • config.nt does not exist in Windows Server 2008 64-bit

    - by user1853266
    I have a server with high traffic, about 20K requests/sec at peak time and 5-10K requests/sec at normal times, but I have a serious problem. I installed nginx on Windows Server 2008 Standard Edition 64-bit, but I get this error message: 2012/11/26 05:29:23 [error] 2496#2004: *3976 maximum number of descriptors supported by select() is 1024 while reading client request line, client: X.X.X.X, server: 0.0.0.0:8080. When I searched, I found that this problem is about the DOS application file-handle limit, which can be changed in c:\windir\system32\Config.nt, but Config.nt does not exist. I have also heard that this file does not exist in 64-bit OS versions, so how can I change the file-handle limit on Windows Server 2008 64-bit?

    Read the article

  • If you stick to standard coding in .NET, is there reason to manually invoke the GC or run finalizers

    - by Matt
    If you stick to managed code and standard coding (nothing that does unconventional things with the CLR) in .NET, is there any reason to manually invoke the GC or request to run finalizers on unreferenced objects? The reason I ask is that I have an app whose working set grows huge. I'm wondering if calling System.GC.Collect(); and System.GC.RunFinalizers(); would help, and if it would force anything that wouldn't be done by the CLR normally anyway.

    Read the article
