Search Results

Search found 12267 results on 491 pages for 'memory'.


  • How does implementing multiple COM interfaces work in C++?

    - by Martin
    I am trying to understand this example code regarding Browser Helper Objects. Inside, the author implements a single class which exposes multiple interfaces (IObjectWithSite, IDispatch). His QueryInterface function performs the following:

        if (riid == IID_IUnknown)
            *ppv = static_cast<BHO*>(this);
        else if (riid == IID_IObjectWithSite)
            *ppv = static_cast<IObjectWithSite*>(this);
        else if (riid == IID_IDispatch)
            *ppv = static_cast<IDispatch*>(this);

    I have learned that, from a C perspective, interface pointers are just pointers to vtables. So I take it to mean that C++ can return the vtable of any implemented interface using static_cast. Does this mean that a class constructed in this way has a bunch of vtables in memory (IObjectWithSite, IDispatch, etc.)? What does C++ do with the name collisions between the different interfaces (they each have QueryInterface, AddRef and Release functions)? Can I implement different methods for each of these?
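
    A minimal, compilable sketch (assuming C++11; the interface names below are made-up stand-ins, not the real COM types) of what the compiler does with such a class: each abstract base contributes its own vtable pointer, static_cast selects the matching base subobject, and a single override satisfies a virtual name that collides across bases.

        #include <iostream>

        // Two "interfaces" that collide on Release(), like COM interfaces do.
        struct IFoo {
            virtual void Release() = 0;
            virtual void Foo() = 0;
            virtual ~IFoo() {}
        };
        struct IBar {
            virtual void Release() = 0;   // same name as IFoo::Release
            virtual void Bar() = 0;
            virtual ~IBar() {}
        };

        // One class, one Release(): the single override satisfies both bases,
        // so you cannot give each interface its own Release() body directly.
        struct Impl : IFoo, IBar {
            void Release() override { std::cout << "shared Release\n"; }
            void Foo() override {}
            void Bar() override {}
        };

        int main() {
            Impl obj;
            // Each cast adjusts 'this' to a different base subobject, each
            // carrying its own vtable pointer, so the addresses usually differ:
            std::cout << static_cast<IFoo*>(&obj) << "\n"
                      << static_cast<IBar*>(&obj) << "\n";
            static_cast<IBar*>(&obj)->Release();  // dispatches to Impl::Release
        }

    If each interface really must behave differently for a colliding name, the usual COM workaround is to implement the interfaces on separate inner classes (so-called tear-offs) that forward to the outer object.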


  • addObjectsFromArray

    - by derek.lo
    I have a question about memory management when using the addObjectsFromArray method. Basically, I have 2 arrays defined in the appDelegate. I need these 2 arrays for the duration of my application's runtime; I therefore release them in my appDelegate's dealloc method. When I go to use these two arrays in a class, I want one array to store the values from the other, so that the other can have its contents removed but still stick around for use. Something like this:

        [appDelegate.arrayTwo addObjectsFromArray:appDelegate.arrayOne];
        [appDelegate.arrayOne removeAllObjects];

    I'm getting EXC_BAD_ACCESS at runtime. Is it because of a pointer issue? A retaining issue? Any help would be greatly appreciated! Thanks


  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_" or some other delimiter. The users want to be able to search for any of the substrings, like "aaa" or "ddd", and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which emits multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff" and all other combinations of the substrings. This works pretty well, but when I index large documents (100MB) that contain lots of such strings, I tend to get out-of-memory exceptions because my filter returns multiple terms for each input string. Is there a better way to index these strings?


  • What interprocess locking calls should I monitor?

    - by Matt Joiner
    I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock. While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for. Currently my only suspect is futex(), which comes up very early in the process' execution. Update0: There is some confusion about what I'm after. I'm monitoring an existing process for calls that touch a persistent interprocess lock (or equivalent). I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries implement their locking calls in terms of it, etc. Update1: I'd like a list of function names, or a link to documentation, that I should monitor at the ltrace and strace levels (specifying which). Any other advice on how to track down the global lock I have in mind would be great.
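
    For reference, a minimal sketch (assuming Linux/POSIX) of one classic primitive such monitoring would surface: a POSIX record lock, which shows up under strace as open() followed by fcntl(F_SETLKW). Other call families worth watching at the strace level: flock(), semget()/semop() (SysV semaphores), sem_open()/sem_wait() (POSIX semaphores), shmget() or mmap() combined with futex() (locks placed in shared memory), and open() with O_CREAT|O_EXCL (lock files); at the ltrace level, pthread_mutex_lock() on a mutex living in shared memory.

        // Taking an exclusive, kernel-mediated interprocess lock on a file.
        #include <fcntl.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            int fd = open("/tmp/example.lock", O_RDWR | O_CREAT, 0666);
            if (fd < 0) { perror("open"); return 1; }

            struct flock fl {};
            fl.l_type   = F_WRLCK;    // exclusive write lock
            fl.l_whence = SEEK_SET;
            fl.l_start  = 0;          // from offset 0...
            fl.l_len    = 0;          // ...to EOF: the whole file

            if (fcntl(fd, F_SETLKW, &fl) < 0) {  // blocks until the lock frees
                perror("fcntl");
                return 1;
            }
            // ... critical section shared across processes ...

            fl.l_type = F_UNLCK;
            fcntl(fd, F_SETLK, &fl);             // release the lock
            return close(fd);
        }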


  • PHP: how to stop ignore_user_abort, and is it a good solution for a long-running program?

    - by user192344
    Let's say I have an email-sending program which needs to run for around 7 hours, but I can't keep the browser open for 7 hours. Besides a cron job, would ignore_user_abort() be a solution? Will the script stop when all emails have been sent and the program has finished the loop, or will it keep eating the server's memory? Some people said you may need to add some output at the end of the program to avoid it running forever; some people also said that echoing a little bit of string will not stop the script, and that you have to use ob_flush. Any example of this?


  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are:
    1) How would you create multiple managed modules when building a project such as a class library or a console app?
    2) Is there a way to tell the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so?
    3) Can managed modules span assemblies?
    4) Are separate files created on disk when the source code is compiled, or are they created in memory and directly embedded in an assembly?


  • Prevent GridView from loading all views at once

    - by efpies
    I have about 700 items to display in the grid view. On a Samsung Galaxy Tab 10.1 this is not a problem: it has enough memory. On an HTC Explorer, the heap overflows. So I want to load data dynamically based on the current scroll position (N rows per screen + 5 rows as a tail), and I want to show a scrollbar which represents the position within the total rows. But I don't want to draw items that I don't see. In other words, I want to create something similar to UITableView in iOS. How can I do this?


  • Scala REPL is slow on Vista

    - by Jacques René Mesrine
    I installed scala-2.8.0.RC3 by extracting the tgz file into my Cygwin (Vista) home directory. I made sure to add scala-2.8.0.RC3/bin to $PATH. I start the REPL by typing:

        $ scala
        Welcome to Scala version 2.8.0.RC3 (Java HotSpot(TM) Client VM, Java 1.6.0_20).
        Type in expressions to have them evaluated.
        Type :help for more information.
        scala>

    Now when I try to enter an expression

        scala> 1 + 'a'

    the cursor hangs there without any response. Granted, I have Chrome open with a million tabs and VLC playing in the background, but CPU utilization was 12% and virtual memory was about 75% utilized. What's going on? Do I have to set the CLASSPATH or perform other steps?


  • Change connection on the fly

    - by aron
    Hello, I have a SQL Server instance with 50 databases. Each one has exactly the same schema. I used the awesome SubSonic 2.2 to create the DAL based on one of them. I need to loop through a list of database names, connect to each one, and perform an update, one at a time. Is there a way to alter how SubSonic uses the connection string? I believe I would need to store the connection string in memory so that it can keep changing. Is this possible? I tried doing

        ConfigurationManager.ConnectionStrings["theConnStrName"].ConnectionString = updated-connection-string-here;

    but that did not work. Thanks!


  • Are there any Python reference counting/garbage collection gotchas when dealing with C code?

    - by Jason Baker
    Just for the sheer heck of it, I've decided to create a Scheme binding to libpython so you can embed Python in Scheme programs. I'm already able to call into Python's C API, but I haven't really thought about memory management. The way mzscheme's FFI works is that I can call a function, and if that function returns a pointer to a PyObject, then I can have it automatically increment the reference count. Then, I can register a finalizer that will decrement the reference count when the Scheme object gets garbage collected. I've looked at the documentation for reference counting, and don't see any problems with this at first glance (although it may be sub-optimal in some cases). Are there any gotchas I'm missing? Also, I'm having trouble making heads or tails of the cyclic garbage collector documentation. What things will I need to bear in mind here? In particular, how do I make Python aware that I have a reference to something so it doesn't collect it while I'm still using it?
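
    For context, a short sketch of the wrap/finalize pattern described above, written against the real CPython C API (Py_INCREF/Py_DECREF); the SchemeHandle type is a made-up stand-in for whatever mzscheme's FFI would actually manage, and C++ stands in for the FFI glue.

        // Hold a strong reference while a foreign GC owns the wrapper,
        // and drop it in the wrapper's finalizer.
        #include <Python.h>

        struct SchemeHandle {       // hypothetical wrapper owned by the Scheme GC
            PyObject* obj;
        };

        SchemeHandle* wrap_pyobject(PyObject* obj) {
            Py_INCREF(obj);         // strong reference: CPython will not free
                                    // obj while this count is outstanding
            return new SchemeHandle{obj};
        }

        // Registered to run when the Scheme object is garbage collected.
        void finalize_handle(SchemeHandle* h) {
            Py_DECREF(h->obj);      // drop our strong reference
            delete h;
        }

        int main() {
            Py_Initialize();
            PyObject* list = PyList_New(0);    // new reference
            SchemeHandle* h = wrap_pyobject(list);
            Py_DECREF(list);   // give up the creation reference; the handle's
                               // reference alone keeps the list alive
            finalize_handle(h);                // now the list can be reclaimed
            Py_Finalize();
        }

    This bears on the last question directly: as long as the finalizer's Py_DECREF has not run, the refcount stays above zero and neither the refcounting collector nor the cycle detector will reclaim the object; the cycle detector only frees objects whose remaining references are internal to a garbage cycle.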


  • Paging in ActiveRecord

    - by Alex
    Does Castle Project ActiveRecord support paging? I need to load only the data that is currently seen on the screen. If I use [HasMany], the collection will be loaded as a whole, either immediately or at the first call (if the lazy attribute is true). However, I only need something like the first 100 records, then maybe the next 100. A related question is how to load only those 100 items: if the collection is too big, memory can reach its limit if we constantly load more and more items.


  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:
    - When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    - There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown event the next morning, when I have to hard-reset the server to get it to boot.
    - 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard-reset it and try again.
    - Even after a successful boot and a chkdsk /r operation, if you reboot the machine, there is a 90% chance it won't come back up cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up, because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users over to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour, every day, for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPed into the file server and also into the server running Backup Exec. On the file server I opened Task Manager so I could view the processes and watch CPU and memory usage. Everything ran smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage: both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. In truth, I think the server had already locked up; the video card just hadn't figured it out yet.

    I went back to my bag of tricks: driving to the office and hard-resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking, because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD, and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue-screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.


  • Changes to the user permissions are not saving

    - by anru
    I am using Drupal 6. It seems the permissions page cannot save too many settings: I try to save the permission settings, but they are just not saved to the DB. I have found out this is due to "too many fields" (I use the Content Permissions module). If I uncheck some fields and then check fewer fields, the permissions will be saved. For example, if I uncheck 2 checkboxes and then check one checkbox, the permissions will be saved. Does anyone know which function the permissions page uses to insert the result into the DB? My PHP memory limit is 256M.


  • WCF high instance count: does anyone know of negative side effects?

    - by Alex
    Hi there! Has anyone experienced, or does anyone know of, negative side effects from having a high service instance count, like 60k - aside from the memory consumption, of course? I am planning to increase the threshold for the maximum allowed instance count in our production environments. I am basically sick of severe production incidents just because "something" forgot to close a proxy properly. I plan to go to something like 60k instances, which will allow the service to survive at an average call rate for our clients while using the default session timeouts. Thanks, Alex


  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics:
    - all data put into it is persistent
    - all reads are delivered out of memory
    - data is universally available
    - data lives where it is most needed
    - data is versioned (nice to have)
    - updates are transactional (I'd like ACID characteristics)
    - data is potentially replicated, but always in sync
    - works on Windows
    - is based on or has bindings for .NET
    - is really fast
    - is really robust
    - is redundant
    - is scalable

    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.


  • vector<vector<largeObject>> vs. vector<vector<largeObject>*> in C++

    - by Leif Andersen
    Obviously it will vary depending on the compiler you use, but I'm curious about the performance issues of vector<vector<largeObject>> vs. vector<vector<largeObject>*>, especially in C++. Specifically: let's say that the outer vector is full and you want to start inserting elements into the first inner vector. How will that be stored in memory if the outer vector is just storing pointers, as opposed to storing the whole inner vector? Will the whole outer vector have to be moved to gain more space, or will the inner vector be moved (assuming that space wasn't pre-allocated), causing problems with the outer vector? Thank you
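
    A small experiment (a sketch, assuming a C++11 compiler) that illustrates the layout: the outer vector's buffer holds only small vector headers (pointer, size, capacity), while each inner vector's elements live in their own heap block, so growing an inner vector never moves the outer buffer.

        #include <iostream>
        #include <vector>

        struct LargeObject { char payload[4096]; };

        int main() {
            // Outer vector "full" of eight empty inner vectors.
            std::vector<std::vector<LargeObject>> outer(8);

            const void* outer_buf_before = outer.data();
            outer[0].reserve(1000);   // grow the first inner vector:
            outer[0].resize(500);     // reallocates only its own heap block
            const void* outer_buf_after = outer.data();

            // Inner growth did not touch the outer buffer:
            std::cout << (outer_buf_before == outer_buf_after) << "\n";  // 1

            // What does move things is reallocating the outer vector itself:
            outer.emplace_back();     // may relocate the vector headers
            return 0;
        }

    The pointer variant mattered most before move semantics: in C++03, reallocating the outer vector copied every element of every inner vector, whereas vector<vector<largeObject>*> copied only pointers. With C++11 moves, outer reallocation relocates just the inner headers either way.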


  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time-series data: 500M entries now, and 200K new records every day. The total size is around 15GB for now. My clients mostly query the table via a PHP script, and the size of the result set is around 10K records (not very large).

        select * from T where timestamp > X and timestamp < Y and additionalFilters

    And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single box with 16G of memory, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves:
    1. Query: 90%
    2. Insert: 9.9%
    3. Update: 0.1% <- very rare.


  • #define vs enum in an embedded environment (How do they compile?)

    - by Alexander Kondratskiy
    This question has been done to death, and I would agree that enums are the way to go. However, I am curious as to how enums compile in the final code: #defines are just string replacements, but do enums add anything to the compiled binary, or are they both equivalent at that stage? When writing firmware, where memory is very limited, is there any advantage, no matter how small, to using #defines? Thanks! EDIT: As requested by the comment below: by embedded, I mean a digital camera. Thanks for the answers! I am all for enums!
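
    A quick way to see it (a sketch; compare the assembly with something like g++ -Os -S): an optimizing compiler folds both forms into the same immediate operand, so neither one costs flash or RAM for the constant itself; the enum just adds a typed name that survives into the debug info, not into the stripped binary.

        #include <cstdint>

        #define BUFFER_SIZE_DEFINE 64          // textual substitution

        enum { BUFFER_SIZE_ENUM = 64 };        // typed compile-time constant

        // Both are valid compile-time array bounds; neither emits an object.
        uint8_t buf_a[BUFFER_SIZE_DEFINE];
        uint8_t buf_b[BUFFER_SIZE_ENUM];

        int scaled_define(int x) { return x * BUFFER_SIZE_DEFINE; }
        int scaled_enum(int x)   { return x * BUFFER_SIZE_ENUM; }
        // Typical generated code for both functions is identical: a single
        // shift or multiply by the immediate 64.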


  • Clear HTML webpage?

    - by noname
    I am currently developing a crawler that crawls all links on the web and displays them in the web browser (and saves them, of course). But after some hours there will be a huge list displayed in the web browser, and I want to display only, let's say, 1000 links at a time, then clear the HTML and display the next 1000 links. This is also good for the RAM; otherwise it will eat up all the memory. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does this have anything to do with my case?


  • Clone LINQ to SQL object extension method throws ObjectDisposedException

    - by gtas
    Hello all, I have this extension method for cloning my LINQ to SQL objects:

        public static T CloneObjectGraph<T>(this T obj) where T : class
        {
            var serializer = new DataContractSerializer(typeof(T), null, int.MaxValue, false, true, null);
            using (var ms = new System.IO.MemoryStream())
            {
                serializer.WriteObject(ms, obj);
                ms.Position = 0;
                return (T)serializer.ReadObject(ms);
            }
        }

    But when I carry objects that don't have all references loaded (querying with DataLoadOptions), it sometimes throws an ObjectDisposedException, even though I don't ask for references that are not loaded (null). E.g. I have a Customer with many references, and I just need to carry the Address reference (EntityRef<>) in memory; I don't load anything else. But when I clone the object, this exception forces me to load all the EntitySet<> references along with the Customer object, which might be too much and slow down the application. Any suggestions?


  • How to solve the performance decay of a VB.NET 1.1 application?

    - by marco.ragogna
    I have a single-threaded Windows Forms application written in VB.NET and targeting Framework 1.1. The software communicates with external boards through a serial interface, and it mainly consists of a state machine that runs some tests, driven in a loop by a Timer with an Interval of 50ms. The feedback on the user interface is done through some custom events raised during the tests. The problem that is driving me crazy is that the performance slightly decreases over time, in particular after 1200/1300 test operations. The memory occupied does not increase over time; it is only the CPU that seems affected by this problem. The strange thing is that, targeting Framework 2.0 and using the identical code, I do not have this problem. I know that it is difficult without looking at the code, but do you have suggestions for how I can approach the problem?


  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    I posted this on Super User, but I was hoping the pros here at SO might have a good idea about how to fix it as well. Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools, based on GNU make tools, that are called when creating an executable. This is the error that I see whenever I call my external tool:

        ...\gnu\make.exe: *** couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?


  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up. SO references:
    - autorelease-iphone
    - Why does this create a memory leak (iPhone)?

    Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?


  • Application crashes on the iPad but works fine in the iPad Simulator

    - by srikanth rongali
    I am writing a game in cocos2d. The application runs fine in the iPad Simulator, but it crashes when running on the iPad, printing the following message in the terminal. I am using 2048x2048 CCSpriteSheets in my code. Using the Instruments tool, I see a sudden increase in memory to 32MB before the crash. It is crashing at CCSpriteFrameCache.

        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-6258-64
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        continue
        The program is not being run.
        The program is not being run.

    Thank you.


  • Pinterest Gridview implementation on iOS

    - by Amal
    I want to implement a grid view like the one in Pinterest. I thought about implementing it as 3 table views, but I was not able to scroll them together well: when I implemented scrollViewDidScroll and set the contentOffset for the table views other than the scrolled one, the scrolling became slow and unusable. Another implementation I tried was having a set of images to load and calling a draw function in scrollViewDidScroll. The draw function draws only the necessary images and removes from memory the images that were already drawn but won't be visible. This too makes the scroll view slow, and another issue with it is that there are white (background-color) patches before the images are drawn. What would be the best way to implement this grid view?

