Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, HyperThreaded CPU running at 2.66 GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Having received the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow and I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s (Intel Core 2 Duo P8700 running at 2.54 GHz with 8 GB of memory), and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster than the M6400 in all categories, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html): 3100M = 111th (E6510), FX 2700M = 47th (Precision M6400), Radeon HD 5870 = 8th (Alienware).

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops on which we expect to run virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • What is the best way to implement an object cache with Entity Framework?

    - by Harshal
    Say I have a table of "BlogPosts" in a database, and I want to be able to cache the ones that have already been retrieved in memory for further reads. I can just use a standard hashtable-style memory cache like System.Web.Caching.Cache, but if I then need to update a property on one of these blog posts, e.g. blogPost.Title, and update the record in the DB, I cannot do this without fetching it again from the database, because the Entity Framework context used to fetch this record when it was loaded into my cache has already been disposed. How do I write code so that I can get an object from my cache, update one property, and just call the SaveChanges method without incurring an extra read?
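    A minimal sketch of one common approach, assuming an ObjectContext-based EF 4 model; BlogContext, BlogPosts and the blogPost variable are illustrative names, not from the original post. The cached, detached entity is attached to a fresh context and marked as modified, so SaveChanges issues an UPDATE without a prior SELECT.

        // Sketch only: attach the cached (detached) entity and mark it modified.
        void UpdateCachedPost(BlogPost blogPost, string newTitle)
        {
            blogPost.Title = newTitle;                      // mutate the cached copy

            using (var context = new BlogContext())
            {
                context.BlogPosts.Attach(blogPost);         // no database read here
                context.ObjectStateManager.ChangeObjectState(
                    blogPost, System.Data.EntityState.Modified);
                context.SaveChanges();                      // UPDATE without re-fetching
            }
        }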

    Read the article

  • Java Heap Overflow, Forcing Garbage Collection

    - by Nicholas
    I've created a trie with an array of children. When deleting a word, I set the children to null, which I would assume deletes the node (delete is a relative term). I know that null doesn't delete the child, it just sets the reference to null, which, when using a large number of words, causes the heap to overflow. Running top on Linux, I can see my memory usage spike to 1 GB pretty quickly, but if I force garbage collection after the delete (Runtime.gc()) the memory usage drops to 50 MB and never goes above that. From what I'm told, Java by default runs garbage collection before a heap overflow happens, but I can't seem to make that happen.
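    A minimal Java sketch of the deletion idea (the node layout is assumed, not the original poster's code). Dropping the reference is all that is required to make the subtree collectable; the JVM only throws OutOfMemoryError after a full collection has failed to free enough space, and Runtime.getRuntime().gc()/System.gc() is merely a hint, not a requirement.

        class TrieNode {
            TrieNode[] children = new TrieNode[26];
            boolean isWord;
        }

        class Trie {
            final TrieNode root = new TrieNode();

            // Once no live path reaches the subtree it is eligible for GC;
            // no explicit gc() call should be needed to avoid OutOfMemoryError.
            void deleteSubtree(TrieNode parent, char c) {
                parent.children[c - 'a'] = null;
            }
        }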

    Read the article

  • How can I do daily backups for my VisualSVN Repos?

    - by Tyler
    How can I do daily backups of my VisualSVN repositories? It's on a Windows Server 2003 machine with VisualSVN Server. I was thinking about just doing an xcopy of the folder C:\Repo, but I'm not familiar enough with SVN to know if that will cause issues. Should I use dump or hotcopy, or both?
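    A hedged sketch of a nightly batch job using Subversion's own tools rather than a raw xcopy (the paths below are illustrative, not from the original post; VisualSVN Server ships svnadmin in its bin folder). hotcopy makes a consistent copy of the whole repository even while the server is running, while dump writes a portable, version-independent stream.

        @echo off
        rem Illustrative paths - adjust to the real install and repository locations.
        set SVNADMIN="C:\Program Files\VisualSVN Server\bin\svnadmin.exe"

        rem Consistent full copy of the repository, safe to run while the server is up.
        rem (Rotate or remove the previous backup first; hotcopy expects a fresh target.)
        %SVNADMIN% hotcopy C:\Repositories\MyRepo D:\Backups\MyRepo

        rem Optional portable dump stream (handy for migrating between SVN versions).
        %SVNADMIN% dump C:\Repositories\MyRepo > D:\Backups\MyRepo.dump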

    Read the article

  • Windows Service Limit Crashes Services on Startup

    - by Paul Williams
    We have developed a custom Windows service in C# as part of a large enterprise application. Our QA department tests multiple versions of this service. The QA lab has several (over 20) copies of this service installed on one Windows 2003 test box. Each copy is in its own folder and has a unique service name, though each executable file is named the same (OurWindowsService.exe, for example). Each service uses the same Windows credentials (a domain user). The purpose of this service is to handle MSMQ messages. The queued messages do all sorts of important stuff.

    For some reason, they can run only 5 of these services at a time. When we start a 6th, the service crashes on startup. For example, I can start #1, #2, #3, #4, and #5. When I start #6, it crashes. However, if I stop #1 and start #6, #6 runs fine, and now #1 fails to start. When the services crash, the following error appears in the Windows event log:

        Faulting application OurWindowsService.exe, version 5.40.1.1, faulting module kernel32.dll, version 5.2.3790.4480, fault address 0x0000bef7.

    I was able to use WinDbg to generate a postmortem dump file. The dump file revealed that the crash occurs trying to delay-load SHLWAPI.dll:

        0:000> kb100
        ChildEBP RetAddr  Args to Child
        0012ece4 79037966 c06d007e 00000000 00000001 KERNEL32!RaiseException+0x53
        0012ed4c 790099ba 00000008 0012ed08 7c82860c mscoree!__delayLoadHelper2+0x139
        0012ed98 790075b1 001550c8 0012edac 0012fb34 mscoree!_tailMerge_SHLWAPI_dll+0xd
        0012edb0 79007623 001550c8 0012edf8 0012edf4 mscoree!XMLGetVersionWithSupported+0x22
        0012ee00 790069a4 aa06f1b0 00000000 000001fe mscoree!RuntimeRequest::GetRuntimeVersion+0x56
        0012f478 790077aa 00000001 7903fb4c 0012fb34 mscoree!RuntimeRequest::ComputeVersionString+0x5bd
        0012f89c 79007802 00000001 0012f8b4 7903fb4c mscoree!RuntimeRequest::FindVersionedRuntime+0x11c
        0012f8b8 79007b19 00000001 00000000 aa06fa6c mscoree!RuntimeRequest::RequestRuntimeDll+0x2c
        0012ffa4 79007c02 00000001 0012ffbc 00000000 mscoree!GetInstallation+0x72
        0012ffc0 77e6f23b 00000000 00000000 7ffdf000 mscoree!_CorExeMain+0x12
        0012fff0 00000000 79007bf0 00000000 78746341 KERNEL32!BaseProcessStart+0x23

    I believe the error code handed to Kernel32.RaiseException, c06d007e, means Module Not Found, but I'm not certain. Does this sound familiar to anyone? Are we hitting some limit on the number of service instances sharing one file name? Does MSMQ dislike more than 5 listening services?

    Read the article

  • ASP.NET: how to get the Cache size in KB used for this application?

    - by eugeneK
    I need to know the Cache size. I've read a solution on this site for a similar problem, but it doesn't solve mine. As far as I know I can get values from PerfMon; here is the function:

        public static string getCacheSize()
        {
            PerformanceCounter pc = new PerformanceCounter("ASP.NET Applications",
                "Cache % Machine Memory Limit Used", "__TOTAL__", true);
            return string.Format("{0:0.00}%", pc.NextValue());
        }

    1. It gives me a percentage when I need KB, and there is no item closer to what I need in PerfMon.
    2. It shows 70.5% used while overall memory usage is about 50%.

    Any help?
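    A hedged sketch of one way to turn that percentage into an approximate figure in KB. It assumes the counter is measured against the cache's effective private-bytes limit, which Cache.EffectivePrivateBytesLimit exposes; that assumption may not hold exactly for every configuration, so treat the result as an estimate rather than an exact cache size.

        using System.Diagnostics;
        using System.Web;

        public static string GetApproximateCacheSizeKb()
        {
            var pc = new PerformanceCounter("ASP.NET Applications",
                "Cache % Machine Memory Limit Used", "__TOTAL__", true);

            double percentUsed = pc.NextValue();                            // e.g. 70.5
            long limitBytes = HttpRuntime.Cache.EffectivePrivateBytesLimit; // cache limit in bytes

            double usedKb = (percentUsed / 100.0) * limitBytes / 1024.0;
            return string.Format("{0:0} KB (approx.)", usedKb);
        }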

    Read the article

  • How do I use Eval() to reference values in a SortedDictionary in an asp Repeater?

    - by MatthewMartin
    I thought I was clever to switch from the memory-intensive DataView to SortedDictionary as a memory-efficient sortable data structure. Now I have no idea how to get the key and value out of the data source in the <%# or Eval() expressions.

        SortedDictionary<int, string> data = RetrieveNames();
        rCurrentTeam.DataSource = data;
        rCurrentTeam.DataBind();

        <asp:Repeater ID="rNames" runat="server">
            <ItemTemplate>
                <asp:Label ID="lblName" runat="server" Text='<%# Eval("what?") %>' />
            </ItemTemplate>
        </asp:Repeater>

    Any suggestions?
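    A hedged sketch of one way this can work: when a SortedDictionary<int, string> is bound to a Repeater, each data item is a KeyValuePair<int, string>, so Eval can reference its Key and Value properties directly (the markup below mirrors the question's template; it is not from the original post).

        <asp:Repeater ID="rNames" runat="server">
            <ItemTemplate>
                <%-- Each bound item is a KeyValuePair<int, string> --%>
                <asp:Label ID="lblId" runat="server" Text='<%# Eval("Key") %>' />
                <asp:Label ID="lblName" runat="server" Text='<%# Eval("Value") %>' />
            </ItemTemplate>
        </asp:Repeater>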

    Read the article

  • Node.js for Lua?

    - by Shahbaz
    I've been playing around with node.js (nodejs) for the past few days and it is fantastic. As far as I can tell, Lua doesn't have a similar integration of libev and libeio that lets one avoid almost any blocking calls and interact with the network and the filesystem in an asynchronous manner. I'm slowly porting my Java implementation to nodejs, but I'm shocked that LuaJIT is much faster than V8 JavaScript AND uses far less memory! I imagine writing my server in such an environment (very fast and responsive, very low memory usage, very expressive) would improve my project immensely. Being new to Lua, I'm just not sure if such a thing exists. I'd appreciate any pointers. Thanks

    Read the article

  • String manipulation without allocating memory in C

    - by Mike
    I'm wondering if there is a way of getting a substring without allocating memory. To be more specific, I have a string such as:

        const char *str = "9|0\" 940 Hello";

    Currently I'm getting the 940, which is the substring I want, like this:

        char *a = strstr(str, "9|0\" ");
        char *b = substr(a + 5, 0, 3); // gives me the 940

    where substr is my substring procedure. The thing is that I don't want to allocate memory for this by calling the substring procedure. Is there an easier way, perhaps by doing some string manipulation without allocating memory? I'd appreciate any feedback.
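    A minimal sketch of the usual allocation-free idiom in C (it assumes, as in the question, that the wanted digits sit at a known offset after the prefix): keep a pointer into the original string plus a length, and only copy into a caller-owned stack buffer if a NUL-terminated string is truly required.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *str = "9|0\" 940 Hello";

            const char *p = strstr(str, "9|0\" ");   /* locate the prefix          */
            if (p != NULL) {
                const char *sub = p + 5;             /* points at "940 Hello"      */
                size_t len = 3;                      /* length of the wanted piece */

                /* No allocation: use the pointer + length pair directly. */
                printf("%.*s\n", (int)len, sub);     /* prints "940" */

                /* If a terminated copy is unavoidable, a stack buffer still avoids malloc. */
                char buf[4];
                memcpy(buf, sub, len);
                buf[len] = '\0';
                printf("%s\n", buf);
            }
            return 0;
        }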

    Read the article

  • Strange behavior with gcc inline assembly

    - by Chris
    When inlining assembly in gcc, I find myself regularly having to add empty asm blocks in order to keep variables alive in earlier blocks, for example:

        asm("rcr $1,%[borrow];"
            "movq 0(%[b_],%[i],8),%%rax;"
            "adcq %%rax,0(%[r_top],%[i],8);"
            "rcl $1,%[borrow];"
            : [borrow]"+r"(borrow)
            : [i]"r"(i),[b_]"r"(b_.data),[r_top]"r"(r_top.data)
            : "%rax","%rdx");
        asm("" : : "r"(borrow) : ); // work-around to keep borrow alive
        ...

    Another example of weirdness is that the code below works great without optimizations, but with -O3 it seg-faults:

        ulong carry = 0, hi = 0, qh = s.data[1], ql = s.data[0];

        asm("movq 0(%[b]),%%rax;"
            "mulq %[ql];"
            "movq %%rax,0(%[sb]);"
            "movq %%rdx,%[hi];"
            : [hi]"=r"(hi)
            : [ql]"r"(ql),[b]"r"(b.data),[sb]"r"(sb.data)
            : "%rax","%rdx","memory");

        for (long i = 1; i < b.size; i++) {
            asm("movq 0(%[b],%[i],8),%%rax;"
                "mulq %[ql];"
                "xorq %%r10,%%r10;"
                "addq %%rax,%[hi];"
                "adcq %%rdx,%[carry];"
                "adcq $0,%%r10;"
                "movq -8(%[b],%[i],8),%%rax;"
                "mulq %[qh];"
                "addq %%rax,%[hi];"
                "adcq %%rdx,%[carry];"
                "adcq $0,%%r10;"
                "movq %[hi],0(%[sb],%[i],8);"
                "movq %[carry],%[hi];"
                "movq %%r10,%[carry];"
                : [carry]"+r"(carry),[hi]"+r"(hi)
                : [i]"r"(i),[ql]"r"(ql),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data)
                : "%rax","%rdx","%r10","memory");
        }

        asm("movq -8(%[b],%[i],8),%%rax;"
            "mulq %[qh];"
            "addq %%rax,%[hi];"
            "adcq %%rdx,%[carry];"
            "movq %[hi],0(%[sb],%[i],8);"
            "movq %[carry],8(%[sb],%[i],8);"
            : [hi]"+r"(hi),[carry]"+r"(carry)
            : [i]"r"(long(b.size)),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data)
            : "%rax","%rdx","memory");

    I think it has to do with the fact that it's using so many registers. Is there something I'm missing here, or is register allocation just really buggy with gcc inline assembly?

    Read the article

  • Size of Objects in Java Heap w/ Regards to Methods

    - by Eric
    I know about primitives and objects living on the heap, but how does the number of methods affect the heap size of an object? For example:

        public class A {
            int x;
            public int getX() { return x; }
        }

        public class B {
            int x;
            public int getX() { return x; }
            public String getXString() { return String.valueOf(x); }
            public int doMoreInterestingStuff() { return x * 42; }
            // etc.
        }

    When instantiated, both objects live on the heap and both have memory allocated for their primitive x, but is B allocated more heap space because it has more method signatures? Or do those live ONLY in the class loaded by the classloader? In this example it's trivial, but when there are 100,000+ of these objects in memory at any given time, I imagine it could add up.

    Read the article

  • How to insert HTML into a UIWebView

    - by Mohan Gulati
    I have HTML content which was being displayed in a text view. The next iteration of my app displays the HTML content in a UIWebView, so I basically replaced my UITextView with a UIWebView. However, I could not figure out how to insert my HTML snippet into the view. It seems to need a URLRequest, which I do not want. I have already stored the HTML content in memory and want to load and display it from memory. Any ideas how I should proceed?
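    A hedged sketch of the usual approach: UIWebView can load an in-memory HTML string directly through loadHTMLString:baseURL:, so no NSURLRequest is needed (the variable and property names below are illustrative, not from the original post).

        // htmlString is the HTML content already held in memory.
        NSString *htmlString = self.storedHtmlContent;

        // Load the string directly; baseURL may be nil unless the HTML
        // references relative resources (images, CSS) that need resolving.
        [self.webView loadHTMLString:htmlString baseURL:nil];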

    Read the article

  • Do I need to force the GAC to reload an assembly? Is this possible?

    - by Ben McCormack
    I've added types to the .NET classes that I'm using for COM interop. To get it to work with my VB6 application, I unregistered the DLL and re-registered it (using regasm). I then uninstalled it from and reinstalled it to the GAC (using gacutil). The types show up in the VB6 object explorer, but when I run the application in the VB6 IDE, it breaks on the line that instantiates the new types with the error: "Automation Error - The system cannot find the file specified." I thought this odd since I had already updated the GAC, so I uninstalled the DLL from the GAC and got the exact same error, which seems to indicate that the older version of the DLL is already in memory and needs to be "reloaded" so that the newer DLL is in memory. Is this possible, and if so, what do I need to do?

    Read the article

  • If terminating a hung thread is a good idea, how do I do it safely?

    - by Steve
    My Delphi program relies heavily on Outlook automation. Outlook versions prior to 2007-SP2 tend to get stuck in memory due to badly written addins and badly written Outlook code. If Outlook is stuck, calling CreateOleObject('Outlook.Application') or GetActiveObject ... doesn't return and keeps my application hanging till Outlook.exe is closed in the task manager. I've thought of a solution, but I'm unsure whether it's good practice or not. I'd start Outlook with CreateOleObject in a separate thread, wait 10 seconds in my main thread and if Outlook hangs (CreateOleObject doesn't return), offer the user to kill the Outlook.exe process from my program. But since I don't want to force the user to kill the Outlook.exe process, as an alternative I also need a way to kill the new thread in my program which keeps hanging now. Is this good practice? How can I terminate a hanging thread in Delphi without leaking memory?

    Read the article

  • Flashing and closing PyS60 SIS

    - by Matrich
    I created a Python script for S60 2nd Edition FP3 with a background image, database, and sound files, which are stored in a folder on the C: drive. After several hours I managed to convert it into a SIS file, which I sent to my mobile phone. First I installed the SIS on the external memory card and it failed to work. I read online that it should be on the same drive as the Python runtime, so I installed it on the phone memory instead, but it just flashes and then closes. I made another SIS with only a database and it worked, so I suspect it is because it can't find the background image and sound files in the folder. How can I find out the cause of this? Also, how can I include the background image and sound files in the SIS so that they are installed automatically when installing the SIS file?

    Read the article

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() functions. Is this something called an expando? Or is it using HTML5 Web Storage (although I think this is very unlikely)? The documentation says: "The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks." As I read about expandos, they seem to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data when using .data(). http://api.jquery.com/data/
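    A hedged illustration of the mechanism in jQuery 1.x (simplified; the internal property names vary by version and are not part of the public API): the DOM element only receives a single numeric expando key, and the actual values live in a plain JavaScript cache object keyed by that number, which is what avoids DOM-to-JS circular references.

        // Public usage
        $('#post').data('views', 42);
        console.log($('#post').data('views')); // 42

        // Roughly what happens internally (sketch, not jQuery source):
        // elem[jQuery.expando] = 3;                  // only a number sits on the DOM node
        // jQuery.cache[3] = { data: { views: 42 } }; // the real data lives in this object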

    Read the article

  • How to specify execution time of x86 and PowerPC instructions?

    - by Goofy
    Hello! I have to approximate the execution time of PowerPC and x86 assembly code. I understand that I cannot compute it exactly, since it depends on many factors (the current processor state, the fact that x86 processors decode instructions into micro-operations, memory access time, whether the code is fetched from cache or from slower memory, etc.). I found some information in the Intel Optimization Reference Manual (Appendix C), but it does not provide information about all general-purpose instructions. Is there any complete reference for this? What about PowerPC processors? Where can I find such information?

    Read the article

  • Recover a deleted www folder in Ubuntu 9.04

    - by Al Mubarak
    Hi, I'm using Ubuntu 9.04 and all of my web development files were installed in the www folder. This morning I unfortunately deleted the www folder via the terminal, and I'm worried about it. I don't know how to restore the www folder and the files it contained, and I need to do so as soon as possible. If anyone knows how to rectify this, please let me know. Thanks in advance.

    Read the article

  • pushScene and popScene or replaceScene . which should we use and when ?

    - by srikanth rongali
    I am using pushScene to get to the next scene, but I read that each pushed scene is kept on a stack, so the memory usage is higher. Because of that I am using replaceScene in place of pushScene, but with replaceScene I am getting a memory bad-access message in the debugger. So I want to popScene after using it, so that the retain count goes to zero, but I am confused about using popScene. Say I have Scene1 and Scene2, and I used the following to go to Scene2; now I need to remove Scene1 from the stack:

        [[CCDirector sharedDirector] pushScene:Scene2];

    Where should I call popScene so that Scene1 is popped? And how do I get the previous scene from the currently running scene? Thank you.
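    A hedged sketch of the usual cocos2d-iphone pattern (it assumes each scene class exposes the conventional +scene factory method; Scene2 is just the name from the question): replaceScene: tears down and releases the outgoing scene for you, so nothing is left on the stack to pop, while pushScene:/popScene: are only needed when you genuinely want to return to the previous scene later.

        // Replace the running scene; the director releases the old one.
        [[CCDirector sharedDirector] replaceScene:[Scene2 scene]];

        // Push/pop only when you want to come back to the previous scene:
        [[CCDirector sharedDirector] pushScene:[Scene2 scene]];
        // ... later, from inside Scene2:
        [[CCDirector sharedDirector] popScene];   // resumes the scene underneath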

    Read the article

  • SDL_BlitSurface() not displaying image?

    - by Christian Gonzalez
    So I'm trying to display a simple image with the SDL library, but when I use the function SDL_BlitSurface() nothing happens, and all I get is a black screen. I should also note that I have the .bmp file, the source, and the executable file all in the same directory.

        //SDL Header
        #include "SDL/SDL.h"

        int main(int argc, char* args[])
        {
            //Starts SDL
            SDL_Init(SDL_INIT_EVERYTHING);

            //SDL Surfaces are images that are going to be displayed.
            SDL_Surface* Hello = NULL;
            SDL_Surface* Screen = NULL;

            //Sets the size of the window (Length, Height, Color(bits), Sets the Surface in Software Memory)
            Screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);

            //Loads a .bmp image
            Hello = SDL_LoadBMP("Hello.bmp");

            //Applies the loaded image to the screen
            SDL_BlitSurface(Hello, NULL, Screen, NULL);

            //Update Screen
            SDL_Flip(Screen);

            //Pause
            SDL_Delay(2000);

            //Deletes the loaded image from memory
            SDL_FreeSurface(Hello);

            //Quits SDL
            SDL_Quit();

            return 0;
        }

    Read the article

  • Cleaning up a dynamic array of Objects in C++

    - by Dr. Monkey
    I'm a bit confused about handling an array of objects in C++, as I can't seem to find information about how they are passed around (by reference or by value) and how they are stored in an array. I would expect an array of objects to be an array of pointers to that object type, but I haven't found this written anywhere. Would they be pointers, or would the objects themselves be laid out in memory in an array? In the example below, a custom class myClass holds a string (would this make it of variable size, or does the string object hold a pointer to a string and therefore take up a consistent amount of space?). I try to create a dynamic array of myClass objects within a myContainer. In the myContainer.addObject() method I attempt to make a bigger array, copy all the objects into it along with a new object, then delete the old one. I'm not at all confident that I'm cleaning up my memory properly with my destructors; what improvements could I make in this area?

        class myClass {
            private string myName;
            public unsigned short myAmount;

            myClass(string name, unsigned short amount) {
                myName = name;
                myAmount = amount;
            }
            //Do I need a destructor here? I don't think so because I don't do any
            // dynamic memory allocation within this class
        }

        class myContainer {
            int numObjects;
            myClass * myObjects;

            myContainer() {
                numObjects = 0;
            }

            ~myContainer() {
                //Is this sufficient?
                //Or do I need to iterate through myObjects and delete each
                // individually?
                delete [] myObjects;
            }

            void addObject(string name, unsigned short amount) {
                myClass newObject = new myClass(name, amount);
                myClass * tempObjects;
                tempObjects = new myClass[numObjects+1];
                for (int i=0; i<numObjects; i++)
                    tempObjects[i] = myObjects[i];
                tempObjects[numObjects] = newObject;
                numObjects++;
                delete newObject;
                //Will this delete all my objects? I think it won't.
                //I'm just trying to delete the old array, and have the new array hold
                // all the objects plus the new object.
                delete [] myObjects;
                myObjects = tempObjects;
            }
        }
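    A hedged sketch of a direction that sidesteps most of these questions (a corrected alternative, not the asker's code): objects stored by value in a std::vector are copied into the vector's own storage, the old storage is freed automatically whenever the vector grows, and no per-element delete is ever needed because nothing is allocated with new. std::string likewise manages its own buffer, so each object has a fixed footprint plus internally managed heap data.

        #include <string>
        #include <vector>

        class myClass {
        public:
            myClass(const std::string& name, unsigned short amount)
                : myName(name), myAmount(amount) {}
            unsigned short myAmount;
        private:
            std::string myName;   // the string manages its own buffer
        };

        class myContainer {
        public:
            void addObject(const std::string& name, unsigned short amount) {
                // The object is copied into the vector's storage; the vector
                // reallocates and releases its old block as it grows, so there
                // is nothing to delete by hand and no destructor to write.
                objects.push_back(myClass(name, amount));
            }
        private:
            std::vector<myClass> objects;
        };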

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (that never changes). Now I want to compute the mean and covariance matrix of every group. On the CPU it's roughly the same as:

        for each point p {
            mean[p.group] += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }

        for each group g {
            mean[g] /= count[g];
            covariance[g] = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to first sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by groups and I could possibly come up with an adaptation of the segmented scan algorithms presented here.

    But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm that minimizes shared memory/register usage.

    I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't really differ across the major frameworks.

    Read the article

  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, though it looks like not everyone is having it in the latest version. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit, the company doesn't allow it), and I can't do things like creating another solution (please don't suggest that :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even take 2 GB of RAM, so I don't know if the error really happens because it's lacking memory or if it's some bug in the program. Any ideas, things to try, something?

    Read the article

  • Internet Explorer and Google Chrome injection

    - by Volim Te
    I wrote code that injects a function into other processes, but it doesn't work with Internet Explorer or Chrome. Basically, it fills one big structure with all the APIs my function needs, plus strings and other data, then it opens the target process to get a handle, calls VirtualAllocEx to allocate enough memory to store the function and the structure there, and writes both into the allocated memory. It then calls CreateRemoteThread with the function as the start address and the structure as the parameter. It all works great with calc/notepad/winamp processes, but I have problems with browser injection. I'm wondering what the cause could be. I'm using these APIs:

        x.xCreateFile
        x.xWriteFile
        x.xCloseHandle
        x.xSleep
        x.xVirtualAlloc
        x.xVirtualFree
        x.xMessageBox
        x.xLoadLibrary
        x.xShellExecute

    Is it because browsers are now protected and run with the lowest privileges?

    Read the article
