Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 425 of 555

  • Change connection on the fly

    - by aron
    Hello, I have a SQL Server instance with 50 databases, each with exactly the same schema. I used the awesome SubSonic 2.2 to create the DAL based on one of them. I need to loop through a list of database names, connect to each one, and perform an update, one database at a time. Is there a way to alter how SubSonic uses the connection string? I believe I would need to store the connection string in memory so that it can keep changing. Is this possible? I tried doing ConfigurationManager.ConnectionStrings["theConnStrName"].ConnectionString = updated-connection-string-here; but that did not work. Thanks!

    Read the article

  • What is a .NET managed module?

    - by Abhijeet Patel
    I know it's a Windows PE32 file, but I also know that the unit of deployment in .NET is an assembly, which in turn has a manifest and can be made up of multiple managed modules. My questions are:
    1) How would you create multiple managed modules when building a project such as a class library or a console app?
    2) Is there a way to tell the compiler (via the project properties, for example) to partition your source code files into multiple managed modules? If so, what is the benefit of doing so?
    3) Can managed modules span assemblies?
    4) Are separate files created on disk when the source code is compiled, or are they created in memory and directly embedded in an assembly?

    Read the article

  • Clear an HTML web page?

    - by noname
    I am currently developing a crawler that crawls all the links on the web and displays them in the web browser (and saves them, of course). But after some hours there will be a huge list displayed in the browser, and I want to display only, say, 1000 links at a time; then I clear the HTML and display the next 1000 links. This is also better for RAM, which it would otherwise eat up completely. How do I clear the web browser screen? EDIT: I have seen some scripts using flush-buffer functions. Does this have anything to do with my case?
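
    For illustration, a minimal sketch in Python of the batching idea (names are hypothetical, and the asker's stack may well be PHP or JavaScript): write every batch of 1000 links to its own static page, so the browser only ever renders one batch and memory stays bounded.

        # chunk_links.py -- write crawled links out as numbered pages of 1000 each.
        # Sketch only: 'links' stands in for the crawler's output stream.
        def write_pages(links, batch_size=1000):
            page, batch = 0, []
            for url in links:
                batch.append(url)
                if len(batch) == batch_size:
                    flush(page, batch)          # finish this page...
                    page, batch = page + 1, []  # ...and start a fresh one
            if batch:
                flush(page, batch)              # last, partially filled page

        def flush(page, batch):
            with open("links_%04d.html" % page, "w") as f:
                f.write("<html><body>\n")
                for url in batch:
                    f.write('<a href="%s">%s</a><br>\n' % (url, url))
                f.write("</body></html>\n")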

    Read the article

  • Prevent GridView from loading all views at once

    - by efpies
    I have about 700 items to display in the grid view. On a Samsung Galaxy Tab 10.1 this is not a problem: it has enough memory. On an HTC Explorer the heap overflows. So I want to load data dynamically based on the current scroll position (N rows per screen, plus 5 rows as a tail), and I want to show a scrollbar that represents the position within the total rows. But I don't want to draw items that I can't see. In other words, I want to create something similar to UITableView on iOS. How can I do this?

    Read the article

  • Pinterest-style grid view implementation on iOS

    - by Amal
    I want to implement a grid view like the one in Pinterest. I thought about implementing it as 3 table views, but I was not able to scroll them together well: when I implemented scrollViewDidScroll and set the contentOffset on the table views other than the one being scrolled, the scrolling became slow and unusable. Another implementation I tried was having a set of images to load and calling my viewDraw function from scrollViewDidScroll. The viewDraw function just draws the necessary images and removes from memory the already-drawn images that won't be visible. This too makes the scrolling slow, and another issue with it is that there are white (background-color) patches before the images are drawn. What is the best way to implement this grid view?

    Read the article

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    I posted this on Super User, but I was hoping the pros here at SO might have a good idea about how to fix this as well. Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable. This is the error I see whenever I call my external tool:

        ...\gnu\make.exe): * couldn't commit memory for cygwin heap, Win32 error 487

    The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?

    Read the article

  • Why should I reuse XmlHttpRequest objects?

    - by Xavi
    From what I understand, it's a best practice to reuse XmlHttpRequest objects whenever possible. Unfortunately, I'm having a hard time understanding why. It seems like trying to reuse XHR objects can increase code complexity, introduce possible browser incompatibilities, and lead to other subtle bugs. After researching this question for a while, I did come up with a list of possible explanations:
    - Fewer objects created means less garbage collecting
    - Reusing XHR objects reduces the chance of memory leaks
    - The overhead of creating a new XHR object is high
    - The browser is able to perform some sort of network optimization under the hood
    But I'm not sure whether any of these reasons is actually valid. Any light you can shed on this question would be much appreciated.

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) a poor fit. I'm wondering whether there is a product out there that matches the following characteristics:
    - all data put into it is persistent
    - all reads are delivered out of memory
    - data is universally available
    - data lives where it is most needed
    - data is versioned (nice to have)
    - updates are transactional (I'd like ACID characteristics)
    - data is potentially replicated, but always in sync
    - works on Windows
    - is based on or has bindings for .NET
    - is really fast
    - is really robust
    - is redundant
    - is scalable
    I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but the latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server, also running Windows Server 2003, that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes.

    Observations:
    - When the server crashes, it does not blue screen; it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen.
    - There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown event the next morning, when I have to hard reset the server to get it to boot.
    - 90% of the time the server does not boot cleanly; it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs, all I can do is hard reset it and try again.
    - Even after a successful boot and a chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't come back up cleanly.

    The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up, because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up, and switched all my users to the temporary server. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server.

    I tested the reloaded file server extensively. I rebooted the server once an hour, every day, for 3 weeks, trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure-to-boot-cleanly behavior.

    This weekend I decided to monitor the file server through the entire backup job. I RDPed into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60 GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was still getting real-time updates about CPU and memory usage - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: the screen was refreshing and I was getting real-time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up.
    In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back to my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD, and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again, I will be back at square one.

    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem.

    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.

    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

    Read the article

  • Maven2 problem accessing the plugins repository

    - by achraf
    Hi, I just installed Maven2, configured the settings.xml in ${user.home}/.m2, and fixed a proxy error. Now, executing the command:

        mvn -U archetype:create -DgroupId=maven-test -DartifactId=maven-test -DpackageName=net.ensode.maventest

    I get this error:

        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'archetype'.
        [INFO] org.apache.maven.plugins: checking for updates from central
        [WARNING] repository metadata for: 'org.apache.maven.plugins' could not be retrieved from repository: central due to an error: Authorization failed: Access denied to: http://repo2.maven.org/maven2/org/apache/maven/plugins/maven-metadata.xml
        [INFO] Repository 'central' will be blacklisted
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] The plugin 'org.apache.maven.plugins:maven-archetype-plugin' does not exist or no valid version could be found
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: < 1 second
        [INFO] Finished at: Thu Apr 29 16:05:40 CEST 2010
        [INFO] Final Memory: 2M/247M
        [INFO] ------------------------------------------------------------------------

    Any idea of what could cause it? Thanks!

    Read the article

  • Program seems to be stuck?

    - by Frank
    I have a Java program. I made a change, and the strange thing is that no matter what I do now, it outputs the same result as before my change. I looked into my system (Windows Vista) and clicked on "Generate a system health report"; the result says:

        Symptom: A service is reported as having an unexpected error code
        Cause: One or more services has failed. The service did not stop gracefully, suggesting the service may have crashed or one of its components stopped in an unsupported way.
        Details: Service exited with code not equal to 0 or 1077
        Resolution: Restart the service

    It doesn't say which service, and I defragmented my drives a few times; it still gave me the same report. I also checked my memory; it says there is no problem with the RAM. Everything else is OK. I restarted Windows a few times and freed more space on my C: drive, yet the Java program is still not responding to my new changes. If I intentionally make a mistake, it gives me a compile error, but if it compiles successfully, it still gives me the results from before the change. What can I do to fix this? Frank

    Read the article

  • Paging in ActiveRecord

    - by Alex
    Does Castle Project's ActiveRecord support paging? I need to load only the data that is currently seen on the screen. If I use [HasMany], the collection will be loaded as a whole, either immediately or at the first call (if the lazy attribute is true). However, I only need something like the first 100 records, then maybe the next 100. My other question is how to load only those 100 items: if the collection is too big, memory can reach its limit as we constantly load more and more items.

    Read the article

  • Good reasons not to use XIB files?

    - by mystify
    Are there any good reasons why I should not use XIB/NIB files for a highly customized UI with extensive animations and a super low memory footprint? As a beginner I started with XIBs. Then I figured out I couldn't do just about everything in them; it started to get really hard to customize things the way I wanted them to be. So in the end, I threw all my XIBs away and did it all programmatically. So when someone asks me if XIB is good, I generally say: yeah, if you want to make crappy, boring interfaces and don't care too much about performance, go ahead. But what else could be a reason not to use XIBs? Am I the only iPhone developer who prefers doing everything programmatically for these reasons?

    Read the article

  • Application crashes on the iPad but works fine in the iPad Simulator

    - by srikanth rongali
    Hi, I am writing a game in cocos2d. In the iPad Simulator the application runs fine, but it crashes when running on the iPad itself, printing the following message in the terminal. I am using 2048x2048 CCSpriteSheets in my code. Using the Instruments tool, I see a sudden increase in memory to 32 MB before the crash. It is crashing at CCSpriteFrameCache.

        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-6258-64
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        continue
        The program is not being run.
        The program is not being run.

    Thank you.

    Read the article

  • CPU/GPU monitoring (temperature, current speed, etc.) in C#

    - by Tommy
    I'm looking for a way to monitor system statistics. Here are my main points of interest:
    - CPU temperature
    - CPU speed (cycles per second)
    - CPU load (idle percent)
    - GPU temperature
    Some other points of interest:
    - Memory usage
    - Network load (traffic up/down)
    My ultimate goal is to write an application that can easily run in the background and allows setting events for certain actions, for instance: when the processor temperature gets to 56C -> do _Blank_, etc. (see the sketch below). So this leaves me two main questions:
    1. Is there a framework already out there for this sort of thing?
    2. If not, how can I go about doing this?
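
    Not C#, but as a sketch of the shape such a monitoring loop takes, here is the same idea in Python with the psutil library (the 56C threshold callback and function names are hypothetical, and temperature sensors are only exposed by psutil on some platforms):

        # monitor.py -- poll system stats and fire an action past a threshold.
        # Requires psutil (pip install psutil).
        import time
        import psutil

        def on_cpu_hot():                   # the "do _Blank_" action (hypothetical)
            print("CPU temperature threshold exceeded!")

        def cpu_temperature():
            # sensors_temperatures() exists only on some platforms (e.g. Linux)
            temps = getattr(psutil, "sensors_temperatures", lambda: {})()
            for entries in temps.values():
                for entry in entries:
                    return entry.current    # first sensor found, if any
            return None

        while True:
            load = psutil.cpu_percent(interval=1)  # CPU load over 1 s, percent
            mem = psutil.virtual_memory().percent  # RAM in use, percent
            net = psutil.net_io_counters()         # cumulative traffic counters
            print("cpu=%s%% mem=%s%% up=%s down=%s"
                  % (load, mem, net.bytes_sent, net.bytes_recv))
            temp = cpu_temperature()
            if temp is not None and temp > 56:
                on_cpu_hot()
            time.sleep(5)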

    Read the article

  • Why is autorelease especially dangerous/expensive for iPhone applications?

    - by e.James
    I'm looking for a primary source (or a really good explanation) to back up the claim that the use of autorelease is dangerous or overly expensive when writing software for the iPhone. Several developers make this claim, and I have even heard that Apple does not recommend it, but I have not been able to turn up any concrete sources to back it up.
    SO references:
    - autorelease-iphone
    - Why does this create a memory leak (iPhone)?
    Note: I can see, from a conceptual point of view, that autorelease is slightly more expensive than a simple call to release, but I don't think that small penalty is enough to make Apple recommend against it. What's the real story?

    Read the article

  • vector<vector<largeObject>> vs. vector<vector<largeObject>*> in C++

    - by Leif Andersen
    Obviously it will vary depending on the compiler you use, but I'm curious as to the performance issues when using vector<vector<largeObject>> vs. vector<vector<largeObject>*>, especially in C++. Specifically: let's say that the outer vector is full, and you want to start inserting elements into the first inner vector. How will that be stored in memory if the outer vector is just storing pointers, as opposed to storing the whole inner vectors? Will the whole outer vector have to be moved to gain more space, or will the inner vector be moved (assuming that space wasn't pre-allocated), causing problems with the outer vector? Thank you

    Read the article

  • Browser back button broken between hidden divs

    - by Linda
    First of all, these pages will never be on the web; they will be in internal memory. They are a group of linked documents - an ebook. http://www.anmldr.com/testdivs When I click on the link in the first div, the second div becomes visible and the first div is hidden. The problem is with the browser's back button: if you then click the back button, the URL updates but the first div does not show again. How can I fix the back button so that the first div shows? The link from the second div to the first div works fine, but it is the browser back button that I do not know how to work with. Thanks, Linda. P.S. These pages use CSS3, so it is better to view them in a WebKit-based browser.

    Read the article

  • Low-cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time-series data: 500M entries now, with 200K new records every day. The total size is around 15 GB for now. My clients query the table mostly via a PHP script, and the size of the result set is around 10K records (not very large):

        select * from T where timestamp > X and timestamp < Y and additionFilters

    And I want this operation to be cheap (see the sketch below). Currently my table is hosted in Postgres 7, on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves:
    1. Query: 90%
    2. Insert: 9.9%
    3. Update: 0.1% <-- very rare
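
    As an aside, the usual way to keep such range scans cheap is a B-tree index on the timestamp column plus fixed-size batches. A sketch in Python with psycopg2 (the table and column names are taken from the question; the connection string and batch size are hypothetical, and keyset paging like this assumes timestamps are strictly increasing):

        import psycopg2

        conn = psycopg2.connect("dbname=timeseries")  # hypothetical DSN

        def fetch_range(ts_from, ts_to, batch=10000):
            """Stream rows in (ts_from, ts_to) in fixed-size batches."""
            last = ts_from
            with conn.cursor() as cur:
                while True:
                    cur.execute(
                        "SELECT * FROM T WHERE timestamp > %s AND timestamp < %s "
                        "ORDER BY timestamp LIMIT %s",
                        (last, ts_to, batch))
                    rows = cur.fetchall()
                    if not rows:
                        break
                    for row in rows:
                        yield row
                    last = rows[-1][0]  # assumes timestamp is the first column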

    Read the article

  • [Python] How can I speed up unpickling large objects if I have plenty of RAM?

    - by conradlee
    It's taking me up to an hour to read a 1-gigabyte NetworkX graph data structure using cPickle (it's 1 GB when stored on disk as a binary pickle file). Note that the file quickly loads into memory. In other words, if I run:

        import cPickle as pickle
        f = open("bigNetworkXGraph.pickle", "rb")
        binary_data = f.read()                 # this part doesn't take long
        graph = pickle.loads(binary_data)      # this takes ages

    how can I speed this last operation up? Note that I have tried pickling the data using both binary protocols (1 and 2), and it doesn't seem to make much difference which protocol I use. Also note that although I am using the "loads" (meaning "load string") function above, it is loading binary data, not ASCII data. I have 128 GB of RAM on the system I'm using, so I'm hoping that somebody will tell me how to increase some read buffer buried in the pickle implementation.
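
    As a sketch, two changes commonly suggested for exactly this situation: let cPickle read straight from the file object instead of an intermediate 1 GB string, and disable the cyclic garbage collector for the duration of the load, since GC bookkeeping is a known cost when millions of objects are created at once:

        import cPickle as pickle
        import gc

        f = open("bigNetworkXGraph.pickle", "rb")
        gc.disable()                   # GC bookkeeping dominates when unpickling
        try:                           # millions of small objects
            graph = pickle.load(f)     # read from the file directly; avoids the
        finally:                       # intermediate 1 GB string entirely
            gc.enable()
            f.close()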

    Read the article

  • #define vs enum in an embedded environment (How do they compile?)

    - by Alexander Kondratskiy
    This question has been done to death, and I would agree that enums are the way to go. However, I am curious as to how enums compile in the final code: #defines are just string replacements, but do enums add anything to the compiled binary, or are the two equivalent at that stage? When writing firmware, where memory is very limited, is there any advantage, no matter how small, to using #defines? Thanks! EDIT: As requested by the comment below, by embedded I mean a digital camera. Thanks for the answers! I am all for enums!

    Read the article

  • Path vs GeometryDrawing

    - by Carlo
    Just wondering which is lighter: I'm going to have a control that draws 280 * 4 of my SegmentControls, each a quarter of a circle, and I'm wondering which way takes the least memory to draw said segment. GeometryDrawing:

        <Image>
          <Image.Source>
            <DrawingImage>
              <DrawingImage.Drawing>
                <GeometryDrawing Brush="LightBlue"
                                 Geometry="M24.612317,0.14044853 C24.612317,0.14044853 33.499971,-0.60608719 41,7.0179795 48.37642,14.516393 47.877537,23.404541 47.877537,23.404541 L24.60978,23.401991 z" />
              </DrawingImage.Drawing>
            </DrawingImage>
          </Image.Source>
        </Image>

    Or Path:

        <Path Fill="LightBlue" Stretch="Fill" Stroke="#FF0DA17D"
              Data="M24.612317,0.14044853 C24.612317,0.14044853 33.499971,-0.60608719 41,7.0179795 48.37642,14.516393 47.877537,23.404541 47.877537,23.404541 L24.60978,23.401991 z" />

    Or if you know of an even better way, it'll be much appreciated. Thanks!

    Read the article

  • Clone LINQ to SQL object: extension method throws ObjectDisposedException

    - by gtas
    Hello all, I have this extension method for cloning my LINQ to SQL objects:

        public static T CloneObjectGraph<T>(this T obj) where T : class
        {
            // Serialize the object graph to a MemoryStream and read it
            // back, producing a deep copy.
            var serializer = new DataContractSerializer(typeof(T), null, int.MaxValue, false, true, null);
            using (var ms = new System.IO.MemoryStream())
            {
                serializer.WriteObject(ms, obj);
                ms.Position = 0;
                return (T)serializer.ReadObject(ms);
            }
        }

    But when I carry objects that do not have all references loaded (querying with DataLoadOptions), it sometimes throws an ObjectDisposedException, even though I don't ask for references that are not loaded (null). E.g., I have a Customer with many references and I just need to carry the Address reference (EntityRef<T>) in memory, and I don't load anything else. But when I clone the object, this exception forces me to load all the EntitySet<T> references with the Customer object, which might be too much and slows the application down. Any suggestions?

    Read the article

  • Compressing an array of bytes

    - by Pedro Magalhaes
    Hi, my problem is: I want to store an array of bytes in a compressed file and then read it back with good performance. So I create an array of bytes, pass it to a ZLIB algorithm, and store the result in the file. To my surprise the algorithm doesn't compress well, probably because the array is a random sample. Using this approach, reading would be very easy: just copy the stream to memory, decompress it, and copy it into an array of bytes. But I need to compress the file. Do I have to use an algorithm, like RLE, to compress the byte array? I think I could store the byte array like a string and then compress it, but I think I would get poor performance when reading the data back. Sorry for my poor English. Thanks.
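
    What the asker is seeing is expected behavior: truly random bytes have maximal entropy, and no general-purpose compressor (ZLIB, RLE, or any other) can shrink them; only structured or repetitive data compresses. A quick Python demonstration:

        import os
        import zlib

        random_bytes = os.urandom(1000000)   # high entropy: incompressible
        repetitive = b"0123" * 250000        # same size, very low entropy

        print(len(zlib.compress(random_bytes)))  # ~1000000 bytes, or slightly more
        print(len(zlib.compress(repetitive)))    # a few kilobytes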

    Read the article

  • Scala REPL is slow on Vista

    - by Jacques René Mesrine
    I installed scala-2.8.0.RC3 by extracting the tgz file into my Cygwin (Vista) home directory. I made sure to add scala-2.8.0.RC3/bin to $PATH. I start the REPL by typing:

        $ scala
        Welcome to Scala version 2.8.0.RC3 (Java HotSpot(TM) Client VM, Java 1.6.0_20).
        Type in expressions to have them evaluated.
        Type :help for more information.

        scala>

    Now when I try to enter an expression:

        scala> 1 + 'a'

    the cursor hangs there without any response. Granted, I have Chrome open with a million tabs and VLC playing in the background, but CPU utilization was 12% and virtual memory was about 75% utilized. What's going on? Do I have to set the CLASSPATH or perform other steps?

    Read the article
