Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSUSE. Each one is 2 TB. When I do this, the system runs slow.

    My guess: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fills all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze and hiccup. Because the cache is gone and these bloated applications need lots of cache, the system becomes horribly slow.

    Since the copy is over USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than the motherboard being too slow, because if I stop all copying, the system still runs choppy for a while until it re-caches everything; if I then restart the copying, it takes a minute before it is choppy again. Also, I can limit the copy to around 40 MB/s and everything runs faster again (not because the right things are cached, but because the motherboard buses have lots of spare bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (100% used means 0% wasted, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;)

    To clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed; my problem is that it is not reserved for caching specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not of rereading the same commonly used files? For example, is there a setting for the maximum memory per process/user/file system allowed to be used as cache/buffers?
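
    For what it's worth, stock Linux offers no per-process or per-user page-cache quota, but a copy tool can opt its own traffic out of the cache with posix_fadvise(POSIX_FADV_DONTNEED). A minimal sketch of the idea in Python (the chunk size and the fsync-per-chunk policy are arbitrary choices here):

        import os

        CHUNK = 16 * 1024 * 1024   # 16 MB per round trip

        def copy_without_caching(src, dst):
            # Copy src to dst, telling the kernel after each chunk that the
            # pages just read and written will not be needed again, so they
            # are dropped instead of evicting the hot cache.
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                offset = 0
                while True:
                    buf = fin.read(CHUNK)
                    if not buf:
                        break
                    fout.write(buf)
                    fout.flush()
                    os.fsync(fout.fileno())   # DONTNEED only drops clean pages
                    os.posix_fadvise(fin.fileno(), offset, len(buf),
                                     os.POSIX_FADV_DONTNEED)
                    os.posix_fadvise(fout.fileno(), offset, len(buf),
                                     os.POSIX_FADV_DONTNEED)
                    offset += len(buf)

    Utilities such as nocache and dd's oflag=nocache wrap the same advice around existing tools.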

    Read the article

  • ASP.NET AJAX Doesn't Work

    - by Petras
    I have a very simple AJAX example that doesn't work. It is from the Microsoft tutorials on AJAX. When I click on button "Button1", AJAX should execute, but instead the whole page submits. Here is the code:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="1111.aspx.cs" Inherits="_1111" %>
        <%@ Register Assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
            Namespace="System.Web.UI" TagPrefix="asp" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <p> DropDownList AutoPostBack SelectedIndexChanged EventArgs Sort ... Since you will be using AJAX to process your SelectedIndexChanged event, set the AutoPostBack property of the DropDownList to false. ...</p>
            <div>
                <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true">
                </asp:ScriptManager>
                <asp:Label ID="label2" runat="server"></asp:Label><br />
                <asp:Label ID="label3" runat="server"></asp:Label><br />
                <center>
                    <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                        <ContentTemplate>
                            <asp:Label ID="label1" runat="server"></asp:Label>
                            <asp:Button ID="Button1" runat="server" OnClick="Button1_Click" Text="Button 1" />
                        </ContentTemplate>
                        <Triggers>
                            <asp:AsyncPostBackTrigger ControlID="Button1" EventName="Click" />
                        </Triggers>
                    </asp:UpdatePanel>
                </center>
            </div>
            </form>
        </body>
        </html>

    And the code-behind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class _1111 : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                label1.Text = System.DateTime.Now.ToString();
                label2.Text = System.DateTime.Now.ToString();
                label3.Text = System.DateTime.Now.ToString();
            }

            protected void Button1_Click(object sender, EventArgs e)
            {
                label1.Text = System.DateTime.Now.ToString();
            }
        }

    Read the article

  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I try to adopt best practices from here and there to build my own working practices while observing the conventions, etc. I'm currently working on a project whose goal is to migrate the security access model from one environment's Active Directory to another environment's automatically.

    I don't know about any of you, but as far as I'm concerned, I have real difficulty sticking to only one approach and then developing. I learn something new every day while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I know the Façade pattern better, which proved very practical in transactional programming in process systems. It seems less practical for a desktop application, since there are plenty of variables to consider in a desktop application that you don't have to care about in transactional programming, where you're playing only with information data.

    As for my current project, I have:

      - Groups;
      - Organizational Units;
      - Users.

    All of these are considered an entry in the Active Directory, which makes them a good candidate for generics, as approached by Bart De Smet's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage, let's say, groups, he instantiates a source with the proper type:

        var groups = new DirectorySource<Group>();

    This seems to be a very good way of doing it. Despite that, I seem to go from one pattern to another and can't strictly stick to one. While I'm aware that one must not stay with only one way of doing things, since each pattern offers certain advantages while also showing disadvantages under some usage conditions, I find myself wanting to develop with both patterns: a singleton Façade class with underlying factories that represent the subsystems:

      - GroupsFactory;
      - UsersFactory;
      - OrganizationalUnitsFactory.

    Each of the factories offers the possible operations for its respective entity (group, user, OU).

    To make a very long story short, I often have plenty of ideas while developing, and this causes me some trouble: I go from one idea to another and feel completely lost after a while. I understand the advantages and disadvantages, and I have no trouble choosing one pattern over another depending on the situation. Nevertheless, when it comes to the programming itself, if I'm not part of a team, I sometimes feel like I can't do anything good, because I can't stand not doing something "perfect" the first time. Within the project I play both roles: project manager and programmer. I am more comfortable in the project manager, architectural and analytical roles than in the developer's.

    Do any of you have good habits to observe in project development? Thanks to you all! =)

    Read the article

  • Avoiding stack overflows in wrapper DLLs

    - by peachykeen
    I have a program to which I'm adding fullscreen post-processing effects. I do not have the source for the program (it's proprietary, although a developer did send me a copy of the debug symbols, in .map format). I have the code for the effects written and working, no problems. My issue now is linking the two. I've tried two methods so far:

    1. Use Detours to modify the original program's import table. This works great and is guaranteed to be stable, but the users I've talked to aren't comfortable with it, it requires installation (beyond extracting an archive), and there's some question whether patching the program with Detours is valid under the terms of the EULA. So that option is out.

    2. The other option is traditional DLL replacement. I've wrapped OpenGL (opengl32.dll), and I need the program to load my DLL instead of the system copy (just drop it in the program folder with the right name; that's easy). I then need my DLL to load the Cg framework and runtime (which relies on OpenGL) and a few other things. When Cg loads, it calls some of my functions, which call Cg functions, and I tend to get stack overflows and infinite loops.

    I need to be able to either include the Cg DLLs in a subdirectory and still use their functions (I'm not sure it's possible to have my DLL's import table point to a DLL in a subdirectory), or I need to dynamically link them (which I'd rather not do, just to simplify the build process) - something to force them to refer to the system's file, not my custom replacement. The entire chain is:

      - The program loads DLL A (named opengl32.dll).
      - DLL A loads Cg.dll and dynamically links (GetProcAddress) to sysdir/opengl32.dll.
      - I now need Cg.dll to also refer to sysdir/opengl32.dll, not DLL A.

    How would this be done?

    Edit: How would this be done easily without using GetProcAddress? If nothing else works, I'm willing to fall back to that, but I'd rather not if at all possible.

    Edit 2: I just stumbled across the function SetDllDirectory in the MSDN docs (on a totally unrelated search). At first glance, it looks like what I need. Is that right, or am I misjudging? (Off to test it now.)

    Edit 3: I've solved this problem by doing things a bit differently. Instead of dropping in an OpenGL32.dll, I've renamed my DLL to DInput.dll. Not only does it have the advantage of exporting one function instead of well over 120 (for the program, Cg, and GLEW), I don't have to worry about calls looping back into my DLL (I can link to OpenGL as usual). To get into the calls I need to intercept, I'm using Detours. All in all, it works much better. This question, though, is still an interesting problem (and hopefully will be useful for anyone else trying to do crazy things in the future). Both the answers are good, so I'm not sure yet which to pick...

    Read the article

  • How best to store Subversion version information in EARs?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java archive (ear, jar, rar, war) files, I would like to be able to look in/at the archive and switch to the same branch, version or tag that was the source of the released archive.

    How can I best adjust the ant build process so that the version information in the svn checkout remains in the created build? I was thinking along the lines of:

      - adding a VERSION file, but with what content?
      - storing information in the META-INF file, but under what property and with which content?
      - copying sources into the result archive
      - adding svn:properties to all sources with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree, as opposed to svn info, which just looks at the current file/directory. For this I defined the SVN task in the ant file to make it more portable.

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
            <classpath>
                <pathelement location="${dir.lib}/ant/svnant.jar"/>
                <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
                <pathelement location="${dir.lib}/ant/svnkit.jar"/>
                <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
            </classpath>
        </taskdef>

    Not all builds result in web services. The ear file must keep the same name before deployment because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file.

        <target name="version">
            <svn><wcVersion path="${dir.source}"/></svn>
            <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:

      - svnversion: http://svnbook.red-bean.com/en/1.1/re57.html
      - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
      - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
      - svn client: http://svnkit.com/

    Read the article

  • String manipulation in Linux kernel module

    - by user577066
    I am having a hard time manipulating strings while writing a module for Linux. My problem: I have an int Array[10] with different values in it. I need to produce a string to send to the buffer in my my_read procedure. If my array is {0,1,112,20,4,0,0,0,0,0}, then my output should be:

        0:(0)
        1:-(1)
        2:-------------------------------------------------------------------------------------------------------(112)
        3:--------------------(20)
        4:----(4)
        5:(0)
        6:(0)
        7:(0)
        8:(0)
        9:(0)

    When I try to place the above strings in char[] arrays, somehow weird characters end up there. Here is the code:

        int my_read(char *page, char **start, off_t off, int count, int *eof, void *data)
        {
            int len;
            if (off > 0) {
                *eof = 1;
                return 0;
            }

            /* get process tree */
            int task_dep = 0; /* depth of a task from INIT */
            get_task_tree(&init_task, task_dep);

            char tmp[1024];
            char A[ProcPerDepth[0]], B[ProcPerDepth[1]], C[ProcPerDepth[2]],
                 D[ProcPerDepth[3]], E[ProcPerDepth[4]], F[ProcPerDepth[5]],
                 G[ProcPerDepth[6]], H[ProcPerDepth[7]], I[ProcPerDepth[8]],
                 J[ProcPerDepth[9]];

            int i = 0;
            for (i = 0; i < 1024; i++) {
                tmp[i] = '\0';
            }
            memset(A, '\0', sizeof(A)); memset(B, '\0', sizeof(B)); memset(C, '\0', sizeof(C));
            memset(D, '\0', sizeof(D)); memset(E, '\0', sizeof(E)); memset(F, '\0', sizeof(F));
            memset(G, '\0', sizeof(G)); memset(H, '\0', sizeof(H)); memset(I, '\0', sizeof(I));
            memset(J, '\0', sizeof(J));

            printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%s\n",
                   A, B, C, D, E, F, G, H, I, J);

            memset(A, '-', sizeof(A)); memset(B, '-', sizeof(B)); memset(C, '-', sizeof(C));
            memset(D, '-', sizeof(D)); memset(E, '-', sizeof(E)); memset(F, '-', sizeof(F));
            memset(G, '-', sizeof(G)); memset(H, '-', sizeof(H)); memset(I, '-', sizeof(I));
            memset(J, '-', sizeof(J));

            printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%\n",
                   A, B, C, D, E, F, G, H, I, J);

            len = sprintf(page,
                          "0:%s(%d)\n1:%s(%d)\n2:%s(%d)\n3:%s(%d)\n4:%s(%d)\n"
                          "5:%s(%d)\n6:%s(%d)\n7:%s(%d)\n8:%s(%d)\n9:%s(%d)\n",
                          A, ProcPerDepth[0], B, ProcPerDepth[1], C, ProcPerDepth[2],
                          D, ProcPerDepth[3], E, ProcPerDepth[4], F, ProcPerDepth[5],
                          G, ProcPerDepth[6], H, ProcPerDepth[7], I, ProcPerDepth[8],
                          J, ProcPerDepth[9]);
            return len;
        }
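
    A note for future readers: each array is declared with exactly ProcPerDepth[i] elements, so after memset(X, '-', sizeof(X)) there is no room left for a terminating NUL, and printing with %s then reads past the end of the array, which would explain the weird characters (the buffers would need ProcPerDepth[i] + 1 bytes). For reference, the intended output is easy to state in a quick user-space sketch (Python):

        # counts taken from the example array in the question
        counts = [0, 1, 112, 20, 4, 0, 0, 0, 0, 0]
        for i, n in enumerate(counts):
            print("%d:%s(%d)" % (i, "-" * n, n))   # n dashes, then the count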

    Read the article

  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question.

    Code description: I have a header file "inline.h" and two translation units, "main.c" and "tran.cpp". The details of the code are as follows.

    inline.h:

        #ifndef __HEADER__
        #include <stdio.h>

        extern inline int func1(void) { return 5; }
        static inline int func2(void) { return 6; }
        inline int func3(void) { return 7; }

        #endif

    main.c:

        #include <stdio.h>
        #include <inline.h>

        int main(int argc, char *argv[])
        {
            printf("%d\n", func1());
            printf("%d\n", func2());
            printf("%d\n", func3());
            return 0;
        }

    tran.cpp (note that the functions are not inline here):

        #include <stdio.h>

        int func1(void) { return 500; }
        int func2(void) { return 600; }
        int func3(void) { return 700; }

    Questions:

    1. The above code does not compile with gcc but does compile with g++ (assuming you make the gcc-appropriate changes, like renaming the files to .c and not using any C++ header files, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference exists between the compilers?

    2. When you run the program (g++-compiled) built from the two separate compilation units (main.o and tran.o, linked into an executable a.out), the output obtained is:

        500
        6
        700

    Why does the compiler pick up the definition of the function that is not inline? Since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, since the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me in detailed steps the process of compilation and linking that leads to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for your valuable input.

    Read the article

  • Send SMS in ASP.NET via a GSM modem

    - by danar jabbar
    I developed an ASP.NET application to send SMS from a GSM modem to a destination, based on a URL from the browser. I used a library from CodeProject: http://www.codeproject.com/articles/20420/how-to-send-and-receive-sms-using-gsm-modem. But I get a problem when I send requests from two browsers at the same time, and I want to make my code detect that the modem is in use by another process at that time. Here is my code:

        DeviceConnection deviceConnection = new DeviceConnection();

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                if (Request.QueryString["destination"] != null && Request.QueryString["text"] != null)
                {
                    deviceConnection.setBaudRate(9600);
                    deviceConnection.setPort(12);
                    deviceConnection.setTimeout(200);
                    SendSms sendSms = new SendSms(deviceConnection);
                    if (deviceConnection.getConnectionStatus())
                    {
                        sendSms.strReciverNo = Request.QueryString["destination"];
                        sendSms.strTextMessage = Request.QueryString["text"];
                        if (sendSms.sendSms())
                        {
                            Response.Write("Mesage successfuly sent to " + Request.QueryString["destination"]);
                        }
                        else
                        {
                            Response.Write("Message was not sent");
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Index " + ex.StackTrace);
            }
        }

    This is the SendSms class:

        class SendSms
        {
            DeviceConnection deviceConnection;
            private string reciverNo;
            private string textMessage;
            private delegate void SetTextCallback(string text);

            public SendSms(DeviceConnection deviceConnection)
            {
                this.deviceConnection = deviceConnection;
            }

            public string strReciverNo
            {
                set { this.reciverNo = value; }
                get { return this.reciverNo; }
            }

            public string strTextMessage
            {
                set { this.textMessage = value; }
                get { return this.textMessage; }
            }

            public bool sendSms()
            {
                try
                {
                    CommSetting.Comm_Port = deviceConnection.getPort();       // GsmCommMain.DefaultPortNumber
                    CommSetting.Comm_BaudRate = deviceConnection.getBaudRate();
                    CommSetting.Comm_TimeOut = deviceConnection.getTimeout(); // GsmCommMain.DefaultTimeout
                    CommSetting.comm = new GsmCommMain(deviceConnection.getPort(),
                        deviceConnection.getBaudRate(), deviceConnection.getTimeout());
                    // CommSetting.comm.PhoneConnected += new EventHandler(comm_PhoneConnected);
                    if (!CommSetting.comm.IsOpen())
                    {
                        CommSetting.comm.Open();
                    }
                    SmsSubmitPdu smsSubmitPdu = new SmsSubmitPdu(strTextMessage, strReciverNo, "");
                    smsSubmitPdu.RequestStatusReport = true;
                    CommSetting.comm.SendMessage(smsSubmitPdu);
                    CommSetting.comm.Close();
                    return true;
                }
                catch (Exception exception)
                {
                    Console.WriteLine("sendSms " + exception.StackTrace);
                    CommSetting.comm.Close();
                    return false;
                }
            }

            public void recive(object sender, EventArgs e)
            {
                Console.WriteLine("Message received successfuly");
            }
        }

    Read the article

  • hibernate3-maven-plugin: entities in different maven projects, hbm2ddl fails

    - by Mike
    I'm trying to put an entity in a different maven project. In the current project I have:

        @Entity
        public class User {
            ...
            private FacebookUser facebookUser;
            ...
            public FacebookUser getFacebookUser() {
                return facebookUser;
            }
            ...
            public void setFacebookUser(FacebookUser facebookUser) {
                this.facebookUser = facebookUser;
            }

    FacebookUser (in a different maven project, which is a dependency of the current project) is defined as:

        @Entity
        public class FacebookUser {
            ...
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            public Long getId() {
                return id;
            }

    Here is my maven hibernate3-maven-plugin configuration:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>hibernate3-maven-plugin</artifactId>
            <version>2.2</version>
            <executions>
                <execution>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>hbm2ddl</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <components>
                    <component>
                        <name>hbm2ddl</name>
                        <implementation>jpaconfiguration</implementation>
                    </component>
                </components>
                <componentProperties>
                    <ejb3>false</ejb3>
                    <persistenceunit>Default</persistenceunit>
                    <outputfilename>schema.ddl</outputfilename>
                    <drop>false</drop>
                    <create>true</create>
                    <export>false</export>
                    <format>true</format>
                </componentProperties>
            </configuration>
        </plugin>

    Here is the error I'm getting:

        org.hibernate.MappingException: Could not determine type for:
        com.xxx.facebook.model.FacebookUser, at table: user, for columns:
        [org.hibernate.mapping.Column(facebook_user)]

    I know that FacebookUser is on the classpath, because the project compiles fine if I make facebookUser transient:

        @Transient
        public FacebookUser getFacebookUser() {
            return facebookUser;
        }

    Read the article

  • How to configure the frame size using AudioUnit.framework on iOS

    - by Piperoman
    I have an audio app in which I need to capture mic samples and encode them into MP3 with ffmpeg. First I configure the audio:

        /**
         * We need to specify the format we want to work with.
         * We use Linear PCM because it is uncompressed and we work on raw data.
         *
         * We want 16 bits, 2 bytes (short) per packet/frame at 8 kHz.
         */
        AudioStreamBasicDescription audioFormat;
        audioFormat.mSampleRate       = SAMPLE_RATE;
        audioFormat.mFormatID         = kAudioFormatLinearPCM;
        audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
        audioFormat.mFramesPerPacket  = 1;
        audioFormat.mChannelsPerFrame = 1;
        audioFormat.mBitsPerChannel   = audioFormat.mChannelsPerFrame * sizeof(SInt16) * 8;
        audioFormat.mBytesPerPacket   = audioFormat.mChannelsPerFrame * sizeof(SInt16);
        audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);

    The recording callback is:

        static OSStatus recordingCallback(void *inRefCon,
                                          AudioUnitRenderActionFlags *ioActionFlags,
                                          const AudioTimeStamp *inTimeStamp,
                                          UInt32 inBusNumber,
                                          UInt32 inNumberFrames,
                                          AudioBufferList *ioData)
        {
            NSLog(@"Log record: %lu", inBusNumber);
            NSLog(@"Log record: %lu", inNumberFrames);
            NSLog(@"Log record: %lu", (UInt32)inTimeStamp);

            // the data gets rendered here
            AudioBuffer buffer;

            // a variable where we check the status
            OSStatus status;

            /** This is the reference to the object who owns the callback. */
            AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

            /** At this point we define the number of channels, which is mono
                for the iPhone. The number of frames is usually 512 or 1024. */
            buffer.mDataByteSize = inNumberFrames * sizeof(SInt16);  // sample size
            buffer.mNumberChannels = 1;                              // one channel
            buffer.mData = malloc(inNumberFrames * sizeof(SInt16));  // buffer size

            // we put our buffer into a bufferlist array for rendering
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0] = buffer;

            // render input and check for error
            status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags,
                                     inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
            [audioProcessor hasError:status:__FILE__:__LINE__];

            // process the bufferlist in the audio processor
            [audioProcessor processBuffer:&bufferList];

            // clean up the buffer
            free(bufferList.mBuffers[0].mData);
            //NSLog(@"RECORD");
            return noErr;
        }

    With data:

        inBusNumber = 1
        inNumberFrames = 1024
        inTimeStamp = 80444304 // always the same inTimeStamp - this is strange

    However, the frame size I need to encode MP3 is 1152. How can I configure it? If I buffer, that implies a delay, but I would like to avoid that because it is a real-time app. If I use this configuration, each buffer gets trailing garbage samples, 1152 - 1024 = 128 bad samples. All samples are SInt16.
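
    The 1024-versus-1152 mismatch is normally bridged with a small carry-over buffer rather than by reconfiguring the unit; the extra latency is bounded by one MP3 frame (1152 samples, about 144 ms at 8 kHz), not by unbounded buffering. A toy sketch of the reframing logic (Python, samples as plain lists):

        class Reframer:
            def __init__(self, frame_size=1152):
                self.frame_size = frame_size
                self.pending = []                  # samples not yet emitted

            def push(self, samples):
                # samples: one capture callback's worth (e.g. 1024 SInt16 values)
                self.pending.extend(samples)
                frames = []
                while len(self.pending) >= self.frame_size:
                    frames.append(self.pending[:self.frame_size])
                    del self.pending[:self.frame_size]
                return frames                      # zero or more full 1152-sample frames

    Feeding 1024-sample buffers yields one complete frame on most calls and none occasionally; no samples are padded or dropped, which also removes the 128 "bad" trailing samples.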

    Read the article

  • C++ exceptions and program execution logic

    - by Andrew
    Hello everyone, I have been through a few questions but did not find an answer. I wonder how exception handling should be implemented in a C++ program so that it is centralized and tracks the software's progress. For example, I want to process exceptions at four stages of the program and know that an exception happened at that specific stage:

      1. Initialization
      2. Script processing
      3. Computation
      4. Wrap-up

    At this point, I tried this:

        int main (...)
        {
            ...
            // start logging system
            try {
                ...
            }
            catch (exception &e) {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not start the logging system. Application terminated!\n";
                return -1;
            }
            catch (...) {
                cerr << "Unknown error occured.\n";
                cerr << "Could not start the logging system. Application terminated!\n";
                return -2;
            }

            // open script file and acquire data
            try {
                ...
            }
            catch (exception &e) {
                cerr << "Error: " << e.what() << endl;
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -1;
            }
            catch (...) {
                cerr << "Unknown error occured.\n";
                cerr << "Could not acquire input parameters. Application terminated!\n";
                return -2;
            }

            // computation
            try {
                ...
            }
            ...

    This is definitely not centralized and seems stupid. Or maybe it is not a good concept at all?
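
    For contrast, a centralized version of the same idea - one handler, with the current stage tracked as data - can be sketched language-neutrally (Python here for brevity; in C++ the same shape works with a vector of std::function and a single try block inside the loop):

        def run_stages(stages):
            # stages: ordered (name, callable) pairs; one handler reports
            # which stage failed instead of one try/catch per stage
            for name, step in stages:
                try:
                    step()
                except Exception as exc:
                    print("Error during %s: %s. Application terminated!" % (name, exc))
                    return 1
            return 0

        def compute():
            raise RuntimeError("boom")   # stand-in for a failing stage

        run_stages([
            ("Initialization",    lambda: None),
            ("Script processing", lambda: None),
            ("Computation",       compute),
            ("Wrap-up",           lambda: None),
        ])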

    Read the article

  • Optimizing jQuery for Tabs

    - by jpdbaugh
    I am in the process of developing a widget. The widget has three tabs, implemented in the following way:

        <div id="widget">
            <ul id="tabs">
                <li><a href="http...">One</a></li>
                <li><a href="http...">Two</a></li>
                <li><a href="http...">Three</a></li>
            </ul>
            <div id="tab_container">
                <div id="tab_content">
                    //Tab Content goes here...
                </div>
            </div>
        </div>

        // The active class is initialized when the document loads
        $("#tabs li a").click(function() {
            $("#tabs li.active").removeClass("active");
            $("#tab_content").load($(this).attr('href'));
            $(this).parent().addClass("active");
            return false;
        });

    The problem I am having is that the jQuery code I have written is very slow. If the user changes tabs quickly, the widget gets behind and bogged down. This causes the tabs to not align with the data being displayed, and general lag. I believe this is because the tab is being changed before $.load() is finished. I have tried to implement the following:

        $("#tabs li a").click(function() {
            $("#tabs li.active").removeClass("active");
            $("#tab_content").load($(this).attr('href'), function() {
                $(this).parent().addClass("active");
            });
            return false;
        });

    It is my understanding that the callback function within the load function does not execute until the load is completed. I think this would solve my problem; however, I cannot come up with a way to select the correct tab that was clicked within the callback function. If this is not the way to do this, then what is the best way to implement these tabs so that they stop loading an old request and load the newest tab selection by the user? Thanks
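
    For what it's worth, the pattern being reached for here - capture the clicked tab in a variable the callback closes over, and remember which request is newest so stale responses are dropped - is language-independent. A minimal Python sketch (fetch and render are hypothetical callables standing in for $.load and the DOM update):

        def make_click_handler(render):
            state = {"latest": None}               # most recently clicked tab

            def on_click(tab, fetch):
                state["latest"] = tab              # newest click wins
                def done(content):
                    if state["latest"] is tab:     # ignore out-of-date responses
                        render(tab, content)
                fetch(tab, done)                   # async; done runs on completion

            return on_click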

    Read the article

  • Error in jQuery attribute selector and IE6-7

    - by Guillermo Vasconcelos
    Hi, I'm trying to implement a jQuery script to process some areas inside an image map. I'm using $('area[shape="poly"]') as a selector to obtain the areas I'm interested in. It works fine in IE8 and Firefox, but it does not select the elements in IE6 or IE7. This is a test page that shows the problem. I don't know if this is a jQuery bug or I am doing something wrong.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js" type="text/javascript"></script>
            <script type="text/javascript">
            //<![CDATA[
                $(document).ready(function() {
                    var areas = $('area[shape="poly"]');
                    alert('areas: ' + areas.length);
                });
            //]]>
            </script>
            <title>Test</title>
        </head>
        <body>
            <img id="img1" src="nothing.gif" style="width:300px; height:300px; border: 2px solid black" usemap="#map1"/>
            <map id="map1">
                <area shape="rect" title="rectArea" coords="126,112,231,217" alt=""/>
                <area shape="poly" title="polyArea1" coords="274,72,262,70,251,68,240,67,228,66,217,67,206,68,194,70,183,72,181,63,192,60,204,58,216,57,228,56,240,57,252,58,264,60,276,63" alt=""/>
                <area shape="poly" title="polyArea2" coords="241,194,235,193,228,193,222,193,216,194,196,119,204,117,212,116,220,115,228,115,237,115,245,116,253,117,261,119" alt=""/>
            </map>
        </body>
        </html>

    This shows 2 in IE8 and Firefox, and 0 in IE6-7. Thanks, Guillermo

    Read the article

  • Calculating a Sample Covariance Matrix for Groups with plyr

    - by John A. Ramey
    I'm going to use the sample code from http://gettinggeneticsdone.blogspot.com/2009/11/split-apply-and-combine-in-r-using-plyr.html for this example. First, let's copy their example data:

        mydata = data.frame(X1 = rnorm(30),
                            X2 = rnorm(30, 5, 2),
                            SNP1 = c(rep("AA", 10), rep("Aa", 10), rep("aa", 10)),
                            SNP2 = c(rep("BB", 10), rep("Bb", 10), rep("bb", 10)))

    I am going to ignore SNP2 in this example and just pretend the values in SNP1 denote group membership. I may want some summary statistics about each group in SNP1: "AA", "Aa", "aa". If I want to calculate the means for each variable, it makes sense (modifying their code slightly) to use:

        > ddply(mydata, c("SNP1"), function(df) data.frame(meanX1=mean(df$X1), meanX2=mean(df$X2)))
          SNP1      meanX1   meanX2
        1   aa  0.05178028 4.812302
        2   Aa  0.30586206 4.820739
        3   AA -0.26862500 4.856006

    But what if I want the sample covariance matrix for each group? Ideally, I would like a 3D array, where I have the covariance matrix for each group and the third dimension denotes the corresponding group. I tried a modified version of the previous code and got the following results, which have convinced me that I'm doing something wrong:

        > daply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        , , = 1

        SNP1           1          2
          aa   1.4961210 -0.9496134
          Aa   0.8833190 -0.1640711
          AA   0.9942357 -0.9955837

        , , = 2

        SNP1           1        2
          aa  -0.9496134 2.881515
          Aa  -0.1640711 2.466105
          AA  -0.9955837 4.938320

    I was thinking the dim() of the 3rd dimension would be 3, but instead it is 2. Really this is a sliced-up version of the covariance matrix for each group. If we manually compute the sample covariance matrix for aa, we get:

                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

    Using plyr, the following gives me what I want in list() form:

        > dlply(mydata, c("SNP1"), function(df) cov(cbind(df$X1, df$X2)))
        $aa
                   [,1]       [,2]
        [1,]  1.4961210 -0.9496134
        [2,] -0.9496134  2.8815146

        $Aa
                   [,1]       [,2]
        [1,]  0.8833190 -0.1640711
        [2,] -0.1640711  2.4661046

        $AA
                   [,1]       [,2]
        [1,]  0.9942357 -0.9955837
        [2,] -0.9955837  4.9383196

        attr(,"split_type")
        [1] "data.frame"
        attr(,"split_labels")
          SNP1
        1   aa
        2   Aa
        3   AA

    But as I said earlier, I would really like this as a 3D array. Any thoughts on where I went wrong with daply(), or suggestions? Of course, I could typecast the list from dlply() into a 3D array, but I'd rather not, because I will be repeating this process many times in a simulation. As a side note, I found one method (http://www.mail-archive.com/[email protected]/msg86328.html) that provides the sample covariance matrix for each group, but the outputted object is bloated. Thanks in advance.
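
    As a cross-language illustration of the target shape (not an answer to the daply() question itself), here is a numpy sketch that stacks one 2x2 covariance matrix per group along the first axis of a 3-D array:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.column_stack([rng.normal(size=30), rng.normal(5, 2, size=30)])
        groups = np.repeat(["AA", "Aa", "aa"], 10)

        labels = ["AA", "Aa", "aa"]
        covs = np.stack([np.cov(x[groups == g].T) for g in labels])

        print(covs.shape)    # (3, 2, 2): group x variable x variable
        print(covs[2])       # the covariance matrix for group "aa"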

    Read the article

  • Garbage Collector not doing its job. Memory Consumption = 1.5 GB & OutOfMemory Exception.

    - by imageWorker
    I'm working with images (each of size 5 MB). The following code extracts some information from each image present in the given directory. I'm getting an out-of-memory exception; the size of the process is around 1.5 GB. I don't know why the garbage collector is not freeing memory. I even tried adding GC.Collect() as the last line of the foreach loop, and I still get 'OutOfMemory'.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading;
        using System.IO;
        using System.Drawing;
        using System.Drawing.Imaging;

        namespace TrainSVM
        {
            class Program
            {
                static void Main(string[] args)
                {
                    FileStream fs = new FileStream("dg.train", FileMode.OpenOrCreate, FileAccess.Write);
                    StreamWriter sw = new StreamWriter(fs);
                    String[] filePathArr = Directory.GetFiles("E:\\images\\");
                    foreach (string filePath in filePathArr)
                    {
                        if (filePath.Contains("lmn"))
                        {
                            sw.Write("1 ");
                            Console.Write("1 ");
                        }
                        else
                        {
                            sw.Write("1 ");
                            Console.Write("1 ");
                        }

                        Bitmap originalBMP = new Bitmap(filePath);

                        /***********************/
                        Bitmap imageBody;
                        ImageBody.ImageBody im = new ImageBody.ImageBody(originalBMP);
                        imageBody = im.GetImageBody(-1);

                        /* white coat */
                        Bitmap whiteCoatBitmap = Rgb2Hsi.Rgb2Hsi.GetHuePlane(imageBody);
                        float WhiteCoatPixelPercentage = Rgb2Hsi.Rgb2Hsi.GetWhiteCoatPixelPercentage(whiteCoatBitmap);
                        //Console.Write("whiteDone\t");
                        sw.Write("1:" + WhiteCoatPixelPercentage + " ");
                        Console.Write("1:" + WhiteCoatPixelPercentage + " ");

                        /******************/
                        Quaternion.Quaternion qtr = new Quaternion.Quaternion(-15);
                        Bitmap yellowCoatBMP = qtr.processImage(imageBody);
                        //yellowCoatBMP.Save("yellowCoat.bmp");
                        float yellowCoatPixelPercentage = qtr.GetYellowCoatPixelPercentage(yellowCoatBMP);
                        //Console.Write("yellowCoatDone\t");
                        sw.Write("2:" + yellowCoatPixelPercentage + " ");
                        Console.Write("2:" + yellowCoatPixelPercentage + " ");

                        /**********************/
                        Bitmap balckPatchBitmap = BlackPatchDetection.BlackPatchDetector.MarkBlackPatches(imageBody);
                        float BlackPatchPixelPercentage = BlackPatchDetection.BlackPatchDetector.BlackPatchPercentage;
                        //Console.Write("balckPatchDone\n");
                        sw.Write("3:" + BlackPatchPixelPercentage + "\n");
                        Console.Write("3:" + BlackPatchPixelPercentage + "\n");

                        balckPatchBitmap.Dispose();
                        yellowCoatBMP.Dispose();
                        whiteCoatBitmap.Dispose();
                        originalBMP.Dispose();
                        sw.Flush();
                    }
                    sw.Dispose();
                    fs.Dispose();
                }
            }
        }

    Read the article

  • Optimising speed in HDF5 using PyTables

    - by Sree Aurovindh
    The problem concerns write speed (on ten 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail.

    I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented 10 computers for a day and am trying to write with all of them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 primary/foreign-key referring tables. I am not using joins, as they would not be scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, one into the table and one into each of its corresponding arrays.

    The queries are really simple:

        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)

    (and similarly, five more such queries)

    Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

      - Total number of records: 143,700,000
      - Total number of records per file: 143,700,000 / 20 = 7,185,000
      - Total number of entries in each file: 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: I have run it for about ten hours, and the performance is as follows: the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is far too long for my experiments.

    Please suggest how I can improve this. Questions:

      1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
      2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
      3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
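
    One general direction, sketched below in Python with hypothetical table and column names, is to replace per-record lookups with a streaming read and bulk writes: a server-side cursor on the Postgres side, and large batched append() calls on the PyTables side instead of one insert per record.

        import psycopg2
        import tables

        BATCH = 50000

        class Row(tables.IsDescription):
            tr_id = tables.Int64Col(pos=0)
            q_id = tables.Int64Col(pos=1)

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor(name="train_stream")   # server-side (named) cursor
        cur.itersize = BATCH
        cur.execute("SELECT tr_id, q_id FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("out.h5", "w")
        tbl = h5.create_table("/", "train", Row)

        batch = []
        for rec in cur:                  # rows stream lazily from the server
            batch.append(rec)
            if len(batch) >= BATCH:
                tbl.append(batch)        # one bulk append per 50k rows
                batch = []
        if batch:
            tbl.append(batch)

        h5.close()
        conn.close()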

    Read the article

  • How to catch 'exceptions' for out-of-order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .NET / VS 2010 that needs to handle out-of-order execution gracefully (but not allow it - if that makes sense!?).

    For example, I have two receive activities inside a flowchart, one called Initialize and the other called GetValue. In most cases Initialize should be called first and GetValue after (as modeled in the flowchart). However, if GetValue is executed before Initialize, I do not want to return an "out of order" exception (although when I look at the WCF test client, I can't actually see an exception), but instead a custom message saying something like "you must initialize first".

    In theory I could model this with lots of parallel activities and conditions to check Initialized / Running / Terminated, etc. But the business process I am modelling is very, very similar to a state machine... except it must handle people executing things in the wrong order.

    Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted, and then handle it. I have done some research around enabling AllowBufferedReceive. However, I don't want to enable out-of-order execution (I don't think), but instead give a detailed response when it happens. I've looked at the new beta state machine template for WF 4, but I'm not sure it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated.

    [EDIT] To help clarify... Sorry, it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine, i.e. certain functions can only be executed in certain states. However, the standard specifies that if the calling client tries to execute a function it is not meant to, then a warning should be issued... for example, "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not need multiple if/elses to handle all the scenarios where something could be executed out of order. I'd like to catch an out-of-order exception (but I don't think there is such an exception, as it's not in the debugger) and rethrow it.
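
    As a language-neutral illustration of the SCORM wording (answer out-of-order calls with a warning rather than faulting the channel), here is a minimal guard sketched in Python; method and message names are stand-ins for the real RTE vocabulary:

        class Rte:
            def __init__(self):
                self.initialized = False
                self.data = {}

            def initialize(self):
                if self.initialized:
                    return None, "already initialized"
                self.initialized = True
                return None, None

            def get_value(self, key):
                if not self.initialized:
                    # out-of-order call: report a warning, don't throw
                    return None, "you cannot use GetValue(), because you have not yet Initialized"
                return self.data.get(key), None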

    Read the article

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an account on SO to ask this :)

    Assume this simplified example: building a web application to manage projects... The application has the following requirements/rules:

      1. Users should be able to create projects by entering the project name.
      2. Project names cannot be empty.
      3. Two projects can't have the same name.

    I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). In my Application Layer I have the following ProjectService.cs class:

        public class ProjectService
        {
            private IProjectRepository ProjectRepo { get; set; }

            public ProjectService(IProjectRepository projectRepo)
            {
                ProjectRepo = projectRepo;
            }

            public void CreateNewProject(string name)
            {
                IList<Project> projects = ProjectRepo.GetProjectsByName(name);
                if (projects.Count > 0)
                    throw new Exception("Project name already exists.");

                Project project = new Project(name);
                ProjectRepo.InsertProject(project);
            }
        }

    In my Domain Layer, I have the Project.cs class and the IProjectRepository.cs interface:

        public class Project
        {
            public int ProjectID { get; private set; }
            public string Name { get; private set; }

            public Project(string name)
            {
                ValidateName(name);
                Name = name;
            }

            private void ValidateName(string name)
            {
                if (name == null || name.Equals(string.Empty))
                {
                    throw new Exception("Project name cannot be empty or null.");
                }
            }
        }

        public interface IProjectRepository
        {
            void InsertProject(Project project);
            IList<Project> GetProjectsByName(string projectName);
        }

    In my Infrastructure Layer, I have the implementation of IProjectRepository, which does the actual querying (the code is irrelevant here).

    I don't like two things about this design:

      1. I've read that the repository interfaces should be part of the domain but the implementations should not. That makes no sense to me, since I think the domain shouldn't call the repository methods (persistence ignorance); that should be a responsibility of the services in the application layer. (Something tells me I'm terribly wrong.)

      2. The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered across two different places, making it harder (IMHO) to see what's going on.

    So, my question is: from a DDD perspective, is this modelled correctly, or would you do it in a different way?

    Read the article

  • What facets have I missed for creating a 3-person guerrilla dev team?

    - by Penguinix
    Sorry for the Windows developers out there; this solution is for Macs only. This set of applications accounts for: usability testing, screen capture (video and still), version control, task lists, bug tracking, a developer IDE, a web server, a blog, shared doc editing on the web, team and individual chat, email, databases and continuous integration. It assumes your team members provide their own machines, and one person has a spare old computer to be the source repository and web server. All for under $200.

    Usability

      - Silverback licenses = 3 x $49.95 - "Spontaneous, unobtrusive usability testing software for designers and developers."

    Source control server and clients (multiple options)

      - Subversion = Free - an open source version control system.
      - Versions (currently in beta) = Free - "Versions provides a pleasant way to work with Subversion on your Mac."
      - Diffly = Free - "Diffly is a tool for exploring Subversion working copies. It shows all files with changes and, clicking on a file, shows a highlighted view of the changes for that file. When you are ready to commit Diffly makes it easy to select the files you want to check-in and assemble a useful commit message."

    Bug/feature/defect tracking (multiple options)

      - Bugzilla = Free - Bugzilla is a "Defect Tracking System" or "Bug-Tracking System". Defect tracking systems allow individuals or groups of developers to keep track of outstanding bugs in their product effectively. Most commercial defect-tracking software vendors charge enormous licensing fees.
      - Trac = Free - Trac is an enhanced wiki and issue tracking system for software development projects.

    Database server & clients

      - MySQL = Free
      - CocoaMySQL = Free

    Web server

      - Apache = Free

    Development and build tools

      - XCode = Free
      - CruiseControl = Free - CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds.

    Collaboration tools

      - Writeboard = Free
      - Ta-da List = Free
      - Campfire chat for 4 users = Free
      - WordPress = Free - "WordPress is a state-of-the-art publishing platform with a focus on aesthetics, web standards, and usability. WordPress is both free and priceless at the same time."
      - Gmail = Free - "Gmail is a new kind of webmail, built on the idea that email can be more intuitive, efficient, and useful."

    Screen capture (video / still)

      - Jing = Free - "The concept of Jing is the always-ready program that instantly captures and shares images and video…from your computer to anywhere."

    Lots of great responses:

      - TeamCity [Yo|||]
      - Skype [Eric DeLabar]
      - FogBugz [chakrit]
      - iChat AV and Screen Sharing (built-in to OS) [amrox]
      - Google Docs [amrox]

    Read the article

  • string categorization strategies

    - by Andrew Heath
    I'm the one-man dev team on a fledgling military history website. One aspect of the site is a catalog of ~1,200 individual battles, including the nations and formations (regiments, divisions, etc.) which took part. The formation information (as well as the other battle info) was manually imported from a series of books by a 10-man volunteer team. The formations were listed in groups with varying formatting and abbreviation patterns. At the time I set up the data collection forms, I couldn't think of a good way to process that data... and elected to store it all as strings in the MySQL database and sort it out later. Well, "later" - as it tends to happen - has arrived. :-)

    Each battle has 2+ records in the database, one for each nation that participated. Each record has a formations text string listing the formations present as the volunteer chose to enter them. Some real examples:

        39th Grenadier Rgmt, 26th Volksgrenadier Division
        2nd Luftwaffe Field Division, 246th Infantry Division
        247th Rifle Division, 255th Tank Brigade
        2nd Luftwaffe Field Division, SS Cavalry Division
        28th Tank Brigade, 158th Rifle Division, 135th Rifle Division, 81st Tank Brigade, 242nd Tank Brigade
        78th Infantry Division
        3rd Kure Special Naval Landing Force, Tulagi Seaplane Base personnel
        1st Battalion 505th Infantry Regiment

    The ultimate goal is for each individual force to have an ID, so that its participation can be traced throughout the battle database. Formation hierarchy, such as the final item above - 1st Battalion (of the) 505th Infantry Regiment - also needs to be preserved. In that case, 1st Battalion and 505th Infantry Regiment would be split, but 1st Battalion would be flagged as belonging to the 505th.

    In database terms, I think I want to pull the formation field out of the current battle info table and create three new tables:

        FORMATION           [id] [name]
        FORMATION_HIERARCHY [id] [parent] [child]
        FORMATION_BATTLE    [f_id] [battle_id]

    It's simple to explain but complicated to enact. What I'm looking for from the SO community is just some tips on how best to tackle this problem. Ideally there's some sort of method to solving this that I'm not aware of. However, as a last resort, I could always code a classification framework and call my volunteers back to sort through 2,500+ records...
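
    As a first pass, much of this can be attacked with pattern-matching before calling the volunteers back; anything the patterns cannot classify goes into a much smaller pile for manual review. A rough Python sketch (the regex is a hypothetical starting point, tuned only to the "Nth Battalion <parent>" form shown above):

        import re

        HIER = re.compile(r"^(?P<child>\d+(?:st|nd|rd|th) Battalion)\s+(?P<parent>.+)$")

        def parse_formations(field):
            # returns (name, parent_or_None) pairs for one battle record
            units = []
            for part in field.split(","):
                name = part.strip()
                m = HIER.match(name)
                if m:
                    units.append((m.group("child"), m.group("parent")))
                else:
                    units.append((name, None))
            return units

        print(parse_formations("1st Battalion 505th Infantry Regiment, 78th Infantry Division"))
        # [('1st Battalion', '505th Infantry Regiment'), ('78th Infantry Division', None)]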

    Read the article

  • How to set which version of the VC++ runtime Visual Studio 2005 targets

    - by TallGuy
    I have an application that contains a VC++ project (along with C# projects). Previously (i.e. during the last year or so), when a build was done, Visual Studio 2005 appeared to target VC++ runtime version 8.0.50727.762. At least, that is what the Assembly.dll.intermediate.manifest file tells me:

        <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
        <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762'
                                processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
            </dependentAssembly>
          </dependency>
        </assembly>

    This version number matches the Visual Studio 2005 version number. The application worked fine when deployed to the web server. The sun was shining, the birds were singing, and all was right with the world.

    Now something has changed. I don't know what - a security patch, an obscure Visual Studio setting, or something else. Now Visual Studio 2005 seems to be targeting a different version of the VC++ runtime:

        <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
        <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.4053'
                                processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
            </dependentAssembly>
          </dependency>
        </assembly>

    When I deploy the application to the web server, I get the dreaded "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. (Exception from HRESULT: 0x800736B1)" error.

    This problem occurs even when I recompile previous versions of the application. I can absolutely guarantee that nothing at all has changed in the solution - we zip up the entire contents of the solution as part of the build process and archive it. I have unzipped a number of these archives to a temp directory, verified that the previous manifest file refers to 8.0.50727.762, recompiled using exactly the same command at the command line, and then verified that the new manifest file now refers to 8.0.50727.4053.

    I am using Microsoft Visual Studio 2005 version 8.0.50727.762 (SP.050727-7600) and Microsoft Visual C++ 2005 77646-008-0000007-41610. Why would Visual Studio switch to a different (in fact, newer) version of the VC++ runtime? How do I specify which version it should use? What is going wrong here?

    Read the article

  • Getting Started with a Maven + JAXB project in IntelliJ IDEA

    - by Em Ae
    I am completely new to IntelliJ IDEA, and I am looking for a step-by-step process to set up a basic project. My project depends on Maven + JAXB classes, so I need a Maven project so that when I compile it, the JAXB objects are generated by Maven plugins. I started like this:

      1. I created a blank project, say MaJa.
      2. Added a Maven module to it.
      3. Added the following settings in POM.XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>MaJa</groupId>
            <artifactId>MaJa</artifactId>
            <version>1.0</version>

            <dependencies>
                <dependency>
                    <groupId>javax.xml.bind</groupId>
                    <artifactId>jaxb-api</artifactId>
                </dependency>
                <dependency>
                    <groupId>com.sun.xml.bind</groupId>
                    <artifactId>jaxb-impl</artifactId>
                    <version>2.1</version>
                </dependency>
            </dependencies>

            <build>
                <plugins>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>jaxb2-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>xjc</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <schemaDirectory>${basedir}/src/main/resource/api/MaJa</schemaDirectory>
                            <packageName>com.rimt.shopping.api.web.ws.v1.model</packageName>
                            <outputDirectory>${build.directory}</outputDirectory>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    First of all, are these the right settings? I tried clicking Make/Compile 'MaJa' from the project's right-click menu, and it didn't do anything. I look forward to your replies.

    Read the article

  • Is it possible to read infinity or NaN values using input streams?

    - by Drise
    I have some input to be read by an input filestream (for example):

        -365.269511 -0.356123 -Inf 0.000000

    When I use std::ifstream mystream; to read from the file into some double d1 = -1, d2 = -1, d3 = -1, d4 = -1; (assume mystream has already been opened and the file is valid):

        mystream >> d1 >> d2 >> d3 >> d4;

    mystream ends up in the fail state. I would expect

        std::cout << d1 << " " << d2 << " " << d3 << " " << d4 << std::endl;

    to output -365.269511 -0.356123 -1 -1. I would want it to output -365.269511 -0.356123 -Inf 0 instead. This set of data was output using C++ streams. Why can't I do the reverse process (read in my own output)? How can I get the functionality I seek?

    From MooingDuck:

        #include <iostream>
        #include <limits>
        using namespace std;

        int main()
        {
            double myd = std::numeric_limits<double>::infinity();
            cout << myd << '\n';
            cin >> myd;
            cout << cin.good() << ":" << myd << endl;
            return 0;
        }

        Input:  inf
        Output: inf
                0:inf

    See also: http://ideone.com/jVvei. Also related to this problem is NaN parsing, even though I do not give examples for it.

    Read the article

  • How does the rsync algorithm correctly identify repeating blocks?

    - by Kai
    I'm on a personal quest to learn how the rsync algorithm works. After some reading and thinking, I've come up with a situation where I think the algorithm fails, and I'm trying to figure out how this is resolved in an actual implementation.

    Consider this example, where A is the receiver and B is the sender:

        A = abcde1234512345fghij
        B = abcde12345fghij

    As you can see, the only change is that 12345 has been removed. Now, to make this example interesting, let's choose a block size of 5 bytes (chars). Hashing the values on the sender's side using the weak checksum gives the following values list:

        abcde|12345|fghij

        abcde -> 495
        12345 -> 255
        fghij -> 520

        values = [495, 255, 520]

    Next we check to see if any hash values differ in A. If there's a matching block, we can skip to the end of that block for the next check. If there's a non-matching block, then we've found a difference. I'll step through this process:

      1. Hash the first block. Does this hash exist in the values list? abcde -> 495 (yes, so skip)
      2. Hash the second block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
      3. Hash the third block. Does this hash exist in the values list? 12345 -> 255 (yes, so skip)
      4. Hash the fourth block. Does this hash exist in the values list? fghij -> 520 (yes, so skip)
      5. No more data; we're done.

    Since every hash was found in the values list, we conclude that A and B are the same. Which, in my humble opinion, isn't true. It seems to me this will happen whenever there is more than one block that shares the same hash. What am I missing?
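
    For intuition, here is a toy sketch in Python. It is not real rsync - the real algorithm hashes the receiver's blocks, lets the sender roll a byte-at-a-time window over its own file, and confirms weak matches with a strong checksum - but it shows the part the walkthrough above leaves out: the matcher produces an instruction stream that reconstructs the sender's file, so a receiver-side block that is never referenced simply drops out.

        def weak(block):
            # the toy "weak checksum" from the example: sum of byte values
            # (abcde -> 495, 12345 -> 255, fghij -> 520)
            return sum(ord(c) for c in block)

        def delta(old, new, size=5):
            # hash the receiver's (old) blocks, then walk the sender's (new)
            # data; unmatched bytes would be sent as literals
            table = {weak(old[i:i + size]): i for i in range(0, len(old), size)}
            instructions, i = [], 0
            while i < len(new):
                offset = table.get(weak(new[i:i + size]))
                if offset is not None:   # real rsync double-checks with a strong hash
                    instructions.append(("copy", offset))
                    i += size
                else:
                    instructions.append(("literal", new[i]))
                    i += 1
            return instructions

        def rebuild(old, instructions, size=5):
            return "".join(old[v:v + size] if op == "copy" else v
                           for op, v in instructions)

        A = "abcde1234512345fghij"   # receiver's copy
        B = "abcde12345fghij"        # sender's copy
        print(delta(A, B))           # [('copy', 0), ('copy', 10), ('copy', 15)]
        print(rebuild(A, delta(A, B)) == B)   # True: one of A's two 12345 blocks is never referenced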

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few KB, sometimes a few dozen KB, very occasionally a few hundred KB, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:

        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Xml.BufferBuilder.AddBuffer()
           at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
           at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlTextReader.Read()
           at System.Xml.XmlReader.ReadElementString()
           at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
           at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    I've read around, and it seems the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check of StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:

      - Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this failure. Some of the runs that fail had succeeded at that stage previously.

      - Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least changing it would take a long time, and in the meantime I still have the problem).

      - Can you do something to the LOH to make it less likely to fail? ... Now this is the most fruitful course. It's a 32-bit process (it has to be, for various political, technical and boring reasons), but there are normally hundreds of MB free (multiples of the largest amount for which we've seen failures).

      - Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.

    Question: any advice or suggestions for things to try?

    Read the article
