Search Results

Search found 13227 results on 530 pages for 'memory efficiency'.

Page 462/530

  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application). With time (after a few application recycles) we see an imbalance between CPUs -- some have 8 ODBC processes running, some only 5. That leads to a CPU time imbalance too. Up to this point we assumed that a CPU is assigned to each ODBC process in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That is not the case, though. Is there any information on how the ODBC pool decides which CPU to choose for every newly allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • Cannot read values from [NSUserDefaults standardUserDefaults] after synchronize

    - by Nonlinearsound
    In the application delegate's didFinishLaunching method, I am using the following code to build up a new NSDictionary to be used as the new default settings for the user:

        NSNumber *testValue = (NSNumber*)[[NSUserDefaults standardUserDefaults] objectForKey:@"settingsversion"];
        if (testValue == nil) {
            NSNumber *numNewDB = [NSNumber numberWithBool:NO];
            NSNumber *numFirstUse = [NSNumber numberWithBool:YES];
            NSDate *dateLastStatic = [NSDate date];
            NSDate *dateLastMobile = [NSDate date];
            NSNumber *numSettingsversion = [NSNumber numberWithFloat:1.0];
            NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
                numNewDB, @"newdb",
                numFirstUse, @"firstuse",
                numSettingsversion, @"settingsversion",
                dateLastStatic, @"laststaticupdate",
                dateLastMobile, @"lastmobileupdate",
                nil];
            [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }

    Later, in another view controller, I am trying to read back a value from that same dictionary, saved as the NSUserDefaults -- well, at least I thought it would be, but I don't get any valid object pointer for the desired member lastUpdate there. In the .h file:

        NSDate *lastUpdate;

    In the .m file, in a member function:

        lastUpdate = (NSDate *)[[NSUserDefaults standardUserDefaults] objectForKey:@"laststaticupdate"];

    Even if I print out the content of [NSUserDefaults standardUserDefaults], I only get this:

        2010-04-29 15:13:22.322 myApp[4136:207] Content of UserDefaults: <NSUserDefaults: 0x11d340>

    This leads me to the conclusion that there is no standardUserDefaults dictionary somewhere in memory, or that it cannot be identified as such a structure. Edit: every time I restart the app on the device, the check for testValue is nil and I build up the dictionary again, but after one run it should be persistent on the phone, right? Am I doing something wrong somewhere in between? I have the feeling that I haven't really understood how to load and save settings persistently for an application on the iPhone. Is there anything I have to do in addition to this -- integrating a settings.bundle in Xcode, or saving the dictionary manually to the Documents folder? Can someone help me out here? Thanks a lot!

    Read the article

  • Need Advice on designing ATL inproc Server (dll) that serves as both a source and a sink of events.

    - by Andrew
    Hi, I need to design an ATL in-proc server that, besides exposing methods and properties, can also fire events (source) and serve as a sink for a third-party COM control that fires events. I would assume that this is a fairly common requirement. I can also foresee several "gotchas" that I would like to read up on before commencing the design. My questions/concerns are:

    - Can someone point me to an example?
    - Which threading model should I use?
    - Should I have a separate COM object for the sink?
    - Should I, and how do I, protect certain memory? For example, my server will receive data from the third-party control. It will save this and, in some cases, fire an event to interested clients. The interested clients will request the data through a standard method or property.

    I did try to research this myself. I can find many examples of COM servers that are sources, and some that are sinks, but never both. The only post I did find was this: http://www.generation-nt.com/us/atl-control-an-event-source-sink-help-9098542.html which strongly advocates putting the sink on a separate COM object. Any leads, tutorials or ideas would be much appreciated. Thanks, Andrew

    Read the article

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on an ASP.NET WebForm. My problem is that I need to filter the data returned -- some of the data that is being returned should not be displayed to the user. I'm aware of DataTable.Select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable so I can databind it to the GUI control. But more importantly, the filtering I need to do isn't something that can easily be put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is create a new DataTable, read everything out of the original, add to the new one what is appropriate, and then databind the new one to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable isn't likely to be enough that copying them all in memory will cause too much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?
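
    One way to get exactly that callback-style filter is the clone-and-import pattern below -- a minimal sketch, where the column name "ElementName" and the helper GetExcludedElements() are illustrative, not from the original system:

        // Clone the schema, import only the rows the predicate accepts, and bind the copy.
        public static DataTable FilterRows(DataTable source, Predicate<DataRow> keep)
        {
            DataTable result = source.Clone();      // same columns/schema, no rows
            foreach (DataRow row in source.Rows)
            {
                if (keep(row))
                    result.ImportRow(row);
            }
            return result;
        }

        // Usage against the excluded-elements array:
        string[] excluded = GetExcludedElements();  // hypothetical helper
        DataTable visible = FilterRows(original,
            row => Array.IndexOf(excluded, (string)row["ElementName"]) < 0);
        grid.DataSource = visible;
        grid.DataBind();

    On .NET 3.5+ the same idea can be written as original.AsEnumerable().Where(...).CopyToDataTable(), which also yields a bindable DataTable.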

    Read the article

  • JConsole does not connect when a program is waiting for a console input

    - by Calm Storm
    Hi, I have a problem similar to the one mentioned here. I have a program which launches a Selenium browser and refreshes it every 2 seconds. My idea here was to monitor the memory usage of my application as the page is being hit. This console program does a System.in.read, and when the user presses the Enter key, it stops the browser-refresh thread. I run the program and it runs as I would expect it to. But when I fire up JConsole and attach it to the process, it looks like JConsole waits forever to attach. This is apparently because of the System.in.read, as shown in the JConsole source. In my code it makes perfect sense to terminate the browser when the user presses something, so is there any alternative available at all? (Posted a code sample below.)

        package com.ekanathk;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Test;

        public class MultiThread {

            @Test
            public void testSimple() throws Exception {
                List<Integer> x = new ArrayList<Integer>();
                for (int i = 0; i < 1000; i++) {
                    x.add(i);
                }
                System.out.print("Press Enter to stop (try attach a jconsole now....");
                System.in.read();
            }
        }

    The problem here seems to be that any System.in.read (either in the main thread or any other thread) completely blocks JConsole. I guess the problem essentially boils down to multiple threads requesting system input at the same time? Is there a solution?

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We used Apache with JBoss for hosting our application, but we found some issues related to the thread handling of mod_jk. Our website is a low-traffic site, with a maximum of 200-300 concurrent users during peak activity. As the traffic grows (not in terms of concurrent users, but in terms of cumulative requests coming to our server), the server stops serving requests for a long time; it doesn't crash, but it cannot serve requests for up to 20 minutes. The JBoss server console showed that 350 threads were busy on both servers, although there was enough free memory, say more than 1-1.5 GB (2 JBoss servers were used, 64-bit, with 4 GB RAM allocated to JBoss). In order to check the problem we were using the JBoss and Apache web consoles, and we could see threads sitting in the S state for minutes at a time, even though our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads were not from our application classes but from AJP port 8009. Could somebody help me with this? Somebody else might also have hit this issue and solved it somehow. In case any more information is required, let me know. Also, is mod_proxy better than using mod_jk, or are there other problems with mod_proxy which could be fatal for me if I switch? The versions I used are as follows: Apache 2.0.52, JBoss 4.2.2, mod_jk 1.2.20, JDK 1.6, operating system RHEL 4. Thanks for the help.

    Read the article

  • ASP.NET 2.0 and COM Port Communication

    - by theaviator
    Hello guys, I have a managed DLL which communicates with devices attached to COM/serial ports. The desktop WinForms application sends requests on the ports and receives/stores data in memory. In the WinForms app I have added a reference to the DLL and I am using its methods. This works well. Now there is a situation where I need to show this data from the serial/COM port on a web page, and users should also be able to send requests to the ports using this DLL. I have made a web app in ASP.NET (2.0) and added a reference to the DLL. I am able to use the DLL: it communicates on the COM port upon a button click on the web page, and the response is shown on the web page. However, I am not happy with the approach and strongly feel that it is a bad one. Also, the development server crashes after 3-4 requests. What is the best approach in this scenario? If I use a Windows service, how would my ASP.NET app communicate with the Windows service? Or can this be done easily using WCF? I have never used WCF, nor any .NET Remoting technique. Please suggest the best architecture in this scenario. Thank you.
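
    One commonly suggested shape for this -- shown here only as a rough sketch, where the contract, the SerialLibrary.Read call and the net.tcp address are assumed names, not part of the original DLL -- is to host the serial-port DLL inside a Windows service behind a self-hosted WCF endpoint, so the ASP.NET worker process never touches the COM port directly:

        // Contract shared between the Windows service and the web app (illustrative names).
        [ServiceContract]
        public interface IPortService
        {
            [OperationContract]
            string ReadLatest(string port);
        }

        public class PortService : IPortService
        {
            public string ReadLatest(string port)
            {
                // Delegate to the existing managed serial DLL here (hypothetical call).
                return SerialLibrary.Read(port);
            }
        }

        // In the Windows service's OnStart: self-host the endpoint.
        var host = new ServiceHost(typeof(PortService),
            new Uri("net.tcp://localhost:8523/ports"));   // port number is an assumption
        host.AddServiceEndpoint(typeof(IPortService), new NetTcpBinding(), "");
        host.Open();

        // In the ASP.NET page: call it as a client.
        var factory = new ChannelFactory<IPortService>(new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8523/ports"));
        string latest = factory.CreateChannel().ReadLatest("COM1");

    Note that WCF requires .NET 3.0+, so this only applies if the ASP.NET 2.0 site can run on the 3.0/3.5 runtime.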

    Read the article

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, we're using EF4 in a fairly large system and occasionally run into problems because EF4 is unable to convert certain expressions into SQL. At present, we either have to do some fancy footwork (DB/code) or just accept the performance hit and allow the query to be executed in memory. Needless to say, neither of these is ideal, and the hacks we've sometimes had to use reduce readability/maintainability. What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously some things, like .NET method calls, will always have to be client-side, but some functionality, like date comparisons (e.g. "Group by weeks in LINQ to Entities"), should be doable. I've Googled, but perhaps I'm using the wrong terminology, as all I get is information about the new features of EF4 SQL generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS2010 Ultimate, .NET 4 (non-client profile) and EF4. The app is in ASP.NET and is running in a 64-bit environment, in case it makes a difference.
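
    For the date-comparison case specifically, EF4's built-in SQL functions may already cover the gap without touching the provider -- a hedged example, where Orders and OrderDate are placeholder entity names rather than anything from the system described above:

        using System.Data.Objects.SqlClient;   // SqlFunctions

        // SqlFunctions.DatePart is translated to T-SQL DATEPART, so the grouping
        // stays server-side instead of falling back to in-memory evaluation.
        var byWeek = from o in context.Orders
                     group o by SqlFunctions.DatePart("week", o.OrderDate) into g
                     select new { Week = g.Key, Count = g.Count() };

    For functions beyond the built-in set, EF4 also allows mapping database UDFs with EdmFunctionAttribute, which is usually less work than subclassing the SQL 2008 provider.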

    Read the article

  • Please criticize this method

    - by Jakob
    Hi, I've been looking around the net for some tab-button close functionality, but all the solutions I found had some complicated event handler, and I wanted to try to keep it simple. I might have broken good code ethics doing so, so please review this method and tell me what is wrong.

        public void AddCloseItem(string header, object content)
        {
            //Create tabitem with header and content
            StackPanel headerPanel = new StackPanel() { Orientation = Orientation.Horizontal, Height = 14 };
            headerPanel.Children.Add(new TextBlock() { Text = header });
            Button closeBtn = new Button()
            {
                Content = new Image() { Source = new BitmapImage(new Uri("images/cross.png", UriKind.Relative)) },
                Margin = new Thickness() { Left = 10 }
            };
            headerPanel.Children.Add(closeBtn);
            TabItem newTabItem = new TabItem() { Header = headerPanel, Content = content };

            //Add close button functionality
            closeBtn.Tag = newTabItem;
            closeBtn.Click += new RoutedEventHandler(closeBtn_Click);

            //Add item to list
            this.Add(newTabItem);
        }

        void closeBtn_Click(object sender, RoutedEventArgs e)
        {
            this.Remove((TabItem)((Button)sender).Tag);
        }

    So what I'm doing is storing the TabItem in the button's Tag property, and then when the button is clicked I just remove the TabItem from my ObservableCollection, and the UI is updated appropriately. Am I using too much memory by saving the TabItem to the Tag property?

    Read the article

  • C++ -- Is there an implicit cast here from Fred* to auto_ptr<Fred>?

    - by q0987
    Hello all, I saw the following code:

        #include <new>
        #include <memory>
        using namespace std;

        class Fred; // Forward declaration
        typedef auto_ptr<Fred> FredPtr;

        class Fred {
        public:
            static FredPtr create(int i)
            {
                return new Fred(i); // Is there an implicit cast here? If not, how can we return
                                    // a Fred* with return value as FredPtr?
            }
        private:
            Fred(int i=10) : i_(i) { }
            Fred(const Fred& x) : i_(x.i_) { }
            int i_;
        };

    Please see the question listed in the create function. Thank you.

    Update, based on comments: yes, the code cannot pass VC 8.0, which reports

        error C2664: 'std::auto_ptr<_Ty>::auto_ptr(std::auto_ptr<_Ty> &) throw()' : cannot convert parameter 1 from 'Fred *' to 'std::auto_ptr<_Ty> &'

    The code was copied from the C++ FAQ 12.15. However, after replacing

        return new Fred(i);

    with

        return auto_ptr<Fred>(new Fred(i));

    the code passes the VC 8.0 compiler. But I am not sure whether or not this is a correct fix.

    Read the article

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find more information on offreg.dll here: MSDN. Currently, while attempting to read a value from an open registry hive/key, I receive the following error: 234, or ERROR_MORE_DATA. Here is the .h declaration of ORGetValue:

        DWORD ORAPI ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code that I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue",
            SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue,
            out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;

        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling, or a second call with an adjusted buffer, or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
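
    ERROR_MORE_DATA generally means the caller did not supply a large enough buffer, and marshaling pvData as "out string" never supplies one. Below is a sketch of the usual two-call pattern; the byte[] marshaling, and the assumption that ORGetValue accepts a null data pointer on the size query, are editorial guesses that should be checked against the offreg.dll documentation:

        // Alternate P/Invoke declaration (assumed, not verified): pvData as a raw buffer,
        // pcbData passed by ref so it carries the buffer size in and the data size out.
        [DllImport("offreg.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.StdCall)]
        static extern uint ORGetValue(IntPtr handle, string subKey, string value,
            out uint type, byte[] data, ref uint cbData);

        static string GetStringValue(IntPtr key, string subKey, string name)
        {
            uint type;
            uint cb = 0;

            // First call: ask only for the required size (no buffer yet).
            uint rc = ORGetValue(key, subKey, name, out type, null, ref cb);
            if (rc != 0 && rc != 234 /* ERROR_MORE_DATA */)
                throw new System.ComponentModel.Win32Exception((int)rc);

            // Second call: supply a buffer of that size.
            byte[] buffer = new byte[cb];
            rc = ORGetValue(key, subKey, name, out type, buffer, ref cb);
            if (rc != 0)
                throw new System.ComponentModel.Win32Exception((int)rc);

            // REG_SZ data is UTF-16 with a trailing null terminator.
            return System.Text.Encoding.Unicode.GetString(buffer, 0, (int)cb).TrimEnd('\0');
        }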

    Read the article

  • Behavior of retained property while holder is retained

    - by Aurélien Vallée
    Hello everyone, I am a beginner Objective-C programmer, coming from the C++ world. I find it very difficult to understand the memory management offered by NSObject :/ Say I have the following class:

        @interface User : NSObject
        {
            NSString* name;
        }
        @property (nonatomic,retain) NSString* name;
        - (id) initWithName: (NSString*) theName;
        - (void) release;
        @end

        @implementation User
        @synthesize name;

        - (id) initWithName: (NSString*) theName
        {
            if ( self = [super init] ) {
                [self setName:theName];
            }
            return self;
        }

        - (void) release
        {
            [name release];
            [super release];
        }
        @end

    Now, considering the following code, I can't understand the retain count results:

        NSString* name = [[NSString alloc] initWithCString:/*C string from sqlite3*/];
        // (1) name retainCount = 1
        User* user = [[User alloc] initWithName:name];
        // (2) name retainCount = 2
        [whateverMutableArray addObject:user];
        // (3) name retainCount = 2
        [user release];
        // (4) name retainCount = 1
        [name release];
        // (5) name retainCount = 0

    At (4), the retain count of name decreased from 2 to 1. But that's not correct: there is still the instance of user inside the array that points to name! The retain count of a variable should only decrease when the retain count of a referring variable reaches 0, that is, when it is dealloced, not merely released.

    Read the article

  • C# CF: Restart a thread

    - by Ed
    Hi all, suppose you have a form with a button that starts/stops a thread (NOT pausing or interrupting -- I really need to stop the thread!). Check out this code:

        Constructor()
        {
            m_TestThread = new Thread(new ThreadStart(ButtonsThread));
            m_bStopThread = false;
        }

        ButtonClick
        {
            // If the thread is not running, start it
            m_TestThread.Start();

            // If the thread is running, stop it
            m_bStopThread = true;
            m_TestThread.Join();
        }

        ThreadFunction()
        {
            while(!m_bStopThread)
            {
                // Do work
            }
        }

    Two questions (remember, this is the Compact Framework):

    - How can I know if the thread is running? (I cannot seem to access m_pThreadState, and I've tried the C++ GetThreadExitCode(), but it gives false results.)
    - Most important question: once I have stopped the thread, I cannot restart it, most probably because m_TestThread.m_fStarted is still set (and it is private, so I cannot access it). Thus m_TestThread.Start() throws an exception (ThreadStateException). Stopping the thread with Abort() doesn't solve it. If I set m_TestThread = null; it works, but then I create a memory leak. The GC doesn't clean up either, even if I wait for xx seconds.

    Anybody has an idea? All help highly appreciated! Grtz E
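
    A finished Thread object genuinely cannot be restarted, so the usual pattern is to create a fresh Thread for every start and track the running state yourself -- a minimal sketch with illustrative member names; the old Thread object simply becomes garbage, so there is no leak involved:

        private Thread m_worker;
        private volatile bool m_running;
        private volatile bool m_stop;

        private void ButtonClick()
        {
            if (!m_running)
            {
                m_stop = false;
                m_running = true;
                m_worker = new Thread(new ThreadStart(WorkLoop));  // new instance every start
                m_worker.Start();
            }
            else
            {
                m_stop = true;     // ask the loop to exit
                m_worker.Join();   // wait for it to finish
            }
        }

        private void WorkLoop()
        {
            while (!m_stop)
            {
                // Do work
            }
            m_running = false;     // loop finished; the next click creates a new Thread
        }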

    Read the article

  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: in our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly with any arbitrary page, and with GET or POST requests alike. For example, the log files indicate the system took 15 seconds to complete a request for a jQuery script file, and 10 seconds for another small CSS file. Similar problems: "Random Slow Downs". Production environment: Windows Server 2008 Standard (32-bit), app pool running in integrated mode, ASP.NET MVC 1.0. We have tried the following, and made these observations: we moved the application to a stand-alone web server, but it didn't help; we have never noticed the same issue on the server for any plain ASP.NET application; the app pool settings are fine, with no abrupt recycles/shutdowns; there are no CPU spikes or memory problems; there are no delays caused by SQL queries or the like. It seems as if something is causing a delay along the HTTP pipeline, or the worker process is seeing the request late. Looking for other suggestions. -- Thanks

    Read the article

  • Compiling C lib and OCaml exe using it, all using ocamlfind

    - by Magnus
    I'm trying to work out how to use ocamlfind to compile a C library and an OCaml executable using that C library. I put together a set of rather silly example files.

        % cat sillystubs.c
        #include <stdio.h>

        #include <caml/mlvalues.h>
        #include <caml/memory.h>
        #include <caml/alloc.h>
        #include <caml/custom.h>

        value caml_silly_silly( value unit )
        {
            CAMLparam1( unit );
            printf( "%s\n", __FILE__ );
            CAMLreturn( Val_unit );
        }

        % cat silly.mli
        external silly : unit -> unit = "silly_silly"

        % cat foo.ml
        open Silly
        open String

        let _ =
            print_string "About to call into silly";
            silly ();
            print_string "Called into silly"

    I believe the following is the way to compile up the library:

        % ocamlfind ocamlc -c sillystubs.c
        % ar rc libsillystubs.a sillystubs.o
        % ocamlfind ocamlc -c silly.mli
        % ocamlfind ocamlc -a -o silly.cma -ccopt -L${PWD} -cclib -lsillystubs

    Now I don't seem to be able to use the created library, though:

        % ocamlfind ocamlc -custom -o foo foo.cmo silly.cma
        /usr/bin/ld: cannot find -lsillystubs
        collect2: ld returned 1 exit status
        File "_none_", line 1, characters 0-1:
        Error: Error while building custom runtime system

    The OCaml tools are somewhat mysterious to me, so any pointers would be most welcome.

    Read the article

  • Why did this work with Visual C++, but not with gcc?

    - by Carlos Nunez
    I've been working on a senior project for the last several months now, and a major sticking point in our team's development process has been dealing with rifts between Visual C++ and gcc. (Yes, I know we all should have had the same development environment.) Things are about finished up at this point, but I ran into a moderate bug just today that had me wondering whether Visual C++ is easier on newbies (like me) by design. In one of my headers, there is a function that relies on strtok to chop up a string, do some comparisons and return a string with a similar format. It works a little something like the following:

        int main()
        {
            string a, b, c;
            //Do stuff with a and b.
            c = get_string(a,b);
        }

        string get_string(string a, string b)
        {
            const char * a_ch, b_ch;
            a_ch = strtok(a.c_str(),",");
            b_ch = strtok(b.c_str(),",");
        }

    strtok is infamous for being great at tokenizing, but equally great at destroying the original string being tokenized. Thus, when I compiled this with gcc and tried to do anything with a or b, I got unexpected behavior, since the separator was completely removed from the string. Here's an example in case I'm being unclear: if I set a = "Jim,Bob,Mary" and b = "Grace,Soo,Hyun", they would end up as a = "JimBobMary" and b = "GraceSooHyun" instead of staying the same, as I wanted. However, when I compiled this under Visual C++, I got back the original strings and the program executed fine. I tried dynamically allocating memory for the strings and copying them the "standard" way, but the only way that worked was using malloc() and free(), which I hear is discouraged in C++. While I'm curious about that, the real question I have is this: why did the program work when compiled in VC++, but not with gcc? (This is one of many conflicts I experienced while trying to make the code cross-platform.) Thanks in advance! -Carlos Nunez

    Read the article

  • Making two windows using CreateWindowEx() - C

    - by Jamie Keeling
    Hello, I have a Windows form that has a simple menu and performs a simple operation. I want to be able to create another window, with all the functionality of a menu bar, message pump etc., as a separate thread, so I can share the results of the operation with the second window. I.e.:

    1) Form A opens, Form B opens as a separate thread
    2) Form A performs the operation
    3) Form A passes the results via memory to Form B
    4) Form B displays the results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that using CreateWindow will allow me to make another window, but again I'm not sure how to access the message pump so I can respond to certain events like WM_CREATE on the second window. I hope it makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this does compile, no window shows at all when it runs.

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            //Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch(message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                hwnd = CreateWindowEx(
                    0,
                    "ChildWClass",
                    (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow,
                    NULL,
                    NULL,
                    NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?

    Read the article

  • Files not waiting for each other

    - by Sunny
    I have two batch files, as follows, in which file2.bat depends on file1.bat's output.

    file1.bat:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (
        FOR /f "delims=" %%a IN ( Source.txt ) DO (
            ECHO %%a|FIND "Appprocess.exe" >NUL
            IF NOT ERRORLEVEL 1 SET keystring1=%%a
            FOR %%b IN (App1 App2 App3 App4 App5 App6 ) DO (
                ECHO %%a|FIND "%%b" >NUL
                IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
            )))>result.txt
        GOTO :EOF

    file2.bat:

        @echo off
        setlocal enabledelayedexpansion
        (for /f "tokens=1,2" %%a in (memory.txt) do (
            for /f "tokens=5" %%c in ('find " %%a " ^< result.txt ') do echo %%c %%b
        ))> new.txt

    file1.bat usually takes 60 seconds to complete its execution. In master.bat I am calling the two files like this:

        call file1.bat
        call file2.bat

    but file2.bat is not waiting for file1.bat to complete its execution. I even tried to call file2.bat from within file1.bat as below, but it still does not wait for file1.bat to complete:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (
        FOR /f "delims=" %%a IN ( Source.txt ) DO (
            ECHO %%a|FIND "HsvDataSource.exe" >NUL
            IF NOT ERRORLEVEL 1 SET keystring1=%%a
            FOR %%b IN (EUHFMPROD USHFMPROD TL2TEST GSHFMPROD TL2PROD GSARCH1213 TL2FY13) DO (
                ECHO %%a|FIND "%%b" >NUL
                IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
            )))>file2.txt
        GOTO :EOF
        call file1.bat

    I also tried the start option below, but it had no effect:

        start file1.bat /wait
        call file2.bat

    I don't understand why this is happening. Any ideas?

    Read the article

  • Is there a PHP benchmark that meets these specific criteria? [closed]

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck, my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for a meaningful PHP benchmark with the following characteristics:

    - The PHP source needs to be in a single file.
    - It should solve a real-world problem. No silly looping over empty methods etc.
    - It needs to be simple to set up -- no databases, hard-to-find input files, etc. Simple text input and output preferred.
    - It should not use features that are slow in Java (BigInteger, trigonometric functions, etc).
    - It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable variables").
    - It should not over-rely on built-in libraries, e.g. MD5, crypt, etc.
    - It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred.

    Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/autocomplete control like the one you find in Visual Studio. It's all fine until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualized as well, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and I execute the filtering + binding on every key-up, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is also running. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering, much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organized into some kind of index that would speed up searching. Any suggestions or code samples are most welcome. Thanks.
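
    One way to get the "delay" without it feeling like one is to debounce the key-up handler -- a minimal sketch, assuming a 200 ms pause is acceptable and that ApplyFilter/GetWordAtCaret stand in for the existing filter-and-bind code:

        using System.Windows.Threading;

        private readonly DispatcherTimer _filterTimer =
            new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(200) };
        private string _currentWord;

        public CompletionPopup()   // hypothetical constructor
        {
            _filterTimer.Tick += (s, e) =>
            {
                _filterTimer.Stop();
                ApplyFilter(_currentWord);   // run the LINQ filter and set ItemsSource once
            };
        }

        private void Editor_KeyUp(object sender, KeyEventArgs e)
        {
            _currentWord = GetWordAtCaret();
            _filterTimer.Stop();
            _filterTimer.Start();            // restart: only the last keystroke triggers a filter
        }

    Because the timer ticks on the dispatcher thread, only one filter-and-bind pass ever runs at a time, which removes the out-of-sync behaviour while still feeling immediate when the user pauses typing.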

    Read the article

  • Optimize Use of Ramdisk for Eclipse Development

    - by Eric J.
    We're developing Java/SpringSource applications with Eclipse on 32-bit Vista machines with 4GB RAM. The OS exposes roughly 3.3GB of RAM due to reservations for hardware etc. in the virtual address space. I came across several Ramdisk drivers that can create a virtual disk from the OS-hidden RAM and am looking for suggestions how best to use the 740MB virtual disk to speed development in our environment. The slowest part of development for us is compiling as well as launching SpringSource dm Server. One option is to configure Vista to swap to the Ramdisk. That works, and noticeably speeds up development in low memory situations. However, the 3.3GB available to the OS is often sufficient and there are many situations where we do not use the swap file(s) much. Another option is to use the Ramdisk as a location for temporary files. Using the Vista mklink command, I created a hard link from where the SpringSource dm Server's work area normally resides to the Ramdisk. That significantly improves server startup times but does nothing for compile times. There are roughly 500MB still free on the Ramdisk when the work directory is fully utilized, so room for plenty more. What other files/directories might be candidates to place on the Ramdisk? Eclipse-related files? (Parts of) the JDK? Is there a free/open source tool for Vista that will show me which files are used most frequently during a period of time to reduce the guesswork?

    Read the article

  • NHibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User {Name = "ayende"};
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);

            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }

        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User entry (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite DB after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well. Cheers, Berryl
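
    The usual fix in in-memory SQLite fixtures is to give every test a pristine database rather than trying to repair the session after a failed flush -- a sketch under the assumption that the fixture holds the NHibernate Configuration and ISessionFactory (the exact SchemaExport.Execute overload varies slightly between NHibernate versions):

        [SetUp]
        public void SetUp()
        {
            _session = _sessionFactory.OpenSession();
            // Recreate the schema on this session's connection; with SQLite ":memory:"
            // the database only exists for the lifetime of that connection.
            new SchemaExport(_configuration)
                .Execute(false, true, false, _session.Connection, null);
        }

        [TearDown]
        public void TearDown()
        {
            _session.Dispose();   // drops the in-memory database along with the connection
        }

    A session that has thrown during Flush is unusable by design, so the "don't flush the Session after an exception occurs" failures disappear once each test stops sharing a session with the one that deliberately violates the constraint.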

    Read the article

  • Eclipse does not start on Windows 7

    - by van
    Suddenly, Eclipse has decided to stop working today. The last thing I did was close all perspectives and close Eclipse. When launching Eclipse from the command prompt using "eclipse.exe -clean", the splash screen loads for a split second and then exits. When I run the command

        eclipsec -consoleLog -debug

    it produces the following output:

        Start VM: -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        -os win32
        -ws win32
        -arch x86_64
        -showsplash d:\devtools\eclipse\\plugins\org.eclipse.platform_4.3.0.v20130605-2000\splash.bmp
        -launcher d:\devtools\eclipse\eclipsec.exe
        -name Eclipsec
        --launcher.library d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20130521-0416\eclipse_1503.dll
        -startup d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
        --launcher.appendVmargs
        -product org.eclipse.epp.package.standard.product
        -consoleLog
        -debug
        -vm C:/Program Files/Java/jdk1.6.0_37/bin\..\jre\bin\server\jvm.dll
        -vmargs
        -Dosgi.requiredJavaVersion=1.6
        -Dhelp.lucene.tokenizer=standard
        -Xms4096m
        -Xmx4096m
        -XX:MaxPermSize=512m
        -Djava.class.path=d:\devtools\eclipse\\plugins/org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Checking Task Manager shows no Java process running, and both CPU and memory usage are very low. I have tried re-installing Eclipse and restarting my machine, but running eclipsec -consoleLog -debug from the command prompt still results in the same error:

        Error occurred during initialization of VM
        Incompatible minimum and maximum heap sizes specified

    Read the article

  • Load Balancing of PHP/MYSQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am writing a script in PHP/MySQL (CodeIgniter), and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to do load balancing. I mean that, for example, right now I will rent a medium dedicated server with 2 GB RAM, a 200 GB disk and a good processor, and this will be enough for, let's say, half a year of incoming users. But when they become more and more -- it's a social network, and at night the server can expect 500-1500 or 5000-8000 users online -- I wonder if there is a way to just add a second server with some configuration which will bear the next load, then another one, and so on:

        <?
        if($answer=YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load balancing solution. I would also be extremely thankful if you could tell me whether, for such purposes, I should move to PostgreSQL or Firebird. Which of them will be easier to handle in the future? I am currently getting something like 60 queries for all the data on the mysite.com/users/show/$userId page... maybe too much, but anyway, after some optimization it could be 20-30.

    Read the article

  • Python: OSX Library for fast full screen jpg/png display

    - by Parand
    Frustrated by the lack of a simple ACDSee equivalent for OS X, I'm looking to hack one up for myself. I'm looking for a GUI library that accommodates:

    - Full-screen image display
    - High-quality image fit-to-screen (for display)
    - Low memory usage
    - Fast display
    - Reasonable learning curve (the simpler the better)

    It looks like there are several choices, so which is the best? Here are some I've run across: PyOpenGL, PyGame, PyQt, wxPython. I don't have any particular experience with any of these, nor any strong desire to become an expert -- I'm looking for the simplest solution. What do you recommend?

    [Update] For those not familiar with ACDSee, here's what it does that I care about:

    - Simple list/thumbnail display of images in a directory
    - Sort by name/size/type
    - Ability to view images full screen
    - Single-key delete while viewing full screen
    - Move to next/previous image while viewing full screen
    - Ability to select a group of images for: move to / copy to directory, delete, resize

    ACDSee has a bunch of niceties as well, such as remembering directories you've moved images to in the past, remembering your resize settings, displaying the total size of the images you've selected, etc. I've tried most of the options I could find (including Xee) and none of them quite get there. Please keep in mind that this is a programming/library question, not a criticism of any of the existing tools.

    Read the article
