Search Results

Search found 13333 results on 534 pages for 'upper memory'.

Page 388/534

  • Time complexity of a sorting algorithm

    - by Passonate Learner
    The two programs below read n integers from a file and then answer q questions, each asking for the sum of the a-th through b-th integers. I think the upper program (Program 1) has worse time complexity than the lower (Program 2), but I'm having trouble calculating the time complexity of these two algorithms.

    [input sample] 5 3 5 4 3 2 1 2 3 3 4 2 4 [output sample] 7 5 9

    Program 1:

        #include <stdio.h>
        FILE *in=fopen("input.txt","r");
        FILE *out=fopen("output.txt","w");
        int n,q,a,b,sum;
        int data[1000];
        int main()
        {
            int i,j;
            fscanf(in,"%d%d",&n,&q);
            for(i=1;i<=n;i++) fscanf(in,"%d",&data[i]);
            for(i=0;i<q;i++) {
                fscanf(in,"%d%d",&a,&b);
                sum=0;
                for(j=a;j<=b;j++) sum+=data[j];
                fprintf(out,"%d\n",sum);
            }
            return 0;
        }

    Program 2:

        #include <stdio.h>
        FILE *in=fopen("input.txt","r");
        FILE *out=fopen("output.txt","w");
        int n,q,a,b;
        int data[1000];
        int sum[1000];
        int main()
        {
            int i,j;
            fscanf(in,"%d%d",&n,&q);
            for(i=1;i<=n;i++) fscanf(in,"%d",&data[i]);
            for(i=1;i<=n;i++) sum[i]=sum[i-1]+data[i];
            for(i=0;i<q;i++) {
                fscanf(in,"%d%d",&a,&b);
                fprintf(out,"%d\n",sum[b]-sum[a-1]);
            }
            return 0;
        }

    The program below reads n integers between 1 and m and sorts them. Again, I cannot calculate the time complexity.

    [input sample] 5 5 2 1 3 4 5 [output sample] 1 2 3 4 5

    Program:

        #include <stdio.h>
        FILE *in=fopen("input.txt","r");
        FILE *out=fopen("output.txt","w");
        int n,m;
        int data[1000];
        int count[1000];
        int main()
        {
            int i,j;
            fscanf(in,"%d%d",&n,&m);
            for(i=0;i<n;i++) {
                fscanf(in,"%d",&data[i]);
                count[data[i]]++;
            }
            for(i=1;i<=m;i++) {
                for(j=0;j<count[i];j++) fprintf(out,"%d ",i);
            }
            return 0;
        }

    It's ironic (or not) that I cannot calculate the time complexity of my own algorithms, but I have a passion to learn, so please, programming gurus, help me!
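
    For reference, a rough sketch of the bounds (treating each fscanf as O(1) and assuming the worst case where a query spans nearly the whole array):

        \begin{align*}
        \text{Program 1:}\;& O(n)\ \text{(read)} + q \cdot O(n)\ \text{(sum per query)} = O(n + qn)\\
        \text{Program 2:}\;& O(n)\ \text{(read)} + O(n)\ \text{(prefix sums)} + q \cdot O(1) = O(n + q)\\
        \text{Counting sort:}\;& O(n)\ \text{(count)} + O(n + m)\ \text{(output)} = O(n + m)
        \end{align*}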

    Read the article

  • Use javac fork attribute with IBM JDK

    - by avjaz
    Hi - I have a large Ant build that I'm working on, and it is currently running out of memory. One way I've read that can help mitigate this problem is to use javac fork="true" to run javac in a separate JVM. My problem is that I need to compile the project with the IBM JDK (this is not the JDK referenced by JAVA_HOME, and I would prefer it not to be). I tried setting the executable attribute of Ant's javac to the path of IBM's javac, but no joy (the project still won't compile). Ant's docs for the executable attribute state: "Complete path to the javac executable to use in case of fork="yes". Defaults to the compiler of the Java version that is currently running Ant. Ignored if fork="no". Since Ant 1.6 this attribute can also be used to specify the path to the executable when using jikes, jvc, gcj or sj." Does anyone have any ideas? Thanks

    Read the article

  • Entity Framework 4 relationship management in POCO Templates - More lazy than FixupCollection?

    - by Joe Wood
    I've been taking a look at the EF4 POCO templates in Beta 2. The FixupCollection looks fine for maintaining model correctness after updating a relationship collection property (i.e. adding to product.Orders would set the order.Product reference). But what about support for handling the scenario where some of those Order objects are removed from the context? This is the use-case of maintaining cascading deletes in the in-memory model. The old typed DataSet model used to handle this by performing the query through the container to derive the relationship results. Like the DataSet, this would require a reference to the ObjectContext inside the entity class so that it could query the top-level Order collection. Better support for Separation of Concerns in the ObjectContext would be required. It looks like EF is not suited to this use-case, which DataSets handled out of the box... am I right?

    Read the article

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm, as close to real-time as possible. The objectives of this question are to: Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load. Receive near-real-time feedback on a few key metrics about each box; these are simple metrics available via WMI such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit. Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time). Here are more details and things I've tried. I am not interested in logging; we have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us in real time. Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring. The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment. Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal; I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something, and if a client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
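
    For what it's worth, the roll-your-own WMI polling is only a few lines with System.Management. A minimal sketch, not a tested agent - the queries are standard WMI classes, but the overall shape of the client is my own assumption:

        using System;
        using System.Management; // add a reference to System.Management.dll

        // Minimal WMI poller sketch: read CPU load and free memory on one box.
        // In the client/server idea above, each web node would run something
        // like this on a timer and POST the numbers to a central collector.
        class WmiPoller
        {
            static void Main()
            {
                var cpuQuery = new ManagementObjectSearcher(
                    "SELECT LoadPercentage FROM Win32_Processor");
                foreach (ManagementObject cpu in cpuQuery.Get())
                    Console.WriteLine("CPU load: {0}%", cpu["LoadPercentage"]);

                var memQuery = new ManagementObjectSearcher(
                    "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
                foreach (ManagementObject os in memQuery.Get())
                    Console.WriteLine("Free physical memory: {0} KB", os["FreePhysicalMemory"]);
            }
        }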

    Read the article

  • MVVM Good Design. DataSet or a RowViewModel

    - by LnDCobra
    I have just started learning MVVM and I'm facing a dilemma. I have a main ViewModel, and inside this model I have a number of DataSets. Should I be creating a new ViewModel for each row inside the DataSet, or expose the DataSet itself as a DependencyProperty? For now the DataSet has about 20 rows inside it, and the thought of iterating through each row to create a ViewModel bound to each row might not be the best option for performance and memory reasons in the future, like when there are 1000+ rows. Should I still go ahead and create a RowViewModel, iterate through the DataSet, and have an ObservableCollection of them, or just expose the DataSet? Any help would be greatly appreciated.
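
    If the RowViewModel route wins out, the per-row wrapper can stay very thin. A minimal sketch, assuming a "Name" column purely for illustration (Field<T> and AsEnumerable need a reference to System.Data.DataSetExtensions):

        using System.Collections.ObjectModel;
        using System.ComponentModel;
        using System.Data;
        using System.Linq;

        // Thin wrapper around one DataRow so the View binds to properties
        // instead of to the DataSet directly.
        public class RowViewModel : INotifyPropertyChanged
        {
            private readonly DataRow _row;
            public RowViewModel(DataRow row) { _row = row; }

            public string Name   // "Name" is an assumed column for this sketch
            {
                get { return _row.Field<string>("Name"); }
                set { _row["Name"] = value; OnPropertyChanged("Name"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        // In the parent ViewModel - 20 wrappers (or even a few thousand) are cheap:
        // Rows = new ObservableCollection<RowViewModel>(
        //     table.AsEnumerable().Select(r => new RowViewModel(r)));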

    Read the article

  • com.sun.management.OperatingSystemMXBean use in an OSGi bundle

    - by Paul Whelan
    I have some legacy code, used to monitor my application's CPU, memory, etc., that I want to convert to a bundle. Now when I start this bundle it complains: Missing Constraint: Import-Package: com.sun.management; version="0.0.0". I had used the OperatingSystemMXBean to get access to stats on the JVM. My question is: can I use this class inside an OSGi container, and if so, how? Or should I use some other way to monitor my application? Pre-OSGi, I was making an RMI call to the application from a web front end to get the node's performance figures.

    Read the article

  • Best practice to display POI in iPhone's MapKit?

    - by iamj4de
    Assuming I have a database of POI with their respective coordinates (longitude & latitude), what would be the "standard" way to display the POI as annotations around the user's current location? To elaborate: given a zoom level, I guess I have to search through the database for all POI whose distance to the current location is less than a certain threshold, then create annotations for them. Or is there any smarter way? If the user zooms in/out or moves the map, will I need to redo the whole thing again? It seems that MapKit has a mechanism to cache/reuse annotations. Should I create a lot of them right away and let MapKit decide what to render when the visible region changes? I guess this would make the transition smoother, but also consume more memory. What is your experience with this? Thanks.

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and filled many times during the stored procedure; it is truncated because this is faster than delete. Do I get any performance increase by replacing this temporary table with a table variable, given that I then suffer the penalty of using delete (truncate does not work on table variables)? While table variables are mainly in memory and are generally faster than temp tables, do I lose that benefit by having to delete rather than truncate?

    Read the article

  • Is updating a double an atomic operation?

    - by Yan Cheng CHEOK
    In Java, updating double and long variables may not be atomic, as double/long are treated as two separate 32-bit variables. http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#28733 In C++, if I am using a 32-bit Intel processor and the Microsoft Visual C++ compiler, is updating a double (8 bytes) an atomic operation? I cannot find much mention of this behavior in the specification. When I say "atomic variable", here is what I mean: thread A tries to write 1 to variable x, thread B tries to write 2 to variable x, and we shall get the value 1 or 2 out of variable x, but never an undefined value.

    Read the article

  • NameValueCollection vs Dictionary<string,string>

    - by frankadelic
    Any reason I should use Dictionary<string,string> instead of NameValueCollection? (in C# / .NET Framework)

    Option 1, using NameValueCollection:

        // enter values:
        NameValueCollection nvc = new NameValueCollection()
        {
            {"key1", "value1"},
            {"key2", "value2"},
            {"key3", "value3"}
        };
        // retrieve values:
        foreach (string key in nvc.AllKeys)
        {
            string value = nvc[key];
            // do something
        }

    Option 2, using Dictionary<string,string>:

        // enter values:
        Dictionary<string, string> dict = new Dictionary<string, string>()
        {
            {"key1", "value1"},
            {"key2", "value2"},
            {"key3", "value3"}
        };
        // retrieve values:
        foreach (KeyValuePair<string, string> kvp in dict)
        {
            string key = kvp.Key;
            string val = kvp.Value;
            // do something
        }

    For these use cases, is there any advantage to using one versus the other? Any difference in performance, memory use, sort order, etc.?
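
    One concrete behavioural difference that often decides it (a small illustration, not from the question itself): NameValueCollection is non-generic and allows several values under one key, while Dictionary<string, string> is strongly typed and rejects duplicate keys:

        using System;
        using System.Collections.Generic;
        using System.Collections.Specialized;

        class NvcVsDictionary
        {
            static void Main()
            {
                var nvc = new NameValueCollection();
                nvc.Add("key1", "a");
                nvc.Add("key1", "b");                            // same key again is fine
                Console.WriteLine(nvc["key1"]);                  // "a,b" (comma-joined)
                Console.WriteLine(nvc.GetValues("key1").Length); // 2

                var dict = new Dictionary<string, string>();
                dict.Add("key1", "a");
                // dict.Add("key1", "b");                        // would throw ArgumentException
                dict["key1"] = "b";                              // overwrite instead
            }
        }

    NameValueCollection also keeps keys in insertion order, whereas a Dictionary's enumeration order is unspecified; Dictionary<string,string> lookups are usually a bit faster since the generic collection avoids the string-to-object conversions of the older one.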

    Read the article

  • Core Data iPhone how often should I call [managedObjectContext save:&error] when doing 50k record in

    - by jamone
    I will be doing an occasional import from XML into Core Data. I have around 50k entities that will be added. My question is how often I should call [managedObjectContext save:&error] - for every new entity added, every x entities, or just at the end of the 50k import? I am currently calling it for each entity. I tried only calling it around every 10k entities and the import speed went up drastically, but after the first 30k it would crash with: *** Terminating app due to uncaught exception 'NSGenericException', reason: '*** Collection <NSCFSet: 0x13e760> was mutated while being enumerated.' Before I spend too much time trying to diagnose what is going on there, I figured I'd check whether it's OK not to call save on every entity. Is the number of entities you can add before calling save limited by the amount of memory those entities are using?

    Read the article

  • Windows System Tray icons - controlling position

    - by Jamo
    I have a few old apps I've written (in Delphi) which for various reasons use a system tray icon. Most are using AppControls' TacTrayIcon or some other similar component. Here's my question: how does one control the position of a tray icon (i.e. where it is, say, relative to the system time - 1st position/"slot", 2nd position/"slot", etc.)? I recall seeing a demo (C#, if memory serves) that allowed the user to "shift icon to the left" and "shift icon to the right", but I don't recall how it was done. I'd like to allow the user to select which position they want the icon to appear in, for Windows 2000 through Windows 7. (I understand Windows 7 handles system tray stuff a little differently, but I haven't tested that out yet.) Thanks for any and all help.

    Read the article

  • What's the best way to create a Windows Mobile application with multiple screens in C#

    - by Joseph Earl
    I am creating a Windows Mobile application in C# and Visual Studio 2008. The application will have 5-6 main 'screens'. There will also be a bar (/area) with information (e.g. a title, whether the app is busy, etc.) above the screens, and a toolbar (or similar control) below the screens with 5-6 buttons (with images) to change the active screen (i.e. the screens will share the top bar and toolbar). What is the best way to implement this? Use multiple forms, and just include the toolbar and top bar in each? Use a single form and something like the Tab control (but customised) to contain the screens? Something else? Keep in mind a) memory usage and b) time to switch screens. Thanks in advance. Any links, pointers etc. are much appreciated.
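
    The single-form option usually boils down to one host panel whose child UserControl gets swapped. A rough sketch of that idea only - the class names are made up for illustration:

        using System.Windows.Forms;

        // One Form keeps the shared title bar and toolbar; each screen is a
        // UserControl swapped in and out of a central host panel.
        public class HomeScreen : UserControl { /* placeholder screen */ }

        public class MainForm : Form
        {
            private readonly Panel _screenHost = new Panel();
            private Control _current;

            public MainForm()
            {
                _screenHost.Dock = DockStyle.Fill;
                Controls.Add(_screenHost);
                // the title bar (Dock = Top) and toolbar (Dock = Bottom) go here
                ShowScreen(new HomeScreen());
            }

            public void ShowScreen(Control screen)
            {
                if (_current != null) _screenHost.Controls.Remove(_current);
                // optionally Dispose() the old screen here if it won't be reused,
                // which keeps memory usage down at the cost of slower switching
                _current = screen;
                _current.Dock = DockStyle.Fill;
                _screenHost.Controls.Add(_current);
            }

            static void Main() { Application.Run(new MainForm()); }
        }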

    Read the article

  • Passing a SAFEARRAY from C# to COM

    - by SlavaGu
    I use a 3rd-party COM component to find faces in a picture. One of the methods has the following signature, from the SDK: long FindMultipleFaces( IUnknown* pIDibImage, VARIANTARG* FacePositionArray ); Parameters: pIDibImage [in] - the image to search. FacePositionArray [out] - the array of FacePosition2 objects into which face information is placed. This array is in a safe array (VARIANT) of type VT_UNKNOWN. The size of the array dictates the maximum number of faces for which to search. This translates into the following C# method signature (from metadata): int FindMultipleFaces(object pIDibImage, ref object pIFacePositions); Being optimistic, I call it the following way but get an exception saying the memory is corrupt. The exception is thrown only when a face is present in the image. FacePosition2[] facePositions = new FacePosition2[10]; object positions = facePositions; int faceCount = FaceLocator.FindMultipleFaces(dibImage, ref positions); What's the right way to pass a SAFEARRAY to unmanaged code?

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose? The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, preparing for the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing. Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I was mentioning, and this is time-critical. Estimations using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each; w-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in a first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file? Are the optimizations mentioned in the upper part even necessary, or are there parsers known to be just as fast with the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any advice or recommendations. Thank you all very much in advance, and regards.
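
    Purely to illustrate the streaming-plus-index idea (shown in C# only because that is the most common language on this page; a StAX cursor in Java 6 reads almost identically, and the element/attribute names below are assumptions): a single forward-only pass can build an id lookup without ever holding the document in memory.

        using System;
        using System.Collections.Generic;
        using System.Xml;

        // Stream the large file once, keeping only an id -> element-name map
        // in memory instead of building a DOM.
        class IdIndexer
        {
            static void Main()
            {
                var index = new Dictionary<string, string>();
                var settings = new XmlReaderSettings { IgnoreWhitespace = true };
                using (XmlReader reader = XmlReader.Create("big.xml", settings)) // placeholder path
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType != XmlNodeType.Element) continue;
                        string name = reader.LocalName;         // "n", "w" or "r"
                        string id = reader.GetAttribute("id");  // assumed attribute name
                        if (id != null && (name == "n" || name == "w" || name == "r"))
                            index[id] = name;
                    }
                }
                Console.WriteLine("Indexed {0} ids", index.Count);
            }
        }

    On a desktop machine a few million id strings fit comfortably in such a map; on BlackBerry you would spill the index to a flat file of id/offset pairs, as the question already suggests.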

    Read the article

  • Code Golf - Banner Generation

    - by Claudiu
    When thanking someone, you don't want to just send them an e-mail saying "Thanks!", you want to have something FLASHY: Input: THANKS!! Output: TTT H H AAA N N K K SSS !!! !!! T H H A A NNN K K S !!! !!! T HHH AAA NNN KK SSS !!! !!! T H H A A N N K K S T H H A A N N K K SSS !!! !!! Write a program to generate a banner. You only have to generate upper-case A-Z along with spaces and exclamation points (what is a banner without an exclamation point?). All characters are made up of a 3x5 grid of the same character (so the S is a 3x5 grid made of S). All output should be on one row (so no newlines). Here are all the letters you need: Input: ABCDEFGHIJKL Output: AAA BBB CCC DD EEE FFF GGG H H III JJJ K K L A A B B C D D E F G H H I J K K L AAA BBB C D D EE FF G G HHH I J KK L A A B B C D D E F G G H H I J J K K L A A BBB CCC DD EEE F GGG H H III JJJ K K LLL Input: MNOPQRSTUVWX Output: M M N N OOO PPP QQQ RR SSS TTT U U V V W W X X MMM NNN O O P P Q Q R R S T U U V V W W X M M NNN O O PPP Q Q RR SSS T U U V V WWW X M M N N O O P QQQ R R S T U U V V WWW X M M N N OOO P QQQ R R SSS T UUU V WWW X X Input: YZ! Output: Y Y ZZZ !!! Y Y Z !!! YYY Z !!! Y Z YYY ZZZ !!! The winner is the shortest source code. Source code should read input from stdin, output to stdout. You can assume input will only contain [A-Z! ]. If you insult the user on incorrect input, you get a 10 character discount =P. I was going to require these exact 27 characters, but to make it more interesting, you can choose how you want them to look - whatever makes your code shorter! To prove that your letters do look like normal letters, show the output of the last three runs.

    Read the article

  • Doubt about Intel's IA-32 software developer manual

    - by Francesco Turco
    I'm studying Intel's IA-32 software developer manual. In particular, I'm reading the following manual: http://www.intel.com/Assets/PDF/manual/253666.pdf. Let's take the ADD instruction as an example. On page 79 it is written that you can add an r8 (8-bit register) to an r/m8 (8-bit register or memory location). A few rows below, it is also written that you can add an r/m8 to an r8. The question is: if I add two 8-bit registers, which instruction am I using? Thanks.

    Read the article

  • iOS File Saving Efficiency

    - by Guvvy Aba
    I was working on my iOS app, and my goal is to save a file that I am receiving from the internet bit by bit. My current setup is that I have an NSMutableData object and I append a bit of data to it as I receive my file. After the last "packet" is received, I write the NSData to a file and the process is complete. I'm kind of worried that this isn't the ideal way to do it because of the RAM limitations of a mobile device; it would be problematic to receive large files. My next thought was to use an NSFileHandle so that as the file arrives, it is saved to disk rather than held in (virtual) memory. In terms of speed and efficiency, which method do you think will work decently on an iOS device? I am currently using the first, NSMutableData approach. Is it worth changing my app to use NSFileHandle? Thanks in advance, Guvvy

    Read the article

  • Can't change data type on MS Access 2007

    - by khalidfazeli
    Hi all, I have a huge database (800MB) which contains a field called 'Date Last Modified'. At the moment this field has a Text data type, but I need to change it to a Date/Time field to carry out some queries. I have another, otherwise identical database with only 35MB of data inside it, and when I change the data type there it works fine, but when I try to change the data type on the big database it gives me an error: "Microsoft Office Access can't change the data type. There isn't enough disk space or memory." After doing some research, some sites mentioned changing the registry setting (MaxLocksPerFile); I tried that as well, but no luck :-( Can anyone help please?

    Read the article

  • Parsing Wiki XML Dumps ver0.4 just got tough

    - by syed
    Hello, I am trying to parse the Wikipedia XML dump using "Parse-MediaWikiDump-1.0.4" along with the "Wikiprep.pl" script. I guess this script works fine with the ver0.3 Wiki XML dumps but not with the latest ver0.4 dumps. I get the following error: Can't locate object method "page" via package "Parse::MediaWikiDump::Pages" at wikiprep.pl line 390. Also, in the "Parse-MediaWikiDump-1.0.4" documentation at http://search.cpan.org/~triddle/Parse-MediaWikiDump-1.0.4/lib/Parse/MediaWikiDump/Pages.pm, I read: "LIMITATIONS Version 0.4 This class was updated to support version 0.4 dump files from a MediaWiki instance but it does not currently support any of the new information available in those files." Any workarounds would help me get to the next level. Note: one may wonder why we cannot directly use a SAX or StAX parser instead; the Wikipedia dump is a single 25GB+ file, so stack/memory issues are obvious. Hence the above Perl script resolves this issue, but currently I am stuck with this version problem.

    Read the article

  • iPhone app running in Instruments fails with unrecognized selector

    - by Mark Smith
    I have an app that appears to run without problems in normal use. The Clang Static Analyzer reports no problems either. When I try to run it in Instruments, it fails with an unrecognized selector exception. The offending line is a simple property setter of the form: self.bar = baz; To figure out what's going on, I added an NSLog() call immediately above it: NSLog(@"class = %@ responds = %d", [self class], [self respondsToSelector:@selector(setBar:)]); self.bar = baz; On the emulator (without Instruments) and on a device, this shows exactly what I'd expect: class = Foo responds = 1 When running under Instruments, I get: class = Foo responds = 0 I'm stumped as to what could cause this. Perhaps a different memory location is getting tromped on when it's in the Instruments environment? Can anyone suggest how I might debug this?

    Read the article

  • Using memcpy to copy managed structures

    - by Haris Hasan
    Hi, I am working in mixed mode (managed C++ and C++ in one assembly). I am in a situation something like this: ManagedStructure ^ managedStructure = gcnew ManagedStructure(); Here I set different properties of managedStructure, then I call the "Method" given below and pass it &managedStructure. Method(void *ptrToStruct) { ManagedStructure ^ managedStructure2 = gcnew ManagedStructure(); memcpy(&managedStructure2, ptrToStruct, sizeof(managedStructure2)); } I have the following questions about this scenario: 1) Is it safe to use memcpy like this? If not, what is the alternative to achieve the same functionality? (I can't change the "Method" definition.) 2) I am not freeing any memory, as both structures are managed. Is that fine?

    Read the article

  • C++ debugging help for C# programmer

    - by ddm
    I'm embarrassed to post this, but it's been a while since I worked in C++; I've been with C# for a while. I'm converting old (not written by me) VS2003 and VS2005 C++ code to VS2008. In addition to lots of lumps during conversion, I want to add debug logging so I can monitor what is going on when I attach with WinDbg. I've searched the archives here and at MS, and I think it's Debugger.Log(...), but I'm not sure. I also remember, years ago, launching a debug monitor to catch the logging. So this is a call to some experts who have a better memory than I do: what call(s) can I make (without the DEBUG compile directive - I need to watch release code) to catch the logging in WinDbg? I followed a couple of debugging links from SO posts but they were dead. Thanx - Old Man.

    Read the article

  • Is it possible to stream a file to and from servers without holding the data in ram?

    - by Chris Denman
    Hi everyone! PHP question (new to PHP after 10 years of Perl, arrggghhh!). I have a 100MB file that I want to send to another server. I have managed to read the file in and "post" it without curl (I cannot use curl for this app). Everything is working fine on smaller files. However, with the larger files, PHP complains about not being able to allocate memory. Is there a way to open a file line by line and send it as a POST, also line by line? That way nothing is held in RAM, which would prevent my errors and get around the strict limitations. Chris
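
    The underlying technique, whatever the language, is to disable request buffering and copy the file through a small fixed-size buffer. A sketch of that idea in C# (used here only because it is the most common language on this page; the URL, path and buffer size are placeholders, and the PHP equivalent would be fopen/fread in a loop against a streaming HTTP client):

        using System.IO;
        using System.Net;

        // Stream a large file as an HTTP POST body without loading it into memory.
        class ChunkedUpload
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload"); // placeholder URL
                request.Method = "POST";
                request.AllowWriteStreamBuffering = false;   // don't buffer the whole body
                request.SendChunked = true;                  // no Content-Length required up front

                using (var file = File.OpenRead("big.bin"))  // placeholder path
                using (var body = request.GetRequestStream())
                {
                    var buffer = new byte[64 * 1024];        // only 64 KB in memory at a time
                    int read;
                    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                        body.Write(buffer, 0, read);
                }
                using (var response = request.GetResponse())
                {
                    // inspect the status / headers here
                }
            }
        }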

    Read the article

  • Thread and two-dimensional array in Objective-C?

    - by mactonny
    Hey guys, I am just starting to wrap my head around Objective-C and I am doing a little project on iPhone, and I just encountered a weird problem. I have to deal with images in my program, so I have a lot of local variables declared like temp[width][height]. If I am not using NSThread to perform the image processing, it all works fine. However, if I use NSThread, it keeps giving me EXC_BAD_ACCESS whenever I try to access a 2-D array declared like temp[width][height], so I have to allocate memory from the heap in order to have a 2-D array. That solves the problem, but I still don't get it. My first thought was a stack overflow, but it worked fine with one thread. I just don't get it.

    Read the article
