Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • Are NULL and nil equal?

    - by monish
    Hi guys, my question is: are NSNull and nil equal or not? I have an example, but I am confused about when they are equal and when they are not.

        NSNull *nullValue = [NSNull null];
        NSArray *arrayWithNull = [NSArray arrayWithObject:nullValue];
        NSLog(@"arrayWithNull: %@", arrayWithNull);
        id aValue = [arrayWithNull objectAtIndex:0];
        if (aValue == nil) {
            NSLog(@"equals nil");
        } else if (aValue == [NSNull null]) {
            NSLog(@"equals NSNull instance");
            if ([aValue isEqual:nil]) {
                NSLog(@"isEqual:nil");
            }
        }

    In this first case, NSNull and nil are not equal: it displays "equals NSNull instance".

        NSString *str = NULL;
        id str1 = nil;
        if (str1 == str) {
            printf("\n IS EQUAL........");
        } else {
            printf("\n NOT EQUAL........");
        }

    In the second case they are equal: it displays "IS EQUAL". Any help will be much appreciated. Thank you, Monish.
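    The distinction: nil is simply the zero object pointer, while [NSNull null] returns a real singleton object that exists precisely so collections such as NSArray, which cannot store nil, have a placeholder to hold. A minimal sketch (assuming Foundation, pre-ARC style):

        #import <Foundation/Foundation.h>

        int main(void) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            id nothing = nil;                 // zero pointer: no object at all
            id placeholder = [NSNull null];   // a real object meaning "no value"
            NSLog(@"nothing == nil? %d", nothing == nil);                             // 1
            NSLog(@"placeholder == nil? %d", placeholder == nil);                     // 0
            NSLog(@"placeholder == [NSNull null]? %d", placeholder == [NSNull null]); // 1 (singleton)
            [pool drain];
            return 0;
        }

    That is why the first snippet takes the NSNull branch: the array stored the placeholder object, which compares unequal to nil, while NULL and nil are both just zero pointers and therefore compare equal in the second snippet.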

    Read the article

  • Django + jQuery: Sometimes AJAX, but always DRY?

    - by Justin Myles Holmes
    Let's say I have an app (in Django) for which I sometimes (but not always) want to load content via AJAX. An easy example is logging in. When the user logs in, I don't want to refresh the page, just change things around. Yet if they are already logged in and then arrive at (or refresh) the same page, I want it to show the same content. So in the first case I obviously do some sort of AJAX login and update the page accordingly. Easy enough. But what about the second case? Do I go back through and add {% if user.authenticated %} all over the place? This seems cold, dark, and WET. On the other hand, I could wrap all the AJAXy stuff in a JavaScript function called loggedIn() and run that if the user is authenticated. But then I'm faced with two HTTP requests instead of one, which is also undesirable. So what's the standard solution here?
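    One common pattern, sketched below with hypothetical view and template names, is to keep the markup in a single template fragment: the full page includes the fragment server-side, while the AJAX handler returns just the fragment. The logged-in markup then lives in exactly one place, and only one HTTP request is made in either case. (request.headers is newer Django; older versions expose the same check as request.is_ajax().)

        from django.shortcuts import render

        def dashboard(request):
            context = {"user": request.user}
            # AJAX callers get only the fragment to splice into the page...
            if request.headers.get("x-requested-with") == "XMLHttpRequest":
                return render(request, "partials/user_panel.html", context)
            # ...while normal requests get the full page, whose template
            # {% include %}s the very same fragment.
            return render(request, "dashboard.html", context)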

    Read the article

  • How can I work around the fact that in C++, sin(M_PI) is not 0?

    - by Adam Doyle
    In C++:

        const double Pi = 3.14159265;
        cout << sin(Pi);  // displays: 3.58979e-009

    It SHOULD display the number zero. I understand this is because Pi is being approximated, but is there any way I can have a value of Pi hardcoded into my program that will return 0 for sin(Pi)? (A different constant, maybe?) In case you're wondering what I'm trying to do: I'm converting polar to rectangular, and while there are some printf() tricks I can do to print it as "0.00", it still doesn't consistently return decent values (in some cases I get "-0.00"). The lines that require sine and cosine are:

        x = r*sin(theta);
        y = r*cos(theta);

    BTW: my rectangular-to-polar conversion is working fine; it's just polar-to-rectangular. Thanks! Edit: I'm looking for a workaround so that I can print sin(some multiple of Pi) as a nice round number to the console (ideally without a thousand if-statements). Edit: In case anyone's curious, this is what I landed on:

        double sin2(double theta)  // in degrees
        {
            double s = sin(toRadians(theta));
            if (fabs(s - (int)s) < 0.000001) {
                return floor(s + 0.5);
            }
            return s;
        }

    where toRadians() is a macro that converts to radians.
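    A standalone sketch of the same idea (the helper name is hypothetical): snap any result that lands within a small epsilon of an integer, which handles sin and cos of every multiple of Pi/2 without per-case if-statements, and also normalizes "-0.00" to "0.00":

        #include <cmath>
        #include <cstdio>

        // Round v to the nearest integer when it is within eps of one.
        double snap(double v, double eps = 1e-9)
        {
            double r = std::floor(v + 0.5);
            return (std::fabs(v - r) < eps) ? r : v;
        }

        int main()
        {
            const double pi = std::acos(-1.0);            // pi at full double precision
            std::printf("%.2f\n", snap(std::sin(pi)));    // 0.00, not 3.58979e-009
            return 0;
        }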

    Read the article

  • C++ Returning a Reference

    - by Devil Jin
    Consider the following code, where I am returning a double& and a string&. It works fine in the case of the double but not in the case of the string. Why the difference in behavior? In both cases the compiler does not even throw the warning "returning address of local variable or temporary", because I am returning a reference.

        #include <iostream>
        #include <string>
        using namespace std;

        double &getDouble() {
            double h = 46.5;
            double &hours = h;
            return hours;
        }

        string &getString() {
            string str = "Devil Jin";
            string &refStr = str;
            return refStr;
        }

        int main() {
            double d = getDouble();
            cout << "Double = " << d << endl;
            string str = getString();
            cout << "String = " << str.c_str() << endl;
            return 0;
        }

    Output:

        $ ./a.exe
        Double = 46.5
        String =
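    Both functions return a dangling reference to a local that is destroyed when the function returns, which is undefined behavior in both cases; the double only appears to work because its bytes happen to survive on the stack, while the destroyed string's internal buffer does not. A sketch of the usual fix is simply to return by value:

        #include <string>

        // Returning by value copies (or moves) the result out before the local dies.
        double getDouble() {
            return 46.5;
        }

        std::string getString() {
            return "Devil Jin";   // safe: the caller receives its own string
        }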

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of number of records). We observed that DataSet performs much better, but runs into constraints as the number of records increases: past 100K+ we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, there can be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case it takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
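    One hybrid sketch (.NET 2.0 style; the method and ProcessBatch are hypothetical): stream rows with a DataReader so the connection does the heavy lifting, but buffer them into small DataTables so each batch gets DataSet-style convenience while memory stays bounded regardless of total row count:

        using System.Data;
        using System.Data.SqlClient;

        static void ProcessInBatches(string connString, string query, int batchSize)
        {
            using (SqlConnection conn = new SqlConnection(connString))
            using (SqlCommand cmd = new SqlCommand(query, conn))
            {
                conn.Open();
                using (IDataReader reader = cmd.ExecuteReader())
                {
                    // Build an empty table with the reader's schema.
                    DataTable batch = new DataTable();
                    for (int i = 0; i < reader.FieldCount; i++)
                        batch.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                    object[] row = new object[reader.FieldCount];
                    while (reader.Read())
                    {
                        reader.GetValues(row);
                        batch.Rows.Add(row);
                        if (batch.Rows.Count >= batchSize)
                        {
                            ProcessBatch(batch);   // hypothetical per-record work
                            batch.Clear();         // release the buffered rows
                        }
                    }
                    if (batch.Rows.Count > 0)
                        ProcessBatch(batch);       // final partial batch
                }
            }
        }

        static void ProcessBatch(DataTable batch)
        {
            // Hypothetical: the per-record workflow tasks run against this slice.
        }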

    Read the article

  • What is faster: Java or C#? (Or good old C?)

    - by Rexsung
    I'm currently deciding on a platform on which to build a scientific computational product, and am deciding between C#, Java, and plain C with Intel's compiler on Core 2 Quad CPUs. It's mostly integer arithmetic. My benchmarks so far show Java and C about on par with each other, with .NET/C# trailing by about 5%. However, a number of my coworkers claim that .NET with the right optimizations will beat both of these, given enough time for the JIT to do its work. I have always assumed that the JIT would have done its job within a few minutes of the app starting (probably a few seconds in my case, as it's mostly tight loops), so I'm not sure whether to believe them. Can anyone shed any light on the situation? Would .NET beat Java? (Or am I best just sticking with C at this point?) The code is highly multithreaded and the data sets are several terabytes in size. Haskell, Erlang, etc. are not options in this case, as there is a significant quantity of existing legacy C code that will be ported to the new system, and porting C to Java/C# is a lot simpler than porting it to Haskell or Erlang (unless of course those provide a significant speedup). Edit: We are considering moving to C# or Java because they may, in theory, be faster. Every percent we can shave off our processing time saves us tens of thousands of dollars per year. At this point we are just trying to evaluate whether C, Java, or C# would be faster.
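    On the JIT question, one way to test the coworkers' claim directly is to measure only after an explicit warm-up phase; a sketch in Java (the workload is a hypothetical stand-in for the real integer arithmetic):

        public class BenchSketch {
            // Integer-heavy inner loop standing in for the real workload.
            static long work(int n) {
                long acc = 0;
                for (int i = 0; i < n; i++) {
                    acc += (long) i * i % 7;
                }
                return acc;
            }

            public static void main(String[] args) {
                for (int i = 0; i < 10000; i++) {
                    work(1000);                    // warm-up: let the JIT compile the loop
                }
                long t0 = System.nanoTime();
                long result = work(100000000);     // timed run on compiled code
                long t1 = System.nanoTime();
                System.out.printf("result=%d time=%.1f ms%n", result, (t1 - t0) / 1e6);
            }
        }

    If the timed run stops improving as the warm-up count is increased, the JIT has already done its work, consistent with the intuition that tight loops compile within seconds rather than minutes.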

    Read the article

  • Problem with fork exec kill when redirecting output in perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, the script kills it and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell that executes my program in the case of redirection. I would like to have the output redirection but still be able to kill the program on a timeout. Any ideas on how I could do that? A simplified version of my script:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";
        my $pid = fork();
        if ($pid == 0) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }
        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ($kid == 0) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ($kid == -1) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.
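    A sketch of one way around this (program name as above): when exec is given a single string containing shell metacharacters such as >, Perl runs it via /bin/sh -c, so the pid from fork is the shell's, not the program's. Doing the redirection in the child with open and then passing exec a LIST avoids the shell entirely:

        my $pid = fork();
        if ($pid == 0) {
            # Redirect in the child itself, then exec a LIST: no shell is
            # interposed, so $pid really is very_long_program's pid and the
            # later kill reaches it.
            open(STDOUT, '>', 'out.txt') or die "Can't redirect stdout: $!";
            open(STDERR, '>&', \*STDOUT) or die "Can't redirect stderr: $!";
            exec('very_long_program')    or die "Couldn't exec: $!";
        }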

    Read the article

  • How to use a remote service?

    - by LEE YONGGUN
    Hi, I'm trying to use a remote service between two simple applications, but it's not easy for me, so any advice will help. Here's my case. I made one app that plays music in a service. It has two components: an Activity that controls the service using three buttons (play, pause, stop), and it is working fine. The other app is just a simple Activity which has four buttons: bind, play, stop, unbind. When I click bind, it's confirmed by a Toast message, but when I click the play button, an error occurs. I want to control the first app's music-playing service from the second Activity, so I'm trying to use a remote service. I made the same .aidl file in each app project. In the .aidl file I defined the methods "playing" and "stoping", and I implemented those methods in the music service class; the implementation simply uses an intent with startService and stopService. In DDMS there is "java.lang.SecurityException: Binder invocation to an incorrect interface". That's my situation, so please tell me what the problem is. Any advice could help me. Thanks, Gun.
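    That particular SecurityException usually means the two copies of the .aidl file are not identical, including the package declaration: the generated interface descriptor strings must match exactly on both sides of the binder. A sketch of the client side (interface and action names hypothetical):

        // Assumes both apps compile an identical IMusicService.aidl in the same package.
        private IMusicService service;

        private final ServiceConnection conn = new ServiceConnection() {
            @Override
            public void onServiceConnected(ComponentName name, IBinder binder) {
                // Converts the remote binder into the typed AIDL proxy.
                service = IMusicService.Stub.asInterface(binder);
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                service = null;
            }
        };

        // Bind using the action the service declares in its manifest <intent-filter>:
        // bindService(new Intent("com.example.music.CONTROL"), conn, Context.BIND_AUTO_CREATE);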

    Read the article

  • Java: why won't the little console game repeat?

    - by Jony Kale
    Okay, what I have so far: you enter the game and write "spin" to the console, and the program enters the while loop. In the while loop, if the entered int is -1, it should go back (set the console input back to "" and let the user select which game to play). Problem: instead of going back and letting me select "spin" again, the program exits. Why is this happening? How can I fix it?

        private static Scanner console = new Scanner(System.in);
        private static Spin spin = new Spin();
        private static String inputS = "";
        private static int inputI = 0;
        private static String[] gamesArray = new String[] {"spin", "tof"};
        private static boolean spinWheel = false;
        private static boolean tof = false;

        public static void main (String[] args) {
            if (inputS.equals("")) {
                System.out.println("Welcome to the system!");
                System.out.print("Please select a game: ");
                inputS = console.nextLine();
            }
            while (inputS.equals("spin")) {
                System.out.println("Welcome to the spin game! Please write 1 to spin. and -1 to exit back");
                inputI = console.nextInt();
                switch (inputI) {
                    case 1:
                        break;
                    case -1:
                        inputI = 0;
                        inputS = "";
                        break;
                }
            }
        }
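    The program exits because nothing runs after the while loop: the if that reads the game choice executes only once, so when inputS is reset to "" control simply falls off the end of main. A sketch of one fix (restructuring hypothetical) wraps the menu itself in an outer loop:

        while (true) {
            System.out.print("Please select a game (or 'quit'): ");
            String choice = console.nextLine();
            if (choice.equals("quit")) {
                break;                      // leave the program explicitly
            }
            if (choice.equals("spin")) {
                int n = 0;
                while (n != -1) {
                    System.out.println("Write 1 to spin, -1 to exit back");
                    n = console.nextInt();
                    console.nextLine();     // consume the newline nextInt() leaves behind
                }
                // falls through to the menu prompt again
            }
        }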

    Read the article

  • Windows Messages Bizarreness

    - by jameszhao00
    Probably just a gross oversight of some sort, but I'm not receiving any WM_SIZE messages in the message loop. However, I do receive them in the WndProc. I thought the windows loop gave messages out to WndProc?

        LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
                // this message is read when the window is closed
                case WM_DESTROY:
                {
                    // close the application entirely
                    PostQuitMessage(0);
                    return 0;
                }
                break;
                case WM_SIZE:
                    return 0;
                    break;
            }
            printf("wndproc - %i\n", message);
            // Handle any messages the switch statement didn't
            return DefWindowProc(hWnd, message, wParam, lParam);
        }

    ... and now the message loop...

        while (TRUE)
        {
            // Check to see if any messages are waiting in the queue
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // translate keystroke messages into the right format
                TranslateMessage(&msg);
                // send the message to the WindowProc function
                DispatchMessage(&msg);
                // check to see if it's time to quit
                if (msg.message == WM_QUIT)
                {
                    break;
                }
                if (msg.message == WM_SIZING)
                {
                    printf("loop - resizing...\n");
                }
            }
            else
            {
                // do other stuff
            }
        }

    Read the article

  • WM_NCHITTEST and secondary monitor to left of primary monitor

    - by AlanKley
    The described setup, with a second monitor to the left of the primary, causes WM_NCHITTEST to deliver negative coordinates, which is apparently not supported according to this post. I have a custom control written in Win32 that is like a group control. It has a small clickable area. No mouse events reach my control when the window containing the custom control lies on a second monitor to the left of the primary monitor: Spy++ shows WM_NCHITTEST messages but no mouse messages. When the window is moved to the primary monitor, or the secondary monitor is positioned to the right of the primary (so all points are positive), everything works fine. Below is how WM_NCHITTEST is handled in my custom control. In general it needs to return HTTRANSPARENT so as not to obscure other controls placed inside of it. Does anybody have suggestions for what coordinate translation I need to do, and what to return in response to WM_NCHITTEST, to get mouse messages translated and sent to my control when it is on a second monitor placed to the left of the primary?

        case WM_NCHITTEST:
        {
            POINT Pt = {LOWORD(lP), HIWORD(lP)};
            int i;
            ScreenToClient(hWnd, &Pt);
            if (PtInRect(&rClickableArea, Pt))
            {
                return (DefWindowProc(hWnd, Msg, wP, lP));
            }
        }
        lReturn = HTTRANSPARENT;
        break;
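    The usual culprit here is LOWORD/HIWORD themselves: they truncate to unsigned 16-bit values, so negative screen coordinates from a monitor left of (or above) the primary come out mangled before ScreenToClient ever runs. The GET_X_LPARAM and GET_Y_LPARAM macros from windowsx.h sign-extend correctly; a sketch of the handler with that one change:

        #include <windowsx.h>

        case WM_NCHITTEST:
        {
            // Sign-extends properly for monitors left of the primary.
            POINT Pt = { GET_X_LPARAM(lP), GET_Y_LPARAM(lP) };
            ScreenToClient(hWnd, &Pt);
            if (PtInRect(&rClickableArea, Pt))
            {
                return DefWindowProc(hWnd, Msg, wP, lP);
            }
        }
        lReturn = HTTRANSPARENT;
        break;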

    Read the article

  • Core Data migration failing with "Can't find model for source store" but managedObjectModel for source is present

    - by Ira Cooke
    I have a Cocoa application using Core Data, which is now at the 4th version of its managed object model. My managed object model contains abstract entities, but so far I have managed to get migration working by creating appropriate mapping models and creating my persistent store using addPersistentStoreWithType:configuration:URL:options:error: with the NSMigratePersistentStoresAutomaticallyOption set to YES.

        NSDictionary *optionsDictionary =
            [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                        forKey:NSMigratePersistentStoresAutomaticallyOption];
        NSURL *url = [NSURL fileURLWithPath:
            [applicationSupportFolder stringByAppendingPathComponent:@"MyApp.xml"]];
        NSError *error = nil;
        [theCoordinator addPersistentStoreWithType:NSXMLStoreType
                                     configuration:nil
                                               URL:url
                                           options:optionsDictionary
                                             error:&error];

    This works fine when I migrate from model version 3 to 4, a migration that involves adding attributes to several entities. Now when I try to add a new model version (version 5), the call to addPersistentStoreWithType: returns nil and the error remains empty. The migration from 4 to 5 involves adding a single attribute. I am struggling to debug the problem and have checked all the following:

    - The source database is in fact at version 4, and the persistent store coordinator's managed object model is at version 5.
    - The 4-to-5 mapping model, as well as the managed object models for versions 4 and 5, are present in the resources folder of my built application.
    - I've tried various model upgrade paths. Strangely, upgrading from an early version 3 to 5 works, but upgrading from 4 to 5 fails.
    - I've tried adding a custom entity migration policy for the entity whose attributes are changing, overriding beginEntityMapping:manager:error:. Interestingly, this method does get called when migration works (3 to 4, or 3 to 5), but it does not get called in the failing case (4 to 5).

    I'm pretty much at a loss as to where to proceed. Any ideas to help debug this problem would be much appreciated.
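    One debugging step worth sketching (store type and URL as above): ask Core Data directly for the store's metadata, which distinguishes "can't find the source model" from "mapping model doesn't fit" by showing exactly which version hashes the on-disk store was saved with:

        NSError *metaError = nil;
        NSDictionary *meta =
            [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSXMLStoreType
                                                                       URL:url
                                                                     error:&metaError];
        // Logs the store's entity version hashes, and whether the current (v5)
        // model claims it could open the store without migration.
        NSLog(@"store metadata: %@", meta);
        NSLog(@"compatible with current model: %d",
              [[theCoordinator managedObjectModel] isConfiguration:nil
                                      compatibleWithStoreMetadata:meta]);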

    Read the article

  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a dynamically sized array that I am trying to pass into a function. The function will shift the first value off and return it, then move the remaining values over to fill the missing spot, putting, let's say, a -1 in the newly opened slot. I have no problem passing an array declared like so:

        int framelist[128];
        shift(framelist);

    However, I would like to be able to use an array allocated in this manner:

        int *framelist;
        framelist = malloc(size * sizeof(int));
        shift(framelist);

    I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, while the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything yet except grab the value from the first element of the array:

        int shift(int array[]) {
            int value = array[0];
            return value;
        }

    Any ideas why it won't accept the heap-allocated array? I'm still new to C, so if I am doing something fundamentally wrong, let me know.
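    For what it's worth, a heap-allocated buffer passes to that function exactly like the fixed array does; a self-contained sketch (values illustrative) runs cleanly, which suggests the crash comes from something else, such as an unchecked malloc failure or a bad size at the call site:

        #include <stdio.h>
        #include <stdlib.h>

        int shift(int array[]) {
            return array[0];
        }

        int main(void) {
            size_t size = 128;
            int *framelist = malloc(size * sizeof *framelist);
            if (framelist == NULL) {            /* always check malloc's result */
                return 1;
            }
            framelist[0] = 42;
            printf("%d\n", shift(framelist));   /* prints 42 */
            free(framelist);
            return 0;
        }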

    Read the article

  • NSArray/NSMutableArray: passed by reference or by value?

    - by wgpubs
    Totally confused here. I have a PARENT UIViewController that needs to pass an NSMutableArray to a CHILD UIViewController. I'm expecting it to be passed by reference, so that changes made in the CHILD will be reflected in the PARENT and vice versa. But that is not the case. Both have a property declared as:

        @property (nonatomic, retain) NSMutableArray *photos;

    Example, in PARENT:

        self.photos = [[NSMutableArray alloc] init];
        ChildViewController *c = [[ChildViewController alloc] init ...];
        c.photos = self.photos;

    In CHILD:

        [self.photos addObject:obj1];
        [self.photos addObject:obj2];
        NSLog(@"Count:%d", [self.photos count]);  // Equals 2 as expected

    Back in PARENT:

        NSLog(@"Count:%d", [self.photos count]);  // Equals 0 ... NOT EXPECTED

    I thought they'd both be accessing the same memory. Is this not the case? If it isn't, how do I keep the two NSMutableArrays in sync?
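    Objective-C objects are always handled through pointers, so assigning the property really does make both controllers share one array; a minimal sketch of the expected behavior:

        NSMutableArray *photos = [NSMutableArray array];
        ChildViewController *c = [[ChildViewController alloc] init];
        c.photos = photos;                 // both properties now point at one object

        [c.photos addObject:@"photo1"];
        NSLog(@"Count:%lu", (unsigned long)[photos count]);   // 1 -- same storage

    When the parent sees 0, the usual explanation is that one side re-assigned its property to a fresh array after the hand-off (for example in viewDidLoad), or that the parent logs before the child has actually run; the retained-property assignment itself cannot desynchronize them.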

    Read the article

  • Phonegap/Cordova geolocation not working on Android

    - by Kreeki
    I'm having trouble getting geolocation to work on Android, both in the emulator (even when I geo fix over telnet) and on a device. It works on iOS, WP8, and in the browser. When I ask the device for a location using the following code, I always get an error (in my case the custom "Retrieving your position failed for unknown reason." with null for both error code and error message). Related code:

        successHandler = (position) ->
          resolve App.Location.create
            lat: position.coords.latitude
            lng: position.coords.longitude

        errorHandler = (error) ->
          error = switch error.code
            when 1
              App.LocationError.create message: 'You haven\'t shared your location.'
            when 2
              App.LocationError.create message: 'Couldn\'t detect your current location.'
            when 3
              App.LocationError.create message: 'Retrieving your position timeouted.'
            else
              App.LocationError.create message: 'Retrieving your position failed for unknown reason. Error code: ' + error.code + '. Error message: ' + error.message
          reject(error)

        options =
          maximumAge: Infinity  # I also tried with 0
          timeout: 60000
          enableHighAccuracy: true

        navigator.geolocation.getCurrentPosition(successHandler, errorHandler, options)

    platforms/android/AndroidManifest.xml:

        <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
        <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />

    www/config.xml (just in case):

        <feature name="Geolocation">
            <param name="android-package" value="org.apache.cordova.GeoBroker" />
        </feature>

    Using Cordova 3.1.0, testing on Android 4.2, plugin installed, cordova.js included in index.html (other plugins like InAppBrowser are working fine).

        $ cordova plugins ls
        [ 'org.apache.cordova.console',
          'org.apache.cordova.device',
          'org.apache.cordova.dialogs',
          'org.apache.cordova.geolocation',
          'org.apache.cordova.inappbrowser',
          'org.apache.cordova.vibration' ]

    I'm clueless. Am I missing something?

    Read the article

  • Flex AS3: ComboBox set visible to false doesn't hide

    - by jolierouge
    I have a ComboBox in a view that receives information about application state changes and is then supposed to show or hide its children based on the whole application state. It receives the state change messages, it traces the correct values, it does what it's supposed to do; however, it just doesn't seem to work. Essentially, all it needs to do is hide a ComboBox during one state and show it again during another state. Here is the code:

        public function updateState(event:* = null):void
        {
            trace("Project Panel Updating State");
            switch (ApplicationData.getSelf().currentState)
            {
                case 'login':
                    this.visible = false;
                    break;
                case 'grid':
                    this.visible = true;
                    listProjects.includeInLayout = false;
                    listProjects.visible = false;
                    trace("ListProjects: " + listProjects.visible);
                    listLang.visible = true;
                    break;
                default:
                    break;
            }
        }

    Here is the MXML:

        <mx:HBox>
            <mx:Button id="btnLoad" x="422" y="84" label="Load" enabled="true" click="loadProject();"/>
            <mx:ComboBox id="listProjects" x="652" y="85" editable="true" change="listChange()" color="#050CA8" fontFamily="Arial" />
            <mx:Label x="480" y="86" text="Language:" id="label3" fontFamily="Arial" />
            <mx:ComboBox id="listLang" x="537" y="84" editable="true" dataProvider="{langList}" color="#050CA8" fontFamily="Arial" width="107" change="listLangChange(event)"/>
            <mx:CheckBox x="830" y="84" label="Languages in English" id="langCheckbox" click='toggleLang()'/>
        </mx:HBox>

    Read the article

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin (it is spawned by xinetd). How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix, "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...
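    A sketch of an eval-free variant (assuming bash 3.0+ for [[ =~ ]]): export takes a single key=value word and assigns it literally, so $val is never re-parsed by the shell no matter what it contains, and the regex gives a full-match whitelist on the key without extglob:

        import () {
            local key val
            while IFS='=' read -r key val; do
                # Full match: the prefix plus identifier characters only.
                # $val keeps everything after the first '=', including later '='s.
                if [[ $key =~ ^CMK_[A-Za-z_][A-Za-z0-9_]*$ ]]; then
                    export "$key=$val"   # no eval: the value is assigned verbatim
                fi
            done
        }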

    Read the article

  • Printing HTML blocks

    - by Lem0n
    I have a page with a few tables that I want to be printable. I want the following behavior: (1) add a page break if the next table fits on a single page but won't fit on the current page (because of other content already printed on that page); (2) print the table header again in case a table has to be broken (I guess that's the default behavior). Any ideas, especially on the first issue? Maybe some CSS can help? I'll give an example. I have a page with four tables, all of them with 10 lines except the third one, which has 50 lines. The first and second go on the first page. Since the third one won't fit on the same page but will fit on a page alone, it is printed on a page by itself, and then the fourth table is printed on the third page (in case it doesn't fit together with the third on the second page). But if the third table had 300 lines and would be broken anyway, it could have started printing on the first page.
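    A sketch of the print CSS for both behaviors (the class name is hypothetical; support for page-break-inside on tables varies by browser, especially older ones):

        /* (1) Keep each table on one page when it fits; the browser inserts the
           break before a table that would otherwise straddle two pages. A table
           taller than a page still breaks, as in the 300-line example. */
        table.report {
            page-break-inside: avoid;
        }

        /* (2) Repeat the header row on every page when a table must break.
           Requires the header cells to be inside a real <thead>. */
        table.report thead {
            display: table-header-group;
        }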

    Read the article

  • Java method keyword "final" and its use

    - by Lukas Eder
    When I create complex type hierarchies (several levels, several types per level), I like to use the final keyword on methods implementing some interface declaration. An example:

        interface Garble {
            int zork();
        }

        interface Gnarf extends Garble {
            /**
             * This is the same as calling {@link #zblah(0)}
             */
            int zblah();
            int zblah(int defaultZblah);
        }

    And then:

        abstract class AbstractGarble implements Garble {
            @Override
            public final int zork() { ... }
        }

        abstract class AbstractGnarf extends AbstractGarble implements Gnarf {
            // Here I absolutely want to fix the default behaviour of zblah.
            // No Gnarf should be allowed to set 1 as the default, for instance.
            @Override
            public final int zblah() {
                return zblah(0);
            }

            // This method is not implemented here, but in a subclass.
            @Override
            public abstract int zblah(int defaultZblah);
        }

    I do this for several reasons:

    - It helps me develop the type hierarchy. When I add a class to the hierarchy, it is very clear which methods I have to implement and which methods I may not override (in case I've forgotten the details of the hierarchy).
    - I think overriding concrete methods is bad according to design principles and patterns, such as the template method pattern. I don't want other developers or my users to do it.

    So the final keyword works perfectly for me. My question is: why is it used so rarely in the wild? Can you show me some examples / reasons where final (in a similar case to mine) would be very bad?

    Read the article

  • Convincing the C# compiler that execution will stop after a member returns

    - by Sarah Vessels
    I don't think this is currently possible, or even that it's a good idea, but it's something I was thinking about just now. I use MSTest for unit testing my C# project. In one of my tests, I do the following:

        MyClass instance;
        try
        {
            instance = getValue();
        }
        catch (MyException ex)
        {
            Assert.Fail("Caught MyException");
        }
        instance.doStuff();  // Use of unassigned local variable 'instance'

    To make this code compile, I have to assign a value to instance either at its declaration or in the catch block. However, Assert.Fail will never, to the best of my knowledge, allow execution to proceed past it, hence instance will never be used without a value. Why is it then that I must assign a value to it? If I change the Assert.Fail to something like throw ex, the code compiles fine; I assume because it knows the exception will disallow execution from proceeding to a point where instance would be used uninitialized. So is it a case of runtime versus compile-time knowledge about where execution will be allowed to proceed? Would it ever be reasonable for C# to have some way of saying that a member, in this case Assert.Fail, will never allow execution after it returns? Maybe that could be in the form of a method attribute. Would this be useful, or an unnecessary complexity for the compiler?
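    Exactly: definite assignment is a compile-time analysis over the method's own control flow, and an ordinary method call is always assumed to return. The conventional workaround is a sketch like this (and, for what it's worth, later versions of .NET did add a [DoesNotReturn] attribute, though it informs the compiler's nullable flow analysis rather than definite assignment):

        MyClass instance = null;   // satisfies definite assignment
        try
        {
            instance = getValue();
        }
        catch (MyException)
        {
            Assert.Fail("Caught MyException");   // never returns at runtime
        }
        instance.doStuff();   // compiles; safe in practice because Assert.Fail always throws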

    Read the article

  • How should I read from a buffered reader?

    - by Roman
    I have the following example of reading from a buffered reader:

        while ((inputLine = input.readLine()) != null) {
            System.out.println("I got a message from a client: " + inputLine);
        }

    The code in the loop (the println) is executed whenever something appears in the buffered reader (input, in this case). In my case, if a client application writes something to the socket, the code in the loop (in the server application) is executed. But I do not understand how it works. inputLine = input.readLine() waits until something appears in the buffered reader, and when something appears there, the loop condition holds and the code in the loop is executed. But when can null be returned? There is another question. The above code was taken from a method which throws Exception, and I use this code in the run method of a Thread. When I try to put throws Exception before run, the compiler complains: "overridden method does not throw exception". Without the throws exception I have another complaint from the compiler: "unreported exception". So what can I do?
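    A sketch of the usual shape (the socket field is hypothetical; assumes the java.io imports): readLine() returns null at end of stream, i.e. when the client closes its side of the connection, and because Runnable.run() declares no checked exceptions, the IOException has to be caught inside run() rather than declared:

        @Override
        public void run() {
            try {
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                String inputLine;
                while ((inputLine = input.readLine()) != null) {
                    System.out.println("I got a message from a client: " + inputLine);
                }
                // readLine() returned null: the client closed the connection.
                input.close();
            } catch (IOException e) {
                e.printStackTrace();   // can't rethrow a checked exception from run()
            }
        }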

    Read the article

  • Retrieving Json via HTML request from Jboss server

    - by Seth Solomon
    I am running into a java.net.SocketException: Unexpected end of file from server when I am trying to query some JSON from my JBoss server. I am hoping someone can spot where I am going wrong. Or does anyone have a suggestion for a better way to pull this JSON from my JBoss server?

        try {
            URL u = new URL("http://localhost:9990/management/subsystem/datasources/data-source/MySQLDS/statistics?read-resource&include-runtime=true&recursive=true");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            String encoded = Base64.encode(("username" + ":" + "password").getBytes());
            c.setRequestMethod("POST");
            c.setRequestProperty("Authorization", "Basic " + encoded);
            c.setRequestProperty("Content-Type", "application/json");
            c.setUseCaches(false);
            c.setAllowUserInteraction(false);
            c.setConnectTimeout(5000);
            c.setReadTimeout(5000);
            c.connect();
            int status = c.getResponseCode();  // throws the exception here
            switch (status) {
                case 200:
                case 201:
                    BufferedReader br = new BufferedReader(new InputStreamReader(c.getInputStream()));
                    StringBuilder sb = new StringBuilder();
                    String line;
                    while ((line = br.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    br.close();
                    System.out.println(sb.toString());
                    break;
                default:
                    System.out.println(status);
                    break;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    Read the article

  • Launch .jar files with command line arguments (but with no console window)

    - by Virat Kadaru
    I have to do a demo of an application; the application has a server.jar and a client.jar. Both take command line arguments and are executable. I need to launch two instances of server.jar and two instances of client.jar. I thought that using a batch file was the way to go, but the batch file executes the first command (i.e. server.bat [argument1] [argument2]) and does not do anything else unless I close the first instance, in which case it then runs the 2nd command. Also, I do not want a blank console window to open (or be minimized). What I really need is a batch script that will just launch these apps without any console windows, starting all the instances that I need. Thanks in advance! EDIT: javaw works if I type the command into the console window individually. If I put the same in the batch file, it behaves as before: the console window opens, one instance starts (whichever was first), and it does not proceed further unless I close the application, in which case it runs the 2nd command. I want it to run all commands silently. SOLUTION: Found the solution; below are the contents of my batch file:

        @echo off
        start /B server.jar [arg1] [arg2]
        start /B server.jar [arg3] [arg4]
        @echo on

    This opens and runs all the commands and closes the window; it does not wait for the commands to finish.
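    A variant worth noting (paths and arguments hypothetical): start treats its first quoted argument as the window title, and launching through javaw explicitly avoids relying on the .jar file association while keeping each instance free of a console window:

        @echo off
        rem The empty "" is the window title; without it, a quoted path would be
        rem swallowed as the title. javaw runs each jar with no console at all.
        start "" javaw -jar server.jar arg1 arg2
        start "" javaw -jar server.jar arg3 arg4
        start "" javaw -jar client.jar argA argB
        start "" javaw -jar client.jar argC argD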

    Read the article

  • Linux new/delete, malloc/free with large memory blocks

    - by brian_mk
    Hi folks, we have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory; swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator new). The buffer size varies depending on a number of parameters, but is typically around 1.2 GB; it can be up to about 1.9 GB. When the request has completed, the buffer is released using delete. This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK; otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard. The problem arises when a request requires a buffer larger than the previous one. In this case, operator new throws an exception. It's as if the memory freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW, I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.
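    On the mallopt idea, a sketch (the threshold value is illustrative): lowering M_MMAP_THRESHOLD makes glibc serve huge requests with mmap() rather than by extending the heap, and an mmap'd block is returned to the kernel immediately on free(), which sidesteps the heap fragmentation pattern described above. Since glibc's operator new allocates through malloc, the tuning should apply without replacing new/delete:

        #include <malloc.h>
        #include <stdlib.h>

        int main(void) {
            /* Serve any allocation over 1 MiB via mmap instead of the main heap. */
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

            void *buf = malloc((size_t)1200 * 1024 * 1024);   /* ~1.2 GB request */
            if (buf != NULL) {
                /* ... process the request ... */
                free(buf);   /* munmap()ed: the address space is released at once */
            }
            return 0;
        }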

    Read the article

  • QUnit Unit Testing: Test Mouse Click

    - by Ngu Soon Hui
    I have the following HTML code:

        <div id="main">
            <form Id="search-form" action="/ViewRecord/AllRecord" method="post">
                <div>
                    <fieldset>
                        <legend>Search</legend>
                        <p>
                            <label for="username">Staff name</label>
                            <input id="username" name="username" type="text" value="" />
                            <label for="softype"> software type</label>
                            <input type="submit" value="Search" />
                        </p>
                    </fieldset>
                </div>
            </form>
        </div>

    And the following JavaScript code (with jQuery as the library):

        $(function() {
            $("#username").click(function() {
                $.getJSON("ViewRecord/GetSoftwareChoice", {}, function(data) {
                    // use data to manipulate other controls
                });
            });
        });

    Now, how do I test $("#username").click so that, for a given input, it calls the correct URL (in this case ViewRecord/GetSoftwareChoice) and the expected output (in this case function(data)) behaves correctly? Any idea how to do this with QUnit? Edit: I read the QUnit examples, but they seem to deal with simple scenarios with no AJAX interaction. And although there are ASP.NET MVC examples, I think they are really testing the output of the server to an AJAX call, i.e. they are still testing the server response, not the AJAX response. What I want is how to test the client-side response.
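    One sketch in QUnit 1.x style (matching the era; the canned response is hypothetical): replace $.getJSON with a stub for the duration of the test, so the click handler runs synchronously and the test can assert both the requested URL and whatever the callback did with the data:

        test("username click requests the software choices", function() {
            var original = $.getJSON,
                requestedUrl = null;

            $.getJSON = function(url, data, callback) {
                requestedUrl = url;
                callback({ choices: ["a", "b"] });   // feed a canned response
            };

            $("#username").click();                  // fires the handler under test

            equal(requestedUrl, "ViewRecord/GetSoftwareChoice",
                  "handler requested the expected URL");
            // ...assert here on whatever the callback changed in the DOM...

            $.getJSON = original;                    // restore the real implementation
        });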

    Read the article
