Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • mysql row locking via php

    - by deezee
    I am helping a friend with a web-based form for their business, and I am trying to get it ready to handle multiple users. I have set it up so that, just before the record is displayed for editing, I lock the record with the following code:

        $query = "START TRANSACTION;";
        mysql_query($query);
        $query = "SELECT field FROM table WHERE ID = \"$value\" FOR UPDATE;";
        mysql_query($query);

    (That is greatly simplified, but it is the essence of the MySQL.) It does not appear to be working. However, when I go directly to MySQL from the command line, log in as the same user, and execute

        START TRANSACTION;
        SELECT field FROM table WHERE ID = "40" FOR UPDATE;

    I can effectively block the web form from accessing record "40" and get the timeout warning. I have tried using BEGIN instead of START TRANSACTION. I have tried doing SET AUTOCOMMIT=0 first and starting the transaction after locking, but I cannot seem to lock the row from the PHP code. Since I can lock the row from the command line, I do not think there is a problem with how the database is set up. I am really hoping that there is some simple something I have missed in my reading. FYI, I am developing on XAMPP 1.7.3, which has Apache 2.2.14, MySQL 5.1.41 and PHP 5.3.1. Thanks in advance. This is my first time posting, but I have gleaned a lot of knowledge from this site in the past.
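
    A likely explanation, offered here as a hedged guess: a FOR UPDATE lock only lives as long as its transaction, and a PHP page's connection (and with it any open transaction) ends when the script finishes rendering the form, so the lock cannot persist until the user submits the edit; the command-line session only "works" because it stays open. The table also needs to be InnoDB, since MyISAM has no row locks. A minimal sketch of the connection-scoped behaviour, written here in Java/JDBC terms (URL, credentials, table and column are placeholders):

        import java.sql.*;

        public class RowLockSketch {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/testdb", "user", "password")) {
                    con.setAutoCommit(false); // the lock exists only inside this transaction
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT field FROM t WHERE id = ? FOR UPDATE")) {
                        ps.setInt(1, 40);
                        ps.executeQuery(); // row 40 is now locked, on this connection only
                        // ... perform the edit on the same connection ...
                    }
                    con.commit(); // releases the lock; closing the connection would too
                }
            }
        }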

    Read the article

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, perform a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100 MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: some points to clarify:

    - My Perl script is only monitoring a particular directory and, upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (100 MB at least, but usually a couple of GB) and my program fires before the copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid the overhead of scanning them all.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
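
    Since the reported size is useless mid-copy on Windows, one common heuristic is to test whether the writer still has the file open: while a copy is in progress, Windows typically refuses other processes write access, so the action can be deferred until an exclusive open succeeds. A sketch of the idea in Java (in Perl the equivalent is an open for append plus flock); the one-second polling interval is arbitrary:

        import java.io.*;

        public class CopyDoneCheck {
            // Returns true once the file can be opened for writing, i.e. the
            // process copying it has (very likely) closed it.
            static boolean copyFinished(File f) {
                try (RandomAccessFile probe = new RandomAccessFile(f, "rw")) {
                    return true;
                } catch (IOException stillBeingWritten) {
                    return false;
                }
            }

            public static void main(String[] args) throws InterruptedException {
                File f = new File(args[0]);
                while (!copyFinished(f)) {
                    Thread.sleep(1000); // poll until the writer lets go
                }
                System.out.println(f + " is ready to process");
            }
        }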

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems: Windows, Linux, AIX, Solaris, Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to raise the file descriptor limits and default stack sizes or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me, because I am a believer in zero required environment fiddling to make a product work. So I am wondering whether there are times when this sort of requirement is a necessary evil, or whether we are doing something wrong. Edit: great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and segfault. So, rewording the question: is requiring the customer to change these ulimit values in order to run our software to be expected on some platforms (i.e., Solaris, AIX), or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set them programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things are a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.

    Read the article

  • Maven: trying to get my submodule's poms to NOT inherit a plugin in the parent

    - by jobrahms
    My project has a parent POM and several submodule POMs. I've put a plugin in the parent that is responsible for building our installer distributables (using install4j). It doesn't make sense to have this plugin run on the submodules, so I've put <inherited>false</inherited> in the plugin's config, as seen below. The problem is, when I run mvn clean install install4j:compile it cleans, compiles, and runs the install4j plugin on the parent, but then it tries to run it on the child modules and crashes. Here's the plugin config:

        <plugin>
          <groupId>com.google.code.maven-install4j</groupId>
          <artifactId>maven-install4j-plugin</artifactId>
          <version>0.1.1</version>
          <inherited>false</inherited>
          <configuration>
            <executable>${devenv.install4jc}</executable>
            <configFile>${basedir}/newinstaller/ehd-demo.install4j</configFile>
            <releaseId>${project.version}</releaseId>
            <attach>false</attach>
            <skipOnMissingExecutable>true</skipOnMissingExecutable>
          </configuration>
        </plugin>

    Am I misunderstanding the purpose of inherited=false? What is the correct way to get this to work? I'm using Maven 2.2.0.

    Read the article

  • c++ use of winmain()

    - by Jack
    Hi, I just started learning programming for Windows in C++. I had this crazy image that Win32 programming is based on calling Windows functions and passing parameters to and from them. Like, when you want to create a window, you call some Win32 function that handles the Windows GUI and say "Hi, please create me a new window, 100 x 100 px, with two buttons", and that GUI function says "Hi, no problem; when something happens, like the user clicking one button, I will change this variable xy located in this location". So I thought that it would be very similar to console programming. But the very first instruction surprised me. I always thought that every program executes the main() function first. So, when I launch an app, Windows stores some parameters on top of the stack and runs that application. So I assumed that defining main() is just a C++ way to tell the compiler where the first instruction should be. But in Win32 programming there is a function called WinMain() which starts first. So I am a little confused. I thought it was a rule that the compiler must have main() to start with, that main just defines where to start, like some start-point identifier. So, please, why is there a WinMain() function instead of main()? Just when I thought that C++ programming was as logical as assembler, it confuses me once again.

    Read the article

  • JDBC call not executing

    - by dbyrne
    I am working on one of the DAOs for a medium-sized web application. Unfortunately, it contains very convoluted logic and makes hundreds of JDBC stored-proc calls in loops. This is out of my control. I am working on a method inside the DAO which makes a single JDBC call. The simplified version of what this method looks like is this:

        DriverManager.registerDriver(new com.sybase.jdbc2.jdbc.SybDriver());
        Connection con = DriverManager.getConnection(
            (String) connectionDetails.get("DATABASE_URL"),
            (String) connectionDetails.get("USERID"),
            (String) connectionDetails.get("PASSWORD"));
        String sqlToExecute = "{call " + STORED_PROC + "(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}";
        CallableStatement stmt = con.prepareCall(sqlToExecute);
        // Maybe I should try calling clearParameters here?
        stmt.setString(1, someData);
        // ...set the rest of the parameters...
        if (!stmt.execute()) {
            // the execute method never returns false
        }
        stmt.close();

    It's pretty much a textbook JDBC call. All this stored proc does is insert a single row. Here is where things get crazy: this code works when you run it through a debugger line by line, but fails when you run it "full speed". Not only does it fail, but it doesn't throw any exception! The execute method always returns true. It just breezes right through the JDBC call without inserting a row into the database. If you go through the log files, copy the stored-proc call and run it manually, it works (just like it does in debug mode). What's strange is that the rest of the DAO, with all its hundreds of looped stored-proc calls, works fine. My thinking is that Connection or CallableStatement is caching some value behind the scenes that is screwing things up. Has anyone ever seen anything like this before? A JDBC call failing with no exceptions? I know it will be impossible to provide a complete solution without seeing the whole application; I am just looking for suggestions on possible issues to investigate.
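
    Two hedged avenues worth ruling out: a transaction that is never committed (an insert that silently disappears is the classic symptom), and a driver-level error surfaced as a SQLWarning rather than an exception until the statement's results are drained. A diagnostic sketch of both checks (the method and its names are illustrative, not the DAO's actual code):

        import java.sql.*;

        public class ProcDiagnostics {
            // Drain every result the proc produces and print any warnings:
            // some drivers only report a proc-side failure once you walk
            // past its update counts and result sets.
            static void callAndReport(Connection con, String sqlToExecute, String someData)
                    throws SQLException {
                try (CallableStatement stmt = con.prepareCall(sqlToExecute)) {
                    stmt.setString(1, someData);
                    boolean isResultSet = stmt.execute();
                    while (true) {
                        if (isResultSet) {
                            try (ResultSet rs = stmt.getResultSet()) {
                                while (rs.next()) { /* drain */ }
                            }
                        } else {
                            int count = stmt.getUpdateCount();
                            if (count == -1) break; // no more results of any kind
                            System.out.println("update count: " + count);
                        }
                        isResultSet = stmt.getMoreResults();
                    }
                    for (SQLWarning w = stmt.getWarnings(); w != null; w = w.getNextWarning()) {
                        System.out.println("warning: " + w.getMessage());
                    }
                }
                if (!con.getAutoCommit()) {
                    con.commit(); // without this, the proc's insert vanishes silently
                }
            }
        }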

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replication copy of the main databases, so that the reports can point to the replication copy instead of locking out our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers, instantaneously I hope! What seems to be happening is that, when using the "Insert Tracer" function, replication takes under 2 seconds from publisher to distributor, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    1. The SQL statements used to query the database are obtaining locks which are stopping the transactions from updating the subscribers.
    2. The subscribers are just too busy for the replication to apply the changes.

    What troubles me more is that, although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal would be to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

    Read the article

  • Why does my Jabber bot only work if I'm debugging my Perl script?

    - by TheGNUGuy
    I am trying to make a Jabber bot from scratch, and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl, and I'm using Eclipse with the Perl plugin to run and debug the script. The funny behavior occurs when I run or debug the script: if I run the script under the debugger it works fine, meaning I can send messages to the bot and it can send messages to me. However, when I just execute the script normally, the bot sends the successful-connection message, then disconnects from my Jabber server, and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is that it has something to do with the subroutines and sending the presence of the bot. (I know for sure that it has something to do with sending the bot's presence, because if the presence code is removed, the script behaves as expected except that the bot doesn't appear to be online.) If anyone can help me with this, that would be great. I originally had everything in 1 file but separated it into several while trying to figure out my problem. Here are the pastebin links to my source code: jabberBot.pl: http://pastebin.com/cVifv0mm chatRoutine.pm: http://pastebin.com/JXmMT7av trimSpaces.pm: http://pastebin.com/SkeuWtu1 Thanks again for any help!

    Read the article

  • Visual studio feature - commenting code Ctrl K - Ctrl C

    - by Michael
    I commented on this answer some time ago regarding how Visual Studio comments out code with // or /* */. I was thinking of revising the answer (to include my findings), but I had to test it first, which kind of confused me. My finding is that depending on where your marker is when you press Ctrl-K, Ctrl-C, you will get either // or /* */. So I tried it out on the following code ([x] marks a marker position):

        [1] [2]FD_ZERO(&mFSet);
        FD_SET(user->mSender, &mFSet);
        timeval zeroTime = { 0, 0 };
        int sel = select(0, NULL, &mFSet, NULL, &zeroTime); [3]
        if (sel == SOCKET_ERROR){
            [5]return false;
        [4] }
        if (sel == 0){
            [6] return false;
        [7] }

    All [1] positions give // for all marked lines. However:

        Start position   End position      Gives
        [2]              [3]               /* */
        [2]              [4]               //
        [2]              [5]               /* */
        [2]              [more than 5]     //
        [5]              [6]               /* */
        [5]              [7]               //

    I guess it has to do with forward indentation (not backwards): whenever code is indented more than the starting line you get //, except when you haven't selected any text on the indented line ([2] [5]). But why the distinction? Why not use /* */ when you start at [2] and // when you start at [1]?

    Read the article

  • NSTimer Reset Not Working

    - by user355900
    Hi, I have an NSTimer and it works perfectly counting down from 2:00, but when I hit the reset button it does not work: it just stops the timer, and when I press start again it carries on with the timer as if it had never been stopped. Here is my code:

        @implementation TimerAppDelegate

        @synthesize window;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            timerLabel.text = @"2:00";
            seconds = 120;
            // Override point for customization after application launch
            [window makeKeyAndVisible];
        }

        - (void)viewDidLoad {
            [timer invalidate];
        }

        - (void)countDownOneSecond {
            seconds--;
            int currentTime = [timerLabel.text intValue];
            int newTime = currentTime - 1;
            int displaySeconds = !(seconds % 60) ? 0 : seconds < 60 ? seconds : seconds - 60;
            int displayMinutes = floor(seconds / 60);
            NSString *time = [NSString stringWithFormat:@"%d:%@%d",
                displayMinutes,
                [[NSString stringWithFormat:@"%d", displaySeconds] length] == 1 ? @"0" : @"",
                displaySeconds];
            timerLabel.text = time;
            if(seconds == 0) {
                [timer invalidate];
            }
        }

        - (void)startOrStopTimer {
            if(timerIsRunning){
                [timer invalidate];
                [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            } else {
                timer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                    target:self
                    selector:@selector(countDownOneSecond)
                    userInfo:nil
                    repeats:YES] retain];
                [startOrStopButton setTitle:@"Stop" forState:UIControlStateNormal];
            }
            timerIsRunning = !timerIsRunning;
        }

        - (void)resetTimer {
            [timer invalidate];
            [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            [timer invalidate];
            timerLabel.text = @"2:00";
        }

        - (void)dealloc {
            [window release];
            [super dealloc];
        }

        @end

    Thanks.

    Read the article

  • Correct way to do timer function in Python

    - by bwawok
    Hi. I have a GUI application that needs to do something simple in the background (update a wxPython progress bar, but that doesn't really matter). I see that there is a threading.Timer class, but there seems to be no way to make it repeat. So if I use the timer, I end up having to make a new thread on every single execution, like:

        import threading
        import time

        def DoTheDew():
            print "I did it"
            t = threading.Timer(1, function=DoTheDew)
            t.daemon = True
            t.start()

        if __name__ == '__main__':
            t = threading.Timer(1, function=DoTheDew)
            t.daemon = True
            t.start()
            time.sleep(10)

    This seems like I am making a bunch of threads that each do 1 silly thing and die. Why not write it as:

        import threading
        import time

        def DoTheDew():
            while True:
                print "I did it"
                time.sleep(1)

        if __name__ == '__main__':
            t = threading.Thread(target=DoTheDew)
            t.daemon = True
            t.start()
            time.sleep(10)

    Am I missing some way to make a timer keep doing something? Either of these options seems silly. I am looking for a timer more like a java.util.Timer that can schedule the thread to run every second. If there isn't a way in Python, which of my above methods is better, and why?
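
    For comparison, the java.util.Timer behaviour the question asks for — one long-lived scheduler thread that re-fires the task, rather than a new thread per tick — looks like this in Java (ScheduledExecutorService is the modern replacement for java.util.Timer); of the two Python versions above, the second is the closer match to this model:

        import java.util.concurrent.*;

        public class RepeatingTick {
            public static void main(String[] args) throws InterruptedException {
                // One background thread, re-fired every second by the
                // scheduler; no new thread is created per tick.
                ScheduledExecutorService scheduler =
                        Executors.newSingleThreadScheduledExecutor();
                scheduler.scheduleAtFixedRate(
                        () -> System.out.println("I did it"),
                        1, 1, TimeUnit.SECONDS);
                Thread.sleep(10_000); // let it tick for ten seconds
                scheduler.shutdown();
            }
        }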

    Read the article

  • Why the HelloWorld of opennlp library works fine on Java but doesn't work with Jruby?

    - by 0x90
    I am getting this error:

        SyntaxError: hello.rb:13: syntax error, unexpected tIDENTIFIER
        public HelloWorld( InputStream data ) throws IOException {

    The HelloWorld.rb is:

        require "java"

        import java.io.FileInputStream;
        import java.io.InputStream;
        import java.io.IOException;
        import opennlp.tools.postag.POSModel;
        import opennlp.tools.postag.POSTaggerME;

        public class HelloWorld {
            private POSModel model;

            public HelloWorld( InputStream data ) throws IOException {
                setModel( new POSModel( data ) );
            }

            public void run( String sentence ) {
                POSTaggerME tagger = new POSTaggerME( getModel() );
                String[] words = sentence.split( "\\s+" );
                String[] tags = tagger.tag( words );
                double[] probs = tagger.probs();
                for( int i = 0; i < tags.length; i++ ) {
                    System.out.println( words[i] + " => " + tags[i] + " @ " + probs[i] );
                }
            }

            private void setModel( POSModel model ) {
                this.model = model;
            }

            private POSModel getModel() {
                return this.model;
            }

            public static void main( String args[] ) throws IOException {
                if( args.length < 2 ) {
                    System.out.println( "HelloWord <file> \"sentence to tag\"" );
                    return;
                }
                InputStream is = new FileInputStream( args[0] );
                HelloWorld hw = new HelloWorld( is );
                is.close();
                hw.run( args[1] );
            }
        }

    This fails when running ruby HelloWorld.rb "I am trying to make it work". When I run the HelloWorld.java with "I am trying to make it work" it works perfectly; of course, the .java doesn't contain the require "java" statement. EDIT: I followed the following steps. The output of jruby -v is: jruby 1.6.7.2 (ruby-1.8.7-p357) (2012-05-01 26e08ba) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_35) [darwin-x86_64-java]

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
                +-- test.xml
                +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit-test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies of the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to put the testing XML data in code rather than loading it from a file. I would rather not do this.
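
    One portable approach is to stop depending on the working directory altogether: ship the fixtures with the test code and load them through a path-free resource mechanism (in .NET that would be an embedded resource read via Assembly.GetManifestResourceStream). A sketch of the same technique in Java terms, offered as an illustration of the idea rather than the poster's actual stack:

        import java.io.*;
        import java.nio.charset.StandardCharsets;

        public class FixtureLoading {
            // Load test.xml from the classpath (e.g. src/test/resources in a
            // Maven layout) so the test works no matter which directory the
            // CI server happens to run it from.
            static String loadFixture(String name) throws IOException {
                try (InputStream in = FixtureLoading.class.getResourceAsStream("/" + name)) {
                    if (in == null) throw new FileNotFoundException(name + " not on classpath");
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
                    return new String(out.toByteArray(), StandardCharsets.UTF_8);
                }
            }

            public static void main(String[] args) throws IOException {
                System.out.println(loadFixture("test.xml").length() + " bytes loaded");
            }
        }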

    Read the article

  • Get more error information from unhandled error

    - by Andrew Simpson
    I am using C# in a desktop application. I am calling a DLL written in C that I do not have the source code for. Whenever I call this DLL I get an untrapped error, which I trap in an UnhandledException event/delegate. The error is: "object reference not set to an instance of an object". But the stack trace is empty. When I Googled this, the information that came back was that the error was being handled elsewhere and then rethrown. But that can only be happening in the DLL I do not have the source code for. So, is there any way I can get more info about this error? This is my code. In Program.cs:

        AppDomain.CurrentDomain.UnhandledException +=
            new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);

        static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            try
            {
                Exception _ex = (Exception)e.ExceptionObject;
                // the stack trace property is empty here...
            }
            finally
            {
                Application.Exit();
            }
        }

    My DLL import:

        [DllImport("AutoSearchDevice.dll", EntryPoint = "Start", ExactSpelling = false,
            CallingConvention = CallingConvention.StdCall)]
        public static extern int Start(int ASD_HANDLE);

    And I call it like so:

        public static void AutoSearchStart()
        {
            try
            {
                Start(m_pASD);
            }
            catch (Exception ex)
            {
            }
        }

    Read the article

  • facebook authentication / login trouble

    - by salmane
    I have set up Facebook authentication using PHP, and it goes something like this. First, getting the authorization here:

        https://graph.facebook.com/oauth/authorize?client_id=<?= $facebook_app_id ?>&redirect_uri=http://www.example.com/facebook/oauth/&scope=user_about_me,publish_stream

    Then getting the access token here:

        $url = "https://graph.facebook.com/oauth/access_token?client_id=".$facebook_app_id."&redirect_uri=http://www.example.com/facebook/oauth/&client_secret=".$facebook_secret."&code=".$code;

        function get_string_between($string, $start, $end){
            $string = " ".$string;
            $ini = strpos($string,$start);
            if ($ini == 0) return "";
            $ini += strlen($start);
            $len = strpos($string,$end,$ini) - $ini;
            return substr($string,$ini,$len);
        }

        $access_token = get_string_between(file_get_contents($url), "access_token=", "&expires=");

    Then getting the user info:

        $facebook_user = file_get_contents('https://graph.facebook.com/me?access_token='.$access_token);
        $facebook_id = json_decode($facebook_user)->id;
        $first_name = json_decode($facebook_user)->first_name;
        $last_name = json_decode($facebook_user)->last_name;

    This is pretty ugly (in my opinion), but it works. However, the user is still not logged in, because I did not create or retrieve any session variables to confirm that the user is logged in to Facebook, which means that after the authentication is done the user still has to log in. First: is there a better way to do what I did above in PHP? Second: how do I set/get session variables or cookies that ensure the user doesn't have to click login? Thanks for your help.

    Read the article

  • Tomcat deploy: make included scripts executable

    - by AlexS
    I'm developing a web application (for Tomcat) using NetBeans on Windows 7. For the web application to run, I need to run an install script once. This script (*.bat for Windows and *.sh for Linux) is included in my WAR file (WEB-INF). Now, every time I deploy the WAR file and want to run the script on Linux, I have to call chmod +x install.sh first. Is there a way that this script can be made executable by default? I don't want to have to execute extra commands after the deploy just to make the script executable. For clarification: I'm not new to Linux and I know how to set executable rights on files; that's not the problem. My problem is: what do I have to do so that this script is executable right after Tomcat deploys (unpacks) my *.war file? If I were using Linux for development as well, I would try to set the rights accordingly in my sources (maybe I'll try it when I have a little more spare time). But I am using Windows and NetBeans. Are there any attributes I can set to achieve my goal, or is it possible to achieve this using Ant? By the way: are there security-related issues with this approach? The script looks for the java executable and calls a Java-based GUI installer...

    Read the article

  • Address family not supported by protocol exception

    - by srg
    I'm trying to send a couple of values from an Android application to a web service which I've set up. I'm using HTTP POST to send them, but when I run the application I get the error: request time failed java.net.SocketException: Address family not supported by protocol. I get this while debugging with both the emulator and a device connected by Wi-Fi. I've already added the internet permission using:

        <uses-permission android:name="android.permission.INTERNET" />

    This is the code I'm using to send the values:

        void insertData(String name, String number) throws Exception {
            String url = "http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/";
            HttpClient client = new DefaultHttpClient();
            HttpPost post = new HttpPost(url);
            try {
                List<NameValuePair> params = new ArrayList<NameValuePair>(2);
                params.add(new BasicNameValuePair("a", name));
                params.add(new BasicNameValuePair("b", number));
                post.setEntity(new UrlEncodedFormEntity(params));
                HttpResponse response = client.execute(post);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    Also, I know that my web service works fine, because when I send the values from an HTML page it works:

        <form name="form1" action="http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/" method="post">
            <input type="text" name="a"/>
            <input type="text" name="b"/>
            <input type="submit"/>
        </form>

    I've seen questions about similar problems but haven't really found a solution. Thanks.
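
    Worth checking, as a hedged observation: "request time failed" is the wording of a log line Android itself emits from its SNTP time-sync client (commonly seen on emulators without an IPv6 route), so this SocketException may not come from insertData() at all. Logging the HTTP status and body of the POST separates the two questions; a sketch, with the URL and parameter names taken from the question:

        import java.util.ArrayList;
        import java.util.List;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;
        import org.apache.http.util.EntityUtils;
        import android.util.Log;

        public class InsertDataCheck {
            // Same POST as the question, but the outcome of the request is
            // logged explicitly, so it can be judged independently of
            // unrelated system log lines.
            static void insertData(String name, String number) {
                try {
                    HttpClient client = new DefaultHttpClient();
                    HttpPost post = new HttpPost(
                            "http://192.168.0.12:8000/testapp/default/call/run/insertdbdata/");
                    List<NameValuePair> params = new ArrayList<NameValuePair>(2);
                    params.add(new BasicNameValuePair("a", name));
                    params.add(new BasicNameValuePair("b", number));
                    post.setEntity(new UrlEncodedFormEntity(params));
                    HttpResponse response = client.execute(post);
                    Log.d("insertData", "HTTP " + response.getStatusLine().getStatusCode()
                            + ": " + EntityUtils.toString(response.getEntity()));
                } catch (Exception e) {
                    Log.e("insertData", "POST failed", e);
                }
            }
        }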

    Read the article

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): "Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway." I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would leave you with a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.

    Read the article

  • make a thread which recieves values from other threads

    - by farteaga88
    This program in Java creates a list of 15 numbers and creates 3 threads to search for the maximum in a given interval. I want to create another thread that takes those 3 numbers and gets the overall maximum, but I don't know how to get those values into the other thread.

        public class apple implements Runnable {
            String name;
            int time, number, first, last, maximum;
            int[] array = {12, 32, 54, 64, 656, 756, 765, 43, 34, 54, 5, 45, 6, 5, 65};

            public apple(String s, int f, int l) {
                name = s;
                first = f;
                last = l;
                maximum = array[0];
            }

            public void run() {
                try {
                    for (int i = first; i < last; i++) {
                        if (maximum < array[i]) {
                            maximum = array[i];
                        }
                    }
                    System.out.println("Thread" + name + "maximum = " + maximum);
                } catch (Exception e) {}
            }

            public static void main(String[] args) {
                Thread t1 = new Thread(new apple("1 ", 0, 5));
                Thread t2 = new Thread(new apple("2 ", 5, 10));
                Thread t3 = new Thread(new apple("3 ", 10, 15));
                try {
                    t1.start();
                    t2.start();
                    t3.start();
                } catch (Exception e) {}
            }
        }
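
    One standard way for a fourth thread of control to receive the three partial results is to have each worker return its maximum through a Future (a shared BlockingQueue works too), and reduce the three values once they all arrive. A sketch with the same array and partitions:

        import java.util.*;
        import java.util.concurrent.*;

        public class MaxOfPartitions {
            static final int[] ARRAY =
                    {12, 32, 54, 64, 656, 756, 765, 43, 34, 54, 5, 45, 6, 5, 65};

            // Each worker returns its partition's maximum instead of printing it.
            static Callable<Integer> maxOf(final int first, final int last) {
                return new Callable<Integer>() {
                    public Integer call() {
                        int max = ARRAY[first];
                        for (int i = first; i < last; i++) {
                            if (ARRAY[i] > max) max = ARRAY[i];
                        }
                        return max;
                    }
                };
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(3);
                List<Future<Integer>> parts = Arrays.asList(
                        pool.submit(maxOf(0, 5)),
                        pool.submit(maxOf(5, 10)),
                        pool.submit(maxOf(10, 15)));
                int max = Integer.MIN_VALUE;
                for (Future<Integer> f : parts) {
                    max = Math.max(max, f.get()); // get() waits for that worker to finish
                }
                System.out.println("overall maximum = " + max);
                pool.shutdown();
            }
        }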

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this:

        1. INSERT INTO t1 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = val2;

    If the above query causes an insert, then I need to run the following statement on the second table:

        2. INSERT INTO t2 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = col2 + val2;

    Otherwise:

        3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1;
           -- old_val2 is the value of t1.col2 before it was updated

    Right now I run a SELECT on t1 first to determine whether statement 1 will cause an insert or an update on t1, then run statement 1 and either statement 2 or 3 inside a transaction. What are the ways in which I can do all of this inside one transaction? The approach I was thinking of is the following:

        UPDATE t2, t1 SET t2.col2 = t2.col2 - t1.col2 WHERE t1.col1 = t2.col2 AND t1.col1 = val1;
        INSERT INTO t1 (col1, col2) VALUES (val1, val2) ON DUPLICATE KEY UPDATE col2 = val2;
        INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2)
            ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2 WHERE t1.col1 = t2.col2 AND t1.col1 = val1;

    Unfortunately, there's no multi-table INSERT ... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
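
    If driving the transaction from application code is acceptable, the preliminary SELECT can be dropped entirely: MySQL reports 1 affected row for a fresh insert and 2 for an ON DUPLICATE KEY update (ROW_COUNT() exposes the same value in pure SQL), so the branch can be decided after statement 1. A hedged sketch in Java/JDBC, with the question's table names and the follow-up statements left as comments:

        import java.sql.*;

        public class DependentUpdate {
            // Decide between statements 2 and 3 from executeUpdate()'s return
            // value: 1 = row inserted, 2 = existing row updated. (A no-change
            // update can report 0, or 1 with the CLIENT_FOUND_ROWS flag.)
            static void upsert(Connection con, int val1, int val2) throws SQLException {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO t1 (col1, col2) VALUES (?, ?) "
                        + "ON DUPLICATE KEY UPDATE col2 = ?")) {
                    ps.setInt(1, val1);
                    ps.setInt(2, val2);
                    ps.setInt(3, val2);
                    int affected = ps.executeUpdate();
                    if (affected == 1) {
                        // fresh insert: run statement 2 against t2 here
                    } else {
                        // duplicate-key update: run statement 3 against t2 here
                    }
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }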

    Read the article

  • Why my object sees variables which were not given to it in the constructor?

    - by Roman
    I have the following code, which is "correct" but which I do not understand:

        private static void updateGUI(final int i, final JLabel label) {
            SwingUtilities.invokeLater(
                new Runnable() {
                    public void run() {
                        label.setText("You have " + i + " seconds.");
                    }
                }
            );
        }

    I create a new instance of the Runnable class, and then in the run method of this instance I use the variables label and i. It works, but I do not understand why it works. Why does this object see the values of these variables? According to my understanding, the code should look like this (and this version is wrong):

        private static void updateGUI(final int i, final JLabel label) {
            SwingUtilities.invokeLater(new Runnable(i, label) {
                public Runnable(int i, JLabel label) {
                    this.i = i;
                    this.label = label;
                }
                public void run() {
                    label.setText("You have " + i + " seconds.");
                }
            });
        }

    So I would give the i and label variables to the constructor so the object can access them. By the way, in updateGUI I use final before i and label. I think I used final because the compiler wanted it, but I do not understand why.
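
    The intuition in the second snippet is close to what actually happens: the compiler generates a hidden class for the anonymous Runnable whose constructor receives copies of the captured local variables. That is also why final is required: only copies travel into the object, and final guarantees that the copy and the original can never diverge. A hand-written equivalent (the names are illustrative; the real synthetic class is called something like EnclosingClass$1):

        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        // Roughly what the compiler generates behind the anonymous class:
        // the captured final locals arrive as constructor arguments and are
        // stored in fields, which is what run() later reads.
        class UpdateGuiRunnable implements Runnable {
            private final int i;
            private final JLabel label;

            UpdateGuiRunnable(int i, JLabel label) {
                this.i = i;
                this.label = label;
            }

            public void run() {
                label.setText("You have " + i + " seconds.");
            }
        }

        class Gui {
            static void updateGUI(final int i, final JLabel label) {
                SwingUtilities.invokeLater(new UpdateGuiRunnable(i, label));
            }
        }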

    Read the article

  • Blackberry Asynchronous HTTP Requests - How?

    - by Kai
    The app I'm working on has a self-contained database. The only time I need an HTTP request is when the user first loads the app. I do this by calling a class that verifies whether or not a local DB exists and, if not, creates one with the following request:

        HttpRequest data = new HttpRequest("http://www.somedomain.com/xml", "GET", this);
        data.start();

    This XML returns a list of content, all of which have images that I want to fetch AFTER the original request is complete and stored. So something like this won't work:

        HttpRequest data = new HttpRequest("http://www.somedomain.com/xml", "GET", this);
        data.start();
        HttpRequest images = new HttpRequest("http://www.somedomain.com/xmlImages", "GET", this);
        images.start();

    since the second request will not wait for the first to complete. I have not found much information on adding callbacks to HttpRequest, or any other method I could use to ensure operation 2 does not execute until operation 1 is complete. Any help would be appreciated. Thanks.
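
    The usual shape, whatever the wrapper class looks like, is to hand the first request a completion callback and start the second request from inside it. The HttpRequest and RequestListener below are illustrative stand-ins for the poster's own wrapper (whose real constructor apparently takes a listener as its third argument), not a BlackBerry API:

        // RequestListener and this HttpRequest are stand-ins, not real APIs.
        interface RequestListener {
            void onComplete(String body);
        }

        class HttpRequest extends Thread {
            private final String url;
            private final RequestListener listener;

            HttpRequest(String url, RequestListener listener) {
                this.url = url;
                this.listener = listener;
            }

            public void run() {
                String body = "..."; // perform the real fetch of url here
                listener.onComplete(body); // fire the callback once done
            }
        }

        class StartupLoader {
            void load() {
                new HttpRequest("http://www.somedomain.com/xml", new RequestListener() {
                    public void onComplete(String xml) {
                        // xml is parsed and stored; only now fetch the images
                        new HttpRequest("http://www.somedomain.com/xmlImages",
                                new RequestListener() {
                                    public void onComplete(String images) {
                                        // store the images
                                    }
                                }).start();
                    }
                }).start();
            }
        }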

    Read the article

  • PHP curl timing mismatch

    - by JonoB
    I am running a PHP script that: queries a local database to retrieve an amount; executes a curl statement to update an external database with the above amount + x; then queries the local database again to insert a new row recording that the curl statement has been executed. One of the problems I am having is that the curl statement takes 2-4 seconds to execute, so if two different users from the same company run the same script at the same time, the execution time of the curl command can cause a mismatch in what should be updated in the external database. This is because the curl statement has not yet returned for the first user, so the second user is working off incorrect figures. I am not sure of the best options here, but basically I need to prevent two or more curl statements being run at the same time. I thought of storing a value in the database that indicates that the curl statement is being executed, and preventing any other curl statements being run until it has completed. Once the first curl statement has been executed, the database flag is updated and the next one can run. If the field is 'locked', I could loop through the code and sleep for 5 seconds, then check again whether the flag has been reset. After 3 loops, I would reset the flag automatically (I've never seen the curl take longer than 5 seconds) and continue processing. Are there any other (more elegant) ways of approaching this?
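
    One more elegant alternative to a hand-rolled flag plus sleep loop, if the local database is MySQL, is a named database lock: GET_LOCK()/RELEASE_LOCK() make the second caller block (up to a timeout) until the first finishes, with no polling and no stale-flag cleanup. A sketch of the idea in Java/JDBC terms; in PHP it is the same two queries around the curl call, and the lock name is arbitrary:

        import java.sql.*;

        public class CurlSection {
            // Serialise the remote update across concurrent users with a
            // named MySQL lock; GET_LOCK blocks for up to 10s, returning 1
            // on success and 0 on timeout.
            static void runExclusively(Connection con, Runnable updateRemote)
                    throws SQLException {
                try (Statement st = con.createStatement()) {
                    try (ResultSet rs = st.executeQuery(
                            "SELECT GET_LOCK('remote_update', 10)")) {
                        rs.next();
                        if (rs.getInt(1) != 1) {
                            throw new SQLException("could not acquire lock within 10s");
                        }
                    }
                    try {
                        updateRemote.run(); // read amount, curl, insert the row
                    } finally {
                        st.execute("DO RELEASE_LOCK('remote_update')");
                    }
                }
            }
        }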

    Read the article

  • Deploying a WAR to tomcat only using a context descriptor

    - by DanglingElse
    I need to deploy a web application in WAR format to a remote Tomcat 6 server. The thing is that I don't want to do it the easy way, meaning just copying the WAR file to /webapps. So the second choice is to create a unique "context descriptor" and point it at the WAR file. (I hope I have that right so far.) So I have a few questions:

    1. Is the WAR file allowed to be anywhere in the file system? Meaning, can I copy the WAR file anywhere in the remote file system, except /webapps or any other folder of the Tomcat 6 installation?
    2. Is there an easy way to test whether the deployment was successful, without using a browser or anything? I'm reaching the remote server only via SSH and a terminal. (I'm thinking ping?)
    3. Is it normal that startup.sh/shutdown.sh don't exist? I'm not the admin of the server and don't know how Tomcat 6 is installed, but I'm sure that in my local Tomcat installations these files are in /bin and ready to use. I mean, you can still start/restart/stop Tomcat, just not with these standard scripts.

    Thanks a lot.

    Read the article
