Search Results

Search found 13366 results on 535 pages for 'non ascii'.

Page 498 of 535

  • SQL not yielding expected results

    - by AnonJr
    I have three tables related to this particular query:

      Lawson_Employees: LawsonID (pk), LastName, FirstName, AccCode (numeric)
      Lawson_DeptInfo: AccCode (pk), AccCode2 (don't ask, HR set it up), DisplayName
      tblExpirationDates: EmpID (pk), ACLS (date), EP (date), CPR (date), CPR_Imported (date), PALS (date), Note

    The goal is to get the data I need to report on all those who have already expired in one or more certifications, or are going to expire in the next 90 days. Some important notes:

    - This is being run as part of a VBScript, so the 90-day date is calculated when the script is run. I'm using 2010-08-31 as a placeholder since it's the result at the time this question is being posted.
    - All cards expire at the end of the month (which is why the above date is for the end of August and not 90 days on the dot).
    - A valid EP card supersedes ACLS certification, but only the latter is required of some employees. (I wasn't going to worry about it until I got this question answered, but if I can get the help I'll take it.)
    - The CPR column contains the expiration date for the last class they took with us (NULL if they didn't take any classes with us).
    - The CPR_Imported column contains the expiration date for the last class they took somewhere else (NULL if they didn't take it elsewhere, and bravo for following policy).

    The distinction between CPR classes is important for other reports. For purposes of this report, all we really care about is which one is the most current - or at least is currently current. If I have to, I'll ignore ACLS and PALS for the time being, as it is non-compliance with CPR training that is the big issue at the moment. (Not that the others won't be, but they weren't mentioned in the last meeting...)

    Here's the query I have so far, which is giving me good data:

      SELECT iEmp.LawsonID, iEmp.LastName, iEmp.FirstName, dept.AccCode2, dept.DisplayName,
             Exp.ACLS, Exp.EP, Exp.CPR, Exp.CPR_Imported, Exp.PALS, Exp.Note
      FROM (Lawson_Employees AS iEmp
            LEFT JOIN Lawson_DeptInfo AS dept ON dept.AccCode = iEmp.AccCode)
           LEFT JOIN tblExpirationDates AS Exp ON iEmp.LawsonID = Exp.EmpID
      WHERE iEmp.CurrentEmp = 1
        AND ((Exp.ACLS <= #2010-08-31# AND Exp.ACLS IS NOT NULL)
          OR (Exp.CPR <= #2010-08-31# AND Exp.CPR_Imported <= #2010-08-31#)
          OR (Exp.PALS <= #2010-08-31# AND Exp.PALS IS NOT NULL))
      ORDER BY dept.AccCode2, iEmp.LastName, iEmp.FirstName;

    After perusing the result set, I think I'm missing some expiration dates that should be in the result set. Am I missing something? This is the sucky part of being the only developer in the department... no one to ask for a little help.
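    An aside on the likely gap (my reading, not something stated in the post): a NULL never satisfies <=, so the CPR branch only matches employees with a date in both CPR columns. Anyone trained only in-house (CPR_Imported NULL) or only externally (CPR NULL) drops out even if their one real date has expired. A hedged rewrite of just that branch:

      OR ((Exp.CPR <= #2010-08-31# OR Exp.CPR IS NULL)
          AND (Exp.CPR_Imported <= #2010-08-31# OR Exp.CPR_Imported IS NULL))

    Note this variant also flags people with no CPR dates at all; whether that is desired depends on the policy being reported on.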

    Read the article

  • Can I have a workspace that is both a git workspace and an svn workspace?

    - by Troy
    I have checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server, it looks quite a lot different (maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy).

    Of course sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over onto the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy/git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. Then if I need to make changes to the build, I push those back into the git master (which is also an svn working copy), then check them into the main svn repo.

      |-------------|          |---------------------|          |--------------------|
      |main svn repo| <------- | svn working copy    | <------- | non-svn-versioned  |
      |-------------|          | (svn dev workspace/ |          | build workspace    |
                               |  git master)        |          | (git working copy) |
                               |---------------------|          |--------------------|

    Just switching everything to git would obviously be better, but: big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now. BTW, I know there is a maven plugin for Eclipse and everything; I'm mainly interested to know if there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?
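    For what it's worth, the layering described above is mechanically just a git repository initialized on top of the svn working copy, with svn's metadata kept out of git (a sketch of one way to wire it up; the paths are made up):

      # inside the existing svn working copy (the "dev workspace")
      cd ~/work/dev-workspace
      git init
      echo ".svn/" >> .gitignore          # keep svn metadata out of git
      git add -A && git commit -m "baseline imported from svn"

      # throwaway build workspace: a plain git clone, no svn metadata at all
      git clone ~/work/dev-workspace ~/work/build-workspace

    Changes made in the build workspace can then be pulled back into the dev workspace and committed to svn from there, which matches the diagram.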

    Read the article

  • Custom WM profile - issues with codec

    - by dominolog
    Hello, I created a custom WM encoder profile. The reason I need a custom, non-standard WM profile is that the video resolution must be the same as that of the input video stream. I created the profile below, but after I encode my video and audio with it, Windows Media Player says while loading that the WMV1 codec is not found and prompts me to download the WM encoder codecs. After installing them, the problem still exists.

      <profile version="589824" storageformat="1"
               name="mReplay Hi-End profile; WM Format 9; Audio &amp; Video"
               description="Streams: 1 audio 1 video">
        <streamconfig majortype="{73647561-0000-0010-8000-00AA00389B71}"
                      streamnumber="1" streamname="Audio Stream" inputname="Audio409"
                      bitrate="320008" bufferwindow="-1" reliabletransport="0"
                      decodercomplexity="" rfc1766langid="en-us">
          <wmmediatype subtype="{00000161-0000-0010-8000-00AA00389B71}"
                       bfixedsizesamples="1" btemporalcompression="0" lsamplesize="14861">
            <waveformatex wFormatTag="353" nChannels="2" nSamplesPerSec="44100"
                          nAvgBytesPerSec="40001" nBlockAlign="14861" wBitsPerSample="16"
                          codecdata="008800000F0035E80000"/>
          </wmmediatype>
        </streamconfig>
        <streamconfig majortype="{73646976-0000-0010-8000-00AA00389B71}"
                      streamnumber="2" streamname="Video Stream" inputname="Video409"
                      bitrate="100000" bufferwindow="-1" reliabletransport="0"
                      decodercomplexity="AU" rfc1766langid="en-us"
                      vbrenabled="1" vbrquality="95" bitratemax="0" bufferwindowmax="0">
          <videomediaprops maxkeyframespacing="80000000" quality="100"/>
          <wmmediatype subtype="{31564D57-0000-0010-8000-00AA00389B71}"
                       bfixedsizesamples="0" btemporalcompression="1" lsamplesize="0">
            <videoinfoheader dwbitrate="100000" dwbiterrorrate="0" avgtimeperframe="400000">
              <rcsource left="0" top="0" right="0" bottom="0"/>
              <rctarget left="0" top="0" right="0" bottom="0"/>
              <bitmapinfoheader biwidth="0" biheight="0" biplanes="1" bibitcount="24"
                                bicompression="WMV1" bisizeimage="0" bixpelspermeter="0"
                                biypelspermeter="0" biclrused="0" biclrimportant="0"/>
            </videoinfoheader>
          </wmmediatype>
        </streamconfig>
        <streamprioritization>
          <stream number="1" mandatory="0"/>
          <stream number="2" mandatory="0"/>
        </streamprioritization>
      </profile>

    Read the article

  • Binding on a port with netpipes/netcat

    - by mindas
    I am trying to write a simple bash script that listens on a port and responds with a trivial HTTP response. My specific issue is that I am not sure if the port is available, and in case of bind failure I fall back to the next port until a bind succeeds. So far the easiest way for me to achieve this was something like:

      for (( i=$PORT_BASE; i < $(($PORT_BASE+$PORT_RANGE)); i++ ))
      do
          if [ $DEBUG -eq 1 ] ; then
              echo trying to bind on $i
          fi
          /usr/bin/faucet $i --out --daemon echo test 2>/dev/null
          if [ $? -eq 0 ] ; then  # success?
              port=$i
              if [ $DEBUG -eq 1 ] ; then
                  echo "bound on port $port"
              fi
              break
          fi
      done

    Here I am using faucet from the netpipes Ubuntu package. The problem with this is that if I simply print "test" to the output, curl complains about a non-standard HTTP response (error code 18). That's fair enough, as I don't print an HTTP-compatible response. If I replace echo test with echo -ne "HTTP/1.0 200 OK\r\n\r\ntest", curl still complains:

      user@server:$ faucet 10020 --out --daemon echo -ne "HTTP/1.0 200 OK\r\n\r\ntest"
      ...
      user@client:$ curl ip.of.the.server:10020
      curl: (56) Failure when receiving data from the peer

    I think the problem lies in how faucet is printing the response and handling the connection. For example, if I do the server side in netcat, curl works fine:

      user@server:$ echo -ne "HTTP/1.0 200 OK\r\n\r\ntest\r\n" | nc -l 10020
      ...
      user@client:$ curl ip.of.the.server:10020
      test
      user@client:$

    I would be more than happy to replace faucet with netcat in my main script, but the problem is that I want to spawn an independent server process so I can run the client from the same base shell. faucet has a very handy --daemon parameter, as it forks to the background and I can use $? (the exit status code) to check if the bind succeeded. If I were to use netcat for a similar purpose, I would have to fork it using & and $? would not work. Does anybody know why faucet isn't responding correctly in this particular case, and/or can suggest a solution to this problem? I am not married to either faucet or netcat, but I would like the solution to be implemented using bash or its utilities (as opposed to writing something in yet another scripting language, such as Perl or Python).
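    One hedged workaround for the "fork netcat and still detect bind failure" problem (a sketch, untested against this exact setup): background nc yourself, give it a moment, and treat "the process is still alive" as "the bind succeeded", since nc exits immediately when it cannot listen:

      for (( i=PORT_BASE; i<PORT_BASE+PORT_RANGE; i++ )); do
          printf 'HTTP/1.0 200 OK\r\n\r\ntest\r\n' | nc -l "$i" 2>/dev/null &
          nc_pid=$!
          sleep 0.2                        # give nc a moment to bind or die
          if kill -0 "$nc_pid" 2>/dev/null; then
              port=$i                      # still running, so it is listening
              break
          fi
          wait "$nc_pid" 2>/dev/null       # reap the failed attempt
      done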

    Read the article

  • BlackBerry OS 7.1 secured TLS connection is closed after very short time

    - by MrVincenzo
    To make a long story short: same client-server configuration, same network topology, same device (Bold 9900) - works perfectly well on OS 7.0 but doesn't work as expected on OS 7.1, where the secured TLS connection is closed by the server after a very short time.

    My application opens a secured TLS connection to a server. The connection is kept alive by an application-layer keep-alive mechanism and remains open until the client closes it. Below is a simplified version of the actual code that opens the connection and reads from the socket. The code works perfectly on OS 5.0-7.0 but doesn't work as expected on OS 7.1. When running on OS 7.1, the blocking read() returns -1 (end of the stream has been reached) after a very short time (10-45 seconds). On OS 5.0-7.0 the call to read() remains blocking until the next data arrives, and the connection is never closed by the server.

      Connection connection = Connector.open(connectionString);
      connInputStream = connection.openInputStream();
      while (true) {
          try {
              retVal = connInputStream.read();
              if (-1 == retVal) {
                  break; // end of stream has been reached
              }
          } catch (Exception e) {
              // do error handling
          }
          // data read from stream is handled here
      }

    UPDATE 1: Apparently, the problem appears only when I use a secured TLS connection (either over the mobile network or WiFi) on OS 7.1. Everything works as expected when opening a non-secured connection on OS 7.1. For TLS over mobile networks I use the following connection string:

      connectionString = "tls://someipaddress:443;deviceside=false;ConnectionType=mds-public;EndToEndDesired";

    For TLS over WiFi I use the following connection string:

      connectionString = "tls://someipaddress:443;deviceside=true;interface=wifi;EndToEndRequired";

    UPDATE 2: The connection is never idle; I am constantly receiving and sending data on it. The issue appears both when using a mobile connection and WiFi, and both on real OS 7.1 devices and simulators. I am starting to suspect that it is somehow related either to the connection string I am using or to the TLS handshake.

    UPDATE 3: According to Wireshark captures that I made with the OS 7.1 simulator, the secured TLS connection is being closed by the server (the client receives FIN). For the moment I don't have the server's private key, therefore I am unable to debug the TLS handshake.

    Read the article

  • Twisted + SQLAlchemy and the best way to do it.

    - by Khorkrak
    So I'm writing yet another Twisted-based daemon. It'll have an xmlrpc interface as usual so I can easily communicate with it and have other processes interchange data with it as needed. This daemon needs to access a database. We've been using SQLAlchemy in place of hard-coding SQL strings for our latest projects - those mostly done for web apps in Pylons. We'd like to do the same for this app and re-use library code that makes use of SQLAlchemy. So what to do? Well, of course, since that library was written for use in a Pylons app, it's all the straightforward blocking-style code that everyone is accustomed to, and all of the non-blocking is magically handled by Pylons via threading, thread locals, scoped sessions and so on. So now for Twisted I guess I'm a bit stuck. I could:

    1. Just write the SQL I need directly, if it's minimal, and use the dbapi pool in Twisted to do runInteractions etc. when I need to hit the db.
    2. Use the objects and inherently blocking methods in our library and block now and then in my Twisted daemon. Bah.
    3. Use sAsync, which was last updated in 2008, and kind of reuse the models we have defined already - but not really - and it doesn't address code that needs to work in Pylons either. Does that even work with the latest version of SQLAlchemy? Who knows. That project looked great, though - why was it apparently abandoned?
    4. Spawn a separate subprocess and have it deal with the library code and all its blocking, the results being returned to my daemon when ready as objects marshalled via YAML over xmlrpc.
    5. Use deferToThread and then expunge the objects returned, having made sure to do eager loads so that I have all the stuff I might need. Seems kind of ugha to me.

    I'm also stuck using Python 2.5.4 atm, so no 2.6 yet, and I don't think I can just do an import from __future__ to get access to the cool new multiprocessing module stuff in there. That's OK though, I guess, as we've got dealing with interprocess communication down pretty well. So I'm leaning towards option 4, mostly as that would avoid the mortal sin of logic duplication with option 1 while also staying the heck away from threads. Any better ideas?
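    For reference, option 5 boils down to very little Twisted code (a minimal sketch; the Session factory, Thing model and handle_things callback are made-up names, and expunge_all assumes a reasonably recent SQLAlchemy):

      from twisted.internet import threads

      def _load_things():
          session = Session()
          try:
              things = session.query(Thing).all()  # eager-load what you need here
              session.expunge_all()                # detach so they outlive the session
              return things
          finally:
              session.close()

      d = threads.deferToThread(_load_things)
      d.addCallback(handle_things)  # fires back in the reactor thread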

    Read the article

  • Multi-threaded random_r is slower than the single-threaded version

    - by Nixuz
    The following program is essentially the same as the one described here. When I compile and run the program using two threads (NTHREADS == 2), I get the following run times:

      real 0m14.120s
      user 0m25.570s
      sys  0m0.050s

    When it is run with just one thread (NTHREADS == 1), I get significantly better run times, even though it is only using one core:

      real 0m4.705s
      user 0m4.660s
      sys  0m0.010s

    My system is dual core, and I know random_r is thread safe and I am pretty sure it is non-blocking. When the same program is run without random_r and a calculation of cosines and sines is used as a replacement, the dual-threaded version runs in about half the time, as expected.

      #include <pthread.h>
      #include <stdlib.h>
      #include <stdio.h>

      #define NTHREADS 2
      #define PRNG_BUFSZ 8
      #define ITERATIONS 1000000000

      void* thread_run(void* arg) {
          int r1, i, totalIterations = ITERATIONS / NTHREADS;
          for (i = 0; i < totalIterations; i++) {
              random_r((struct random_data*)arg, &r1);
          }
          printf("%i\n", r1);
      }

      int main(int argc, char** argv) {
          struct random_data* rand_states = (struct random_data*)calloc(NTHREADS, sizeof(struct random_data));
          char* rand_statebufs = (char*)calloc(NTHREADS, PRNG_BUFSZ);
          pthread_t* thread_ids;
          int t = 0;
          thread_ids = (pthread_t*)calloc(NTHREADS, sizeof(pthread_t));
          /* create threads */
          for (t = 0; t < NTHREADS; t++) {
              initstate_r(random(), &rand_statebufs[t], PRNG_BUFSZ, &rand_states[t]);
              pthread_create(&thread_ids[t], NULL, &thread_run, &rand_states[t]);
          }
          for (t = 0; t < NTHREADS; t++) {
              pthread_join(thread_ids[t], NULL);
          }
          free(thread_ids);
          free(rand_states);
          free(rand_statebufs);
      }

    I am confused why, when generating random numbers, the two-threaded version performs much worse than the single-threaded version, considering random_r is meant to be used in multi-threaded applications.
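    A hedged diagnosis (mine, not from the post): this access pattern is a textbook false-sharing setup. Both the struct random_data states and especially the 8-byte PRNG state buffers are calloc'd back to back, so the two threads' hot state shares cache lines and every random_r call ping-pongs those lines between cores. Padding each thread's state out to its own cache line is the usual test:

      #define CACHE_LINE 64

      struct padded_prng {
          struct random_data state;
          char statebuf[PRNG_BUFSZ];
          char pad[CACHE_LINE];      /* keep neighbouring threads off this line */
      };

      /* one self-contained, padded PRNG per thread instead of packed arrays */
      struct padded_prng* prngs = (struct padded_prng*)calloc(NTHREADS, sizeof(struct padded_prng));
      initstate_r(random(), prngs[t].statebuf, PRNG_BUFSZ, &prngs[t].state);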

    Read the article

  • Having trouble understanding some code (Ruby on Rails)

    - by user284194
    I posted a question a while ago asking how I could limit the rate at which a form could be submitted from a Rails application. I was helped by a very patient user and their solution works great. The code was for my Comments controller, and now I find myself wanting to add this functionality to another controller, my Messages controller. I immediately tried reusing the working code from the Comments controller, but I couldn't get it to work. Instead of asking for the working code, could someone please help me understand my working Comments controller code?

      class CommentsController < ApplicationController
        #...
        before_filter :post_check

        def record_post_time
          cookies[:last_post_at] = Time.now.to_i
        end

        def last_post_time
          Time.at((cookies[:last_post_at].to_i rescue 0))
        end

        MIN_POST_TIME = 2.minutes

        def post_check
          return true if (Time.now - last_post_time) > MIN_POST_TIME
          flash[:warning] = "You are trying to reply too fast."
          @message = Message.find(params[:message_id])
          redirect_to(@message)
          return false
        end

        #...

        def create
          @message = Message.find(params[:message_id])
          @comment = @message.comments.build(params[:comment])
          if @comment.save
            record_post_time
            flash[:notice] = "Replied to \"#{@message.title}\""
            redirect_to(@message)
          else
            render :action => "new"
          end
        end

        def update
          @message = Message.find(params[:message_id])
          @comment = Comment.find(params[:id])
          if @comment.update_attributes(params[:comment])
            record_post_time
            redirect_to post_comment_url(@message, @comment)
          else
            render :action => "edit"
          end
        end

        #...
      end

    My Messages controller is pretty much a standard Rails-generated controller with a few before filters, the associated private methods for DRYing up the code, and a redirect for nonexistent pages. I'll explain how much of the code I understand. When a comment is created, a cookie is created with a last_post_time value. If they try to post another comment, the cookie is checked to see if the last one was made in the last two minutes. If it was, a flash warning is displayed and no comment is recorded. What I don't really understand is how the post_check method works and how I can adapt it for my simpler posts controller. I thought I could reuse all the code in the Messages controller with the exception of the line:

      @message = Message.find(params[:message_id]) # (don't need the redirect code)

    in the post_check method. But it trips up on the "record_post_time" in the create action/method. I really want to understand this. Can someone explain why this doesn't work? I greatly appreciate you reading my lengthy question.
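    One common way to make this reusable across controllers (a sketch of the usual pattern, not the original poster's code; the throttle_redirect_target hook is made up) is to pull the throttling into a module and let each controller say where a throttled user lands:

      module PostThrottling
        MIN_POST_TIME = 2.minutes

        def record_post_time
          cookies[:last_post_at] = Time.now.to_i
        end

        def last_post_time
          Time.at((cookies[:last_post_at].to_i rescue 0))
        end

        def post_check
          return true if (Time.now - last_post_time) > MIN_POST_TIME
          flash[:warning] = "You are trying to reply too fast."
          redirect_to throttle_redirect_target  # each controller defines this
          false
        end
      end

      class MessagesController < ApplicationController
        include PostThrottling
        before_filter :post_check, :only => [:create, :update]

        private

        def throttle_redirect_target
          messages_url  # hypothetical: wherever makes sense for this controller
        end
      end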

    Read the article

  • Not sure what happens to my app's objects when using NSURLSession in background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code. I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point.

    For instance, a simple profile picture object:

      @interface ProfilePicture : NSObject
      @property int userId;
      @property UIImage *profilePicture;
      @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server - this could also be a queryable property, but let's assume it is attached to this object
      @end

    Now let's say I want to upload the profile picture to a remote server - I could do something like:

      @implementation ProfilePictureUploader

      - (void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void (^)(BOOL successInUploading))completion {
          NSURLSession *uploadImageSession = ...; // code to set up the upload - and call the completion handler
          [uploadImageSession resume];
      }

      @end

    Now somewhere else in my code I want to upload the profile picture - and if it was successful, update the UI and the database to record that this happened:

      ProfilePicture *aNewProfilePicture = ...;
      aNewProfilePicture.profilePicture = aImage;
      aNewProfilePicture.userId = 123;
      aNewProfilePicture.successfullyUploaded = NO;

      // write the change to disk
      [MyDatabase write:aNewProfilePicture];

      // upload the image to the server
      ProfilePictureUploader *uploader = [ProfilePictureUploader ...];
      [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
          if (successInUploading) {
              // persist the change to my db
              aNewProfilePicture.successfullyUploaded = YES;
              [MyDatabase update:aNewProfilePicture]; // persist the change
          }
      }];

    Now obviously if my app is running, then this ProfilePicture object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot. All callbacks that may exist are maintained and the app state is straightforward.

    But I'm not clear what happens if the app "dies" at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process. Therefore the upload will continue and my app will be awakened at some point in the future to handle completion. But the object aNewProfilePicture is nonexistent at that point and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the successfullyUploaded property for that user)? Do I need to re-work everything touching the DB or UI to correspond with the new API and work in a context-free environment?
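    For context, the shape the API expects (a sketch of the standard pattern, not the poster's code; MyDatabase's method here is hypothetical) is: recreate a session with the same identifier on relaunch, carry a lookup key on the task itself, and do the persistence from the delegate callback, since the original objects and completion blocks will not survive:

      // app delegate: called when the system relaunches the app for the session
      - (void)application:(UIApplication *)application
          handleEventsForBackgroundURLSession:(NSString *)identifier
                            completionHandler:(void (^)(void))completionHandler {
          NSURLSessionConfiguration *config =
              [NSURLSessionConfiguration backgroundSessionConfiguration:identifier];
          // same identifier => the in-flight tasks reattach to this session
          self.session = [NSURLSession sessionWithConfiguration:config
                                                       delegate:self
                                                  delegateQueue:nil];
          self.backgroundCompletionHandler = completionHandler;
      }

      // when creating the upload task, stash the DB key on it
      task.taskDescription = [NSString stringWithFormat:@"%d", aNewProfilePicture.userId];

      // delegate method: runs even though the original completion block is gone
      - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task
          didCompleteWithError:(NSError *)error {
          if (error == nil) {
              int userId = [task.taskDescription intValue];
              // hypothetical helper: re-fetch by key and flip the flag
              [MyDatabase markProfilePictureUploadedForUserId:userId];
          }
      }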

    Read the article

  • iOS dynamic object creation

    - by Abdul Ahmad
    I've worked with Xcode and iOS on a few personal projects and have always used non-object-oriented designs for everything... just because I've been doing mostly learning/experimenting with things. Now I've been trying to implement some object-oriented design in one game I've made previously. The idea is, I have a spaceship that can shoot bullets. In the past I basically added the UIImageView to the storyboard and then connected it to the .h file, and from there did things on it like move it around or whatever (using CGPointMake). The idea now is to make a method (and a whole other class soon) that will create a UIImageView programmatically, allocate it, add it to the superview, etc... I've got this working so far - easy stuff - but the next part is a bit harder. Where do I store this local variable "bullet"? I've been saving it to an NSMutableArray and doing the following (here are the methods that I have):

      - (void)movement {
          for (int i = 0; i < [array1 count]; i++) {
              UIImageView *a = [array1 objectAtIndex:i];
              a.center = CGPointMake(a.center.x + 2, a.center.y);
              if (a.center.x > 500) {
                  [array1 removeObjectAtIndex:i];
                  [a removeFromSuperview];
              }
          }
      }

      - (void)getBullet {
          UIImageView *bullet = [[UIImageView alloc] initWithFrame:CGRectMake(ship.center.x + 20, ship.center.y - 2, 15, 3)];
          bullet.image = [UIImage imageNamed:@"bullet2.png"];
          bullet.hidden = NO;
          [self.view addSubview:bullet];
          [array1 addObject:bullet];
      }

    (By the way, array1 is declared in the .h file.) And there's a timer that controls the movement method:

      timer = [NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(movement) userInfo:nil repeats:YES];

    First question: what is the correct way of doing this - storing a bullet, for example, until it is removed from the superview - or should I store it another way? And another question: when I remove a UIImageView from the superview, does that remove it from memory so it's not using up system resources? Thank you for the help! (Will update if I think of other questions.)
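    One small observation on the movement loop (mine, not from the post): removing an element while walking the array forward shifts every later index down, so the bullet right after a removed one is skipped for that frame. Walking the array backwards avoids it (a sketch):

      - (void)movement {
          for (NSInteger i = (NSInteger)[array1 count] - 1; i >= 0; i--) {
              UIImageView *a = [array1 objectAtIndex:i];
              a.center = CGPointMake(a.center.x + 2, a.center.y);
              if (a.center.x > 500) {
                  [array1 removeObjectAtIndex:i]; // safe: only lower indices remain
                  [a removeFromSuperview];
              }
          }
      }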

    Read the article

  • What's causing this background-image to display "incorrectly" in Opera and Firefox?

    - by Sukasa
    I know this is something I'm probably doing wrong, so please don't incinerate me for the thread title. I'm trying to put together a small personal website using HTML5/CSS3. I've checked with the W3C validator, and the site and CSS file fully conform according to the validator (however, the validator has a warning attached that it might not be perfect). I'm not sure how to explain it without a picture, so here's a comparison of Chrome/Opera/Firefox (screenshot not reproduced here). You can sorta see how in Chrome the background image is in one non-repeating piece, whereas in Opera/Firefox the image has, oddly, been broken up and placed slightly differently. I'm confident this is due to an error on my part, but I've had no luck at all figuring out why the image is being mangled in Opera and Firefox. Here's the CSS that's relevant to this issue:

      /* Content Pane */
      .content {
          position: absolute;
          left: 220px;
          width: 800px;
          top: 80px;
          min-height: 550px;
          background-color: rgba(8,12,42,0.85);
      }

      /* Headers */
      .content hgroup {
          background: url("Header_Flat.png") no-repeat left top;
          min-height: 38px;
          padding-left: 28px;
          text-shadow: 0 0 8px #FFA9FF;
          color: Black;
          text-decoration: none;
      }

      .content hgroup h1 {
          display: block;
      }

      .content hgroup h3 {
          display: inline;
          position: relative;
          top: -12px;
          left: 20px;
          text-shadow: 0 0 6px #AFF9FF;
      }

      .content hgroup h4 {
          display: inline;
          position: relative;
          top: -12px;
          left: 20px;
          font-size: xx-small;
          text-shadow: 0 0 6px #AFF9FF;
      }

    And the HTML:

      <hgroup>
          <h1>New Site!</h1>
          <h3>Now with Bloom!</h3>
          <h4> - Posted Tuesday, May 11th 2010</h4>
      </hgroup>

    Can anyone see what I'm doing wrong?
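    A hedged guess at the cause (my reading, not confirmed in the thread): <hgroup> was a brand-new HTML5 element at the time, and a browser that doesn't recognize it lays it out as inline. An inline box is fragmented into one box per line, and each fragment gets its own slice of the background - which would split the image exactly as described. The usual cure is to make the new elements block-level explicitly:

      /* make the (then-)new HTML5 elements behave as blocks in older browsers */
      header, footer, nav, section, article, aside, hgroup {
          display: block;
      }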

    Read the article

  • Use multiple inheritance to discriminate usage roles?

    - by Arne
    Hi fellows, it's my flight simulation application again. I am leaving the mere prototyping phase now and am starting to flesh out the software design. At least I try...

    Each of the aircraft in the simulation has a flight plan associated with it, the exact nature of which is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. The aircraft model most of the time only needs read access to the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate a waypoint has been reached. This will affect the iterator returned by the function ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference, which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this because I would like to enforce the usage rule described above at compile time. Note that the class WayPointIter is equivalent to an STL const iterator; that is, the waypoint can not be mutated through the iterator.

      class FlightPlan {
      public:
          void AppendWayPoint(const WayPointIter& at, WayPoint new_wp);
          void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp);
          void RemoveWayPoint(WayPointIter at);
          (...)

          WayPointIter First() const;
          WayPointIter Last() const;
          WayPointIter Active() const;
          void AdvanceActiveWayPoint() const;
          (...)
      };

    My idea to overcome the issue is this: define an abstract interface class for each usage role and inherit FlightPlan from both. Each user then only gets passed a reference of the appropriate usage role.

      class IFlightPlanActiveWayPoint {
      public:
          virtual WayPointIter Active() const = 0;
          virtual void AdvanceActiveWayPoint() const = 0;
      };

      class IFlightPlanEditable {
      public:
          virtual void AppendWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
          virtual void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
          virtual void RemoveWayPoint(WayPointIter at) = 0;
          (...)
      };

    Thus the declaration of FlightPlan would only need to be changed to:

      class FlightPlan : public IFlightPlanActiveWayPoint, public IFlightPlanEditable {
          (...)
      };

    What do you think? Are there any caveats I might be missing? Is this design clear, or should I come up with something different for the sake of clarity? Alternatively I could also define a special ActiveWayPoint class which would contain the function AdvanceActiveWayPoint(), but I feel that this might be unnecessary. Thanks in advance!
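    For what it's worth, the call site would then look something like this (a sketch; the Aircraft class and the ReachedWayPoint() helper are made up to illustrate the role split):

      class Aircraft {
      public:
          explicit Aircraft(const IFlightPlanActiveWayPoint& plan) : plan_(plan) {}

          void Update() {
              if (ReachedWayPoint(plan_.Active()))
                  plan_.AdvanceActiveWayPoint();  // allowed by this role
              // AppendWayPoint() etc. are simply not visible through this interface
          }

      private:
          const IFlightPlanActiveWayPoint& plan_;
      };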

    Read the article

  • How to invoke an activity of a library project from an Android app

    - by Austin
    I have some open source Android code that I need to use in my Android app. It has all the source code as well as resource files, manifest files and classpath. It can be compiled as a separate Android app. I have constraints on using the open source:

    1. I can't change a single line of code.
    2. I can't use it as a separate app.

    These constraints are non-negotiable. What I have done is compile the open source as a class library (in Eclipse: Project Properties - Android - tick the "Is Library" checkbox). This has resulted in the generation of .class files (in bin) for the java files and resource files. This open source has an Android activity that I want to open from my application. So I have linked the directory of these sets of class files in the source section of my java build path (in .classpath). I have declared the activity in my manifest file with the proper action intent filters. Now when I try to call the activity from my code, it's not working. Cleaning and rebuilding doesn't help.

    However, if I build the open source project and my app in the same Eclipse workspace and link the open source into my app in the exact same manner, it works fine. I am not able to identify the difference. All settings seem to be the same (all files are identical in both cases), but only in the second case does it work.

    I have tried it as a jar file also. I built the open source as a project library and exported it into a jar file (excluding the manifest file). But in that case I am getting the following error:

      UNEXPECTED TOP-LEVEL EXCEPTION:
      java.lang.IllegalArgumentException: already added: ...
      Conversion to Dalvik format failed with error 1

    This, I guess, is coming because the Android library (2.2) has been included twice in my app (once for building my app and once for building the open source). I don't know how to avoid this. Cleaning the project doesn't help.

    What I require is to use the open source and invoke its activities in my app without violating the constraints. If I can use the open source as a bunch of .class files, then great; or else any other way will do fine. Please look into it and let me know. Thanks

    Read the article

  • get return value from 2 threads in C

    - by polslinux
      #include <stdio.h>
      #include <stdlib.h>
      #include <pthread.h>
      #include <stdint.h>
      #include <inttypes.h>

      typedef struct tmp_num {
          int tmp_1;
          int tmp_2;
      } t_num;

      t_num t_nums;

      void *num_mezzo_1(void *num_orig);
      void *num_mezzo_2(void *num_orig);

      int main(int argc, char *argv[]) {
          pthread_t thread1, thread2;
          int tmp = 0, rc1, rc2, num;

          num = atoi(argv[1]);
          if (num <= 3) {
              printf("Questo è un numero primo: %d\n", num);
              exit(0);
          }
          if ((rc1 = pthread_create(&thread1, NULL, &num_mezzo_1, (void *)&num))) {
              printf("Creazione del thread fallita: %d\n", rc1);
              exit(1);
          }
          if ((rc2 = pthread_create(&thread2, NULL, &num_mezzo_2, (void *)&num))) {
              printf("Creazione del thread fallita: %d\n", rc2);
              exit(1);
          }
          t_nums.tmp_1 = 0;
          t_nums.tmp_2 = 0;
          pthread_join(thread1, (void **)(&t_nums.tmp_1));
          pthread_join(thread2, (void **)(&t_nums.tmp_2));
          tmp = t_nums.tmp_1 + t_nums.tmp_2;
          printf("%d %d %d\n", tmp, t_nums.tmp_1, t_nums.tmp_2);
          if (tmp > 2) {
              printf("Questo NON è un numero primo: %d\n", num);
          } else {
              printf("Questo è un numero primo: %d\n", num);
          }
          exit(0);
      }

      void *num_mezzo_1(void *num_orig) {
          int cont_1;
          int *n_orig = (int *)num_orig;

          t_nums.tmp_1 = 0;
          for (cont_1 = 1; cont_1 <= (*n_orig / 2); cont_1++) {
              if ((*n_orig % cont_1) == 0) {
                  (t_nums.tmp_1)++;
              }
          }
          pthread_exit((void *)(&t_nums.tmp_1));
          return NULL;
      }

      void *num_mezzo_2(void *num_orig) {
          int cont_2;
          int *n_orig = (int *)num_orig;

          t_nums.tmp_2 = 0;
          for (cont_2 = ((*n_orig / 2) + 1); cont_2 <= *n_orig; cont_2++) {
              if ((*n_orig % cont_2) == 0) {
                  (t_nums.tmp_2)++;
              }
          }
          pthread_exit((void *)(&t_nums.tmp_2));
          return NULL;
      }

    How this program works: I input a number and the program calculates whether it is a prime number or not (I know it is a bad algorithm, but I only need to learn pthreads). The problem is that the returned values are far too big. For example, if I enter "12", the values of tmp, tmp_1 and tmp_2 in main are 12590412, 6295204 and 6295208. Why do I get those numbers?
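    The numbers look like addresses because they are (my diagnosis; it is not spelled out in the post): pthread_join(thread1, (void **)(&t_nums.tmp_1)) stores the thread's void * return value - here a pointer into t_nums - over the int itself. Since each thread already writes its count into the shared global, the simplest hedged fix is to not capture the return value at all:

      pthread_join(thread1, NULL);
      pthread_join(thread2, NULL);
      /* t_nums.tmp_1 and t_nums.tmp_2 now hold the divisor counts themselves */
      tmp = t_nums.tmp_1 + t_nums.tmp_2;

    Also note that main() zeroes t_nums after the threads have been created, which can race with the threads' own initialization; zeroing before pthread_create avoids that.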

    Read the article

  • Update Statement Updates 0 Rows via the C# Winform Application?

    - by peace
    First of all, please help me out! I cannot take this anymore; I could not find where the error is located. Here is my problem: I'm trying to update a row via a C# WinForms application. The update query generated from the application is formatted correctly - I tested it in the SQL Server environment and it worked well. But when I run it from the application, I get 0 rows updated.

    Here is the snippet that generates the update statement using reflection - don't try to figure it all out; carry on reading after the code portion:

      public void Update(int cusID)
      {
          SqlCommand objSqlCommand = new SqlCommand();
          Customer cust = new Customer();
          string SQL = null;
          try
          {
              if ((cusID != 0))
              {
                  foreach (PropertyInfo PropertyItem in this.GetType().GetProperties())
                  {
                      if (!(PropertyItem.Name.ToString() == cust.PKName))
                      {
                          if (PropertyItem.Name.ToString() != "TableName")
                          {
                              if (SQL == null)
                              {
                                  SQL = PropertyItem.Name.ToString() + " = @" + PropertyItem.Name.ToString();
                              }
                              else
                              {
                                  SQL = SQL + ", " + PropertyItem.Name.ToString() + " = @" + PropertyItem.Name.ToString();
                              }
                          }
                          else
                          {
                              break;
                          }
                      }
                  }
                  objSqlCommand.CommandText = "UPDATE " + this.TableName + " SET " + SQL + " WHERE " + cust.PKName + " = @cusID AND PhoneNumber = " + "'" + "@phNum" + "'";
                  foreach (PropertyInfo PropertyItem in this.GetType().GetProperties())
                  {
                      if (!(PropertyItem.Name.ToString() == cust.PKName))
                      {
                          if (PropertyItem.Name.ToString() != "TableName")
                          {
                              objSqlCommand.Parameters.AddWithValue("@" + PropertyItem.Name.ToString(), PropertyItem.GetValue(this, null));
                          }
                          else
                          {
                              break;
                          }
                      }
                  }
                  objSqlCommand.Parameters.AddWithValue("@cusID", cusID);
                  objSqlCommand.Parameters.AddWithValue("@phNum", this.PhoneNumber);
                  DAL.ExecuteSQL(objSqlCommand);
              }
              else
              {
                  // AppEventLog.AddWarning("Primary Key is not provided for Update.")
              }
          }
          catch (Exception ex)
          {
              // AppEventLog.AddError(ex.Message.ToString)
          }
      }

    This part:

      objSqlCommand.CommandText = "UPDATE " + this.TableName + " SET " + SQL + " WHERE " + cust.PKName + " = @cusID AND PhoneNumber = " + "'" + "@phNum" + "'";

    generates the following DML:

      UPDATE CustomerPhone
      SET PhoneTypeID = @PhoneTypeID, PhoneNumber = @PhoneNumber
      WHERE CustomerID = @cusID AND PhoneNumber = '@phNum'

    @PhoneTypeID and @PhoneNumber come from two properties; the values are assigned to these properties in the presentation layer from the user input text boxes. The portion that fetches the values:

      objSqlCommand.Parameters.AddWithValue("@" + PropertyItem.Name.ToString(), PropertyItem.GetValue(this, null));

    The code below fills the values of the WHERE clause:

      objSqlCommand.Parameters.AddWithValue("@cusID", cusID);
      objSqlCommand.Parameters.AddWithValue("@phNum", this.PhoneNumber);

    The final statement should look like:

      UPDATE CustomerPhone
      SET PhoneTypeID = 7, PhoneNumber = 999444
      WHERE CustomerID = 500 AND PhoneNumber = '911';

    PhoneTypeID 7 and PhoneNumber 999444 are user values taken from text boxes. The above final update statement works in the SQL environment, but when run via the application, ExecuteNonQuery runs OK and gets 0 rows updated! I wonder why?
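    One concrete thing that stands out (my observation, not confirmed by the poster): @phNum is wrapped in single quotes in the command text, so SQL Server treats '@phNum' as a literal string rather than a parameter - the WHERE clause then matches rows whose phone number is literally the text "@phNum", i.e. none, hence 0 rows updated. Parameter placeholders must appear unquoted (a sketch of the corrected line):

      objSqlCommand.CommandText =
          "UPDATE " + this.TableName + " SET " + SQL +
          " WHERE " + cust.PKName + " = @cusID AND PhoneNumber = @phNum";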

    Read the article

  • How to make item view render rich (html) text in PyQt?

    - by Giorgio Gelardi
    I'm trying to translate the code from this thread into Python:

      import sys
      from PyQt4.QtCore import *
      from PyQt4.QtGui import *

      __data__ = [
          "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.",
          "Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.",
          "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.",
          "Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
      ]

      def get_html_box(text):
          return '''<table border="0" width="100%"><tr width="100%" valign="top">
          <td width="1%"><img src="softwarecenter.png"/></td>
          <td><table border="0" width="100%" height="100%">
          <tr><td><b><a href="http://www.google.com">titolo</a></b></td></tr>
          <tr><td>{0}</td></tr><tr><td align="right">88/88/8888, 88:88</td></tr>
          </table></td></tr></table>'''.format(text)

      class HTMLDelegate(QStyledItemDelegate):
          def paint(self, painter, option, index):
              model = index.model()
              record = model.listdata[index.row()]
              doc = QTextDocument(self)
              doc.setHtml(get_html_box(record))
              doc.setTextWidth(option.rect.width())
              painter.save()
              ctx = QAbstractTextDocumentLayout.PaintContext()
              ctx.clip = QRectF(0, option.rect.top(), option.rect.width(), option.rect.height())
              dl = doc.documentLayout()
              dl.draw(painter, ctx)
              painter.restore()

          def sizeHint(self, option, index):
              model = index.model()
              record = model.listdata[index.row()]
              doc = QTextDocument(self)
              doc.setHtml(get_html_box(record))
              doc.setTextWidth(option.rect.width())
              return QSize(doc.idealWidth(), doc.size().height())

      class MyListModel(QAbstractListModel):
          def __init__(self, parent=None, *args):
              super(MyListModel, self).__init__(parent, *args)
              self.listdata = __data__

          def rowCount(self, parent=QModelIndex()):
              return len(self.listdata)

          def data(self, index, role=Qt.DisplayRole):
              return index.isValid() and QVariant(self.listdata[index.row()]) or QVariant()

      class MyWindow(QWidget):
          def __init__(self, *args):
              super(MyWindow, self).__init__(*args)
              # listview
              self.lv = QListView()
              self.lv.setModel(MyListModel(self))
              self.lv.setItemDelegate(HTMLDelegate(self))
              self.lv.setResizeMode(QListView.Adjust)
              # layout
              layout = QVBoxLayout()
              layout.addWidget(self.lv)
              self.setLayout(layout)

      if __name__ == "__main__":
          app = QApplication(sys.argv)
          w = MyWindow()
          w.show()
          sys.exit(app.exec_())

    The elements' size and position are not calculated correctly, I guess - perhaps because I haven't fully understood the style-related parts of the original code. Can someone help me?
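    A hedged pointer at where this usually goes wrong (my guess from reading the delegate, not a verified fix): paint() never moves the painter to the item's rectangle - it only sets a clip whose x is 0 - so every row draws at the view's origin; and sizeHint() reads option.rect.width(), which can be 0 the first time the view asks. A common variant of the paint method:

      def paint(self, painter, option, index):
          record = index.model().listdata[index.row()]
          doc = QTextDocument(self)
          doc.setHtml(get_html_box(record))
          doc.setTextWidth(option.rect.width())
          painter.save()
          painter.translate(option.rect.topLeft())  # draw in this item's own rect
          ctx = QAbstractTextDocumentLayout.PaintContext()
          ctx.clip = QRectF(0, 0, option.rect.width(), option.rect.height())
          doc.documentLayout().draw(painter, ctx)
          painter.restore()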

    Read the article

  • Linux termios VTIME not working?

    - by San Jacinto
    We've been bashing our heads off of this one all morning. We've got some serial lines set up between an embedded Linux device and an Ubuntu box. Our reads are getting screwed up because our code usually returns two (sometimes more, sometimes exactly one) message reads instead of one message read per actual message sent. Here is the code that opens the serial port. InterCharTime is set to 4.

      void COMBaseClass::OpenPort()
      {
          cerr << "opening port " << port << "\n";

          struct termios newtio;
          this->fd = -1;
          int fdTemp;

          fdTemp = open(port, O_RDWR | O_NOCTTY);
          if (fdTemp < 0)
          {
              portOpen = 0;
              cerr << "problem opening " << port << ". Retrying" << endl;
              usleep(1000000);
              return;
          }

          newtio.c_cflag = BaudRate | CS8 | CLOCAL | CREAD; // | StopBits;
          newtio.c_iflag = IGNPAR;
          newtio.c_oflag = 0;

          /* set input mode (non-canonical, no echo, ...) */
          newtio.c_lflag = 0;
          newtio.c_cc[VTIME] = InterCharTime; /* inter-character timer in .1 secs */
          newtio.c_cc[VMIN] = readBufferSize; /* blocking read until 1 char received */

          tcflush(fdTemp, TCIFLUSH);
          tcsetattr(fdTemp, TCSANOW, &newtio);
          this->fd = fdTemp;
          portOpen = 1;
      }

    The other end is configured similarly for communication, and has one small section of particular interest:

      while (1)
      {
          sprintf(out, "\r\nHello world %lu", ++ulCount);
          puts(out);
          WritePort((BYTE *)out, strlen(out) + 1);
          sleep(2);
      } // while

    Now, when I run a read thread on the receiving machine, "hello world" is usually broken up over a couple of messages. Here is some sample output:

      1: Hello
      2: world 1
      3: Hello
      4: world 2
      5: Hello
      6: world 3

    where a number followed by a colon is one message received. Can you see any error we are making? Thank you.

    Edit: For clarity, please see section 3.2 of this resource: http://www.faqs.org/docs/Linux-HOWTO/Serial-Programming-HOWTO.html. To my understanding, with a VTIME of a couple of seconds (meaning VTIME is set anywhere between 10 and 50, trial-and-error) and a VMIN of 1, there should be no reason that the message is broken up over two separate messages.
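    For what it's worth (my reading of the termios semantics, not from the thread): a serial line is a pure byte stream with no message boundaries, so read() may legally return any prefix of what was sent - VMIN/VTIME only control when read() wakes up, and the inter-character timer starts after the first byte arrives. If one write() must equal one message, the receiver has to frame messages itself, e.g. by accumulating until the '\0' terminator this sender already appends (a sketch):

      /* accumulate bytes until the NUL terminator that the sender includes */
      char msg[256];
      size_t used = 0;
      while (used < sizeof(msg) - 1) {
          ssize_t n = read(fd, msg + used, sizeof(msg) - 1 - used);
          if (n <= 0)
              break;                      /* error or timeout */
          used += (size_t)n;
          if (memchr(msg, '\0', used))    /* complete message received */
              break;
      }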

    Read the article

  • How to close all, or only some, tabs in Safari using AppleScript?

    - by Form
    I have made a very simple AppleScript to close all tabs in Safari. The problem is, it works, but not completely - only a couple of tabs are closed. Here's the code:

      tell application "Safari"
          repeat with aWindow in windows
              repeat with aTab in tabs of aWindow
                  if [some condition is encountered] then
                      aTab close
                  end if
              end repeat
          end repeat
      end tell

    I've also tried this script:

      tell application "Safari"
          repeat with i from 0 to the number of items in windows
              set aWindow to item i of windows
              repeat with j from 0 to the number of tabs in aWindow
                  set aTab to item j of tabs of aWindow
                  if [some condition is encountered] then
                      aTab close
                  end if
              end repeat
          end repeat
      end tell

    ... but it does not work either (same behavior). I tried that on my system (MacBook Pro, Jan 2008) as well as on a Mac Pro G5 under Tiger, and the script fails on both, albeit with a much less descriptive error on Tiger. Running the script a few times closes a few tabs each time until none are left, but it always fails with the same error after closing a few tabs. Under Leopard I get an out-of-bounds error. Since I am using fast enumeration (not "repeat from 0 to number of items in windows"), I don't see how I can get an out-of-bounds error with this...

    My goal is to use the Cocoa Scripting Bridge to close tabs in Safari from my Objective-C Cocoa application, but the Scripting Bridge fails in the same manner. The non-deletable tabs show as NULL in the Xcode debugger, while the other tabs are valid objects from which I can get values back (such as their title). In fact I tried with the Scripting Bridge first, then told myself why not try this directly in AppleScript, and I was surprised to see the same results. I must have a glaring omission or something in there... (Seems like a bug in Safari AppleScript support to me... :S) I've used repeat loops and Obj-C 2.0 fast enumeration to iterate through collections before with zero problems, so I really don't see what's wrong here. Can anyone help? Thanks in advance!
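    Two hedged observations (mine, not from the post): AppleScript lists are 1-based, so "repeat with i from 0 to ..." starts out of range, and closing a tab renumbers every tab after it, which invalidates any forward iteration - that pattern produces exactly the "works for a few, then out of bounds" symptom. Iterating from the end avoids both issues (a sketch):

      tell application "Safari"
          repeat with w from (count of windows) to 1 by -1
              tell window w
                  repeat with t from (count of tabs) to 1 by -1
                      if [some condition is encountered] then
                          close tab t
                      end if
                  end repeat
              end tell
          end repeat
      end tell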

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this by abstracting all possibly confidential info: I've been put in charge of a 10-year-old transactional system of which the majority of the business logic is implemented at database level (triggers, stored procedures etc.). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered. :(

    The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters; let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for).

    Performance of the system has dropped considerably with 10 years of data growth. The reason is the deadlocks, and specifically the deadlock wait times. The deadlock is on the Employee table. I have discovered:

    1. In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask).
    2. The select is done on a field that is NOT the primary key.
    3. No index exists on this field. Thus a table scan is performed. 7 times. Per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). There was an immediate improvement in the test environment.

    The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements/extra wait time, etc.) in creating this index on the production database while the transactions are still taking place? (Although I can pick a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (I'm not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)
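    For concreteness, the operation in question is just the statement below (table and column names made up). One factual caveat worth weighing: building a clustered index physically rewrites the whole table (and rebuilds its nonclustered indexes), and SQL Server 2000 has no ONLINE index option - that arrived in 2005 - so the table is effectively locked against the transaction workload for the duration of the build:

      CREATE CLUSTERED INDEX IX_Employee_EmpNum
          ON dbo.Employee (EmpNum);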

    Read the article

  • Derived template override return type of member function C++

    - by Ruud v A
    I am writing matrix classes. Take a look at this definition:

      template <typename T, unsigned int dimension_x, unsigned int dimension_y>
      class generic_matrix {
          ...
          generic_matrix<T, dimension_x - 1, dimension_y - 1> minor(unsigned int x, unsigned int y) const {
              ...
          }
          ...
      };

      template <typename T, unsigned int dimension>
      class generic_square_matrix : public generic_matrix<T, dimension, dimension> {
          ...
          generic_square_matrix(const generic_matrix<T, dimension, dimension>& other) {
              ...
          }
          ...
          void foo();
      };

    The generic_square_matrix class provides additional functions like matrix multiplication. Doing this is no problem:

      generic_square_matrix<T, 4> m = generic_matrix<T, 4, 4>();

    It is possible to assign any square matrix to m, even though the type is not generic_square_matrix, thanks to the constructor. This is possible because the data does not change across children, only the supported functions. This is also possible:

      generic_square_matrix<T, 4> m = generic_square_matrix<T, 5>().minor(1,1);

    The same conversion applies here. But now comes the problem:

      generic_square_matrix<T, 4>().minor(1,1).foo(); // problem: foo is not in generic_matrix<T, 3, 3>

    To solve this I would like generic_square_matrix::minor to return a generic_square_matrix instead of a generic_matrix. The only possible way to do this, I think, is to use template specialisation. But since a specialisation is basically treated like a separate class, I would have to redefine all the functions. I cannot call the functions of the non-specialised class as you would do with a derived class, so I would have to copy the entire function. This is not a very nice generic-programming solution, and a lot of work.

    C++ almost has a solution for my problem: a virtual function of a derived class can return a pointer or reference to a different class than the base class returns, if that class is derived from the class that the base class returns. generic_square_matrix is derived from generic_matrix, but the function returns neither a pointer nor a reference, so this doesn't apply here.

    Is there a solution to this problem (possibly involving an entirely different structure; my only requirements are that the dimensions are a template parameter and that square matrices can have additional functionality)?

    Thanks in advance, Ruud
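    One hedged way out that avoids specialisation entirely: since covariant return types indeed only apply to virtual functions returning pointers or references, the derived class can simply shadow minor() with its own version that converts the base result through the converting constructor that already exists (a sketch):

      template <typename T, unsigned int dimension>
      class generic_square_matrix : public generic_matrix<T, dimension, dimension> {
      public:
          // shadows (does not override) the base minor(); same call syntax
          generic_square_matrix<T, dimension - 1> minor(unsigned int x, unsigned int y) const {
              return generic_square_matrix<T, dimension - 1>(
                  generic_matrix<T, dimension, dimension>::minor(x, y));
          }
      };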

    Read the article

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2, but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases, but that's beside the point here, I suspect.)

    According to the specs, there are only two instances when glDrawArrays() fails with GL_INVALID_OPERATION:

    - "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" - I'm not doing any buffer mapping at this point.
    - "if a geometry shader is active and mode is incompatible with [...]" - nope, no geometry shaders as of now.

    Furthermore:

    - I have verified and double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too.
    - This happens across 3 different nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit - of course, in OSX we have only the 3.2 context, no 4.2 anyway).
    - It does not happen with an integrated "Intel HD" GPU, but for that one I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW fails the window creation, but that's an entirely different issue...)

    For what it's worth, here's the relevant routine excerpted from the render loop, in Golang:

      func (me *TMesh) render() {
          curMesh = me
          curTechnique.OnRenderMesh()
          gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
          if me.glElemBuf > 0 {
              gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
              gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
              gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
              gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
          } else {
              gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
              /* BOOM! */
              gl.DrawArrays(me.glMode, 0, me.glNumVerts)
          }
          gl.BindBuffer(gl.ARRAY_BUFFER, 0)
      }

    So of course this is part of a bigger render loop, though the whole *TMesh construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly, with no errors reported when GL is queried for errors, under both 3.3 and 4.2 - yet on 3 nvidia GPUs with an explicit 3.2 core profile it fails with an error code that, according to the spec, is only raised in two specific situations, neither of which as far as I can tell applies here. What could be wrong here? Have you ever run into this? Any ideas what I have been missing?
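    A hedged hunch worth testing (not something stated in the question): strict core profiles require a vertex array object to be bound - drawing with the default VAO 0 raises exactly GL_INVALID_OPERATION, while implicitly obtained contexts are often compatibility-flavoured and tolerate it. Generating one VAO at startup would confirm or rule this out (sketch in the same Go binding; the exact wrapper names may differ):

      // once, at initialization - a core profile mandates a non-zero bound VAO
      var vao gl.Uint
      gl.GenVertexArrays(1, &vao)
      gl.BindVertexArray(vao)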

    Read the article

  • Is this a problem typically solved with IOC?

    - by Dirk
    My current application allows users to define custom web forms through a set of admin screens; it's essentially an EAV-type application. As such, I can't hard-code HTML or ASP.NET markup to render a given page. Instead, the UI requests an instance of a Form object from the service layer, which in turn constructs one using several RDBMS tables. Form contains the kind of classes you would expect to see in such a context:

      Form = IEnumerable<FormSections> = IEnumerable<FormFields>

    Here's what the service layer looks like:

      public class MyFormService : IFormService {
          public Form OpenForm(int formId) {
              // construct and return a concrete implementation of Form
          }
      }

    Everything works splendidly (for a while). The UI is none the wiser about what sections/fields exist in a given form: it happily renders the Form object it receives into a functional ASP.NET page. A few weeks later, I get a new requirement from the business: when viewing a non-editable (i.e. read-only) version of a form, certain field values should be merged together and other contrived/calculated fields should be added. No problem, I say. Simply amend my service class so that its methods are more explicit:

      public class MyFormService : IFormService {
          public Form OpenFormForEditing(int formId) {
              // construct and return a concrete implementation of Form
          }

          public Form OpenFormForViewing(int formId) {
              // construct a concrete implementation of Form
              // apply additional transformations to the form
          }
      }

    Again everything works great and balance has been restored to the force. The UI continues to be agnostic as to what is in the Form, and our separation of concerns is achieved. Only a few short weeks later, however, the business puts out a new requirement: in certain scenarios, we should apply only some of the form transformations I referenced above. At this point, it feels like the "explicit method" approach has reached a dead end, unless I want to end up with an explosion of methods (OpenFormViewingScenario1, OpenFormViewingScenario2, etc.). Instead, I introduce another level of indirection:

      public interface IFormViewCreator {
          Form CreateView(Form form);
      }

      public class MyFormService : IFormService {
          public Form OpenFormForEditing(int formId) {
              // construct and return a concrete implementation of Form
          }

          public Form OpenFormForViewing(int formId, IFormViewCreator formViewCreator) {
              // construct a concrete implementation of Form
              // apply transformations to the dynamic field list
              return formViewCreator.CreateView(form);
          }
      }

    On the surface, this seems like an acceptable approach, and yet there is a certain smell: namely, the UI, which had been living in ignorant bliss about the implementation details of OpenFormForViewing, must now possess knowledge of and create an instance of IFormViewCreator.

    My questions are twofold:

    1. Is there a better way to achieve the composability I'm after (perhaps by using an IoC container or a home-rolled factory to create the concrete IFormViewCreator)?
    2. Did I fundamentally screw up the abstraction here?
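    On question 1, one hedged shape for keeping the UI ignorant: have the caller pass an intent (a scenario identifier), and let the service resolve the matching IFormViewCreator from a factory or IoC container it owns (all names below are made up for illustration):

      public enum ViewScenario { Standard, Merged, Calculated }

      public interface IFormViewCreatorFactory {
          IFormViewCreator CreateFor(ViewScenario scenario);
      }

      public class MyFormService : IFormService {
          private readonly IFormViewCreatorFactory _factory;

          public MyFormService(IFormViewCreatorFactory factory) {
              _factory = factory; // injected by the container at the composition root
          }

          public Form OpenFormForViewing(int formId, ViewScenario scenario) {
              Form form = BuildForm(formId); // hypothetical shared builder
              return _factory.CreateFor(scenario).CreateView(form);
          }
      }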

    Read the article

  • Java Flow Control Problem

    - by Kyle_Solo
    I am programming a simple 2D game engine. I've decided how I'd like the engine to function: it will be composed of objects containing "events" that my main game loop will trigger when appropriate. A little more about the structure: every GameObject has an updateEvent method, and objectList is a list of all the objects that will receive update events. Only objects on this list have their updateEvent method called by the game loop. I'm trying to implement this method in the GameObject class (this specification is what I'd like the method to achieve):

      /**
       * This method removes a GameObject from objectList. The GameObject
       * should immediately stop executing code, that is, absolutely no more
       * code inside update events will be executed for the removed game object.
       * If necessary, control should transfer to the game loop.
       * @param go The GameObject to be removed
       */
      public void remove(GameObject go)

    So if an object tries to remove itself inside of an update event, control should transfer back to the game engine:

      public void updateEvent() {
          // object's update event
          remove(this);
          System.out.println("Should never reach here!");
      }

    Here's what I have so far. It works, but the more I read about using exceptions for flow control, the less I like it, so I want to see if there are alternatives.

    Remove method:

      public void remove(GameObject go) {
          // add to removedList
          // flag as removed
          // throw an exception if removing self from inside an updateEvent
      }

    Game loop:

      for (GameObject go : objectList) {
          try {
              if (!go.removed) {
                  go.updateEvent();
              } else {
                  // object is scheduled to be removed, do nothing
              }
          } catch (ObjectRemovedException e) {
              // control has been transferred back to the game loop
              // no need to do anything here
          }
      }
      // now remove the objects that are in removedList from objectList

    Two questions:

    1. Am I correct in assuming that the only way to implement the stop-right-away part of the remove method as described above is by throwing a custom exception and catching it in the game loop? (I know, using exceptions for flow control is like goto, which is bad. I just can't think of another way to do what I want!)
    2. For the removal from the list itself, it is possible for one object to remove one that is farther down the list. Currently I'm checking a removed flag before executing any code, and at the end of each pass removing the objects to avoid concurrent modification. Is there a better, preferably instant/non-polling way to do this?
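    On question 2, a hedged sketch of the usual batched variant (not the poster's code): keep the removed set separate for an O(1) membership check, then prune once per pass - this also sidesteps ConcurrentModificationException, since objectList is never mutated mid-iteration:

      List<GameObject> objectList = new ArrayList<GameObject>();
      Set<GameObject> removedSet = new HashSet<GameObject>();

      void runOnePass() {
          for (GameObject go : objectList) {
              if (!removedSet.contains(go)) {
                  try {
                      go.updateEvent();
                  } catch (ObjectRemovedException e) {
                      // the object aborted itself mid-event; nothing more to do
                  }
              }
          }
          objectList.removeAll(removedSet); // one batched removal per pass
          removedSet.clear();
      }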

    Read the article

  • Large transactions causing "System.Data.SqlClient.SqlException: Timeout expired" error?

    - by Michael
    My application requires a user to log in and allows them to edit a list of things. However, it seems that if the same user always logs in and out and edits the list, this user will run into a "System.Data.SqlClient.SqlException: Timeout expired" error. I've read comments about increasing the timeout period, but I've also read a comment about it possibly being caused by uncommitted transactions. And I do have one going in the application. I'll provide the code I'm working with; there is an If statement in there that I was a little iffy about, but it seemed like a reasonable thing to do.

    I'll just go over what's going on here: a list of objects to update or add to the database is passed in. New objects created in the application are given an ID of 0, while existing objects have their own IDs generated from the DB. If the user chooses to delete some objects, their IDs are stored in a separate list of Integers. Once the user is ready to save their changes, the two lists are passed into this method. By use of the If statement, objects with an ID of 0 are added (using the Add stored procedure) and those objects with non-zero IDs are updated (using the Update stored procedure). After all this, a For Each loop goes through all the integers in the removal list and uses the Delete stored procedure to remove them. A transaction is used for all of this.

      Public Shared Sub UpdateSomethings(ByVal SomethingList As List(Of Something), ByVal RemovalList As List(Of Integer))
          Using DBConnection As New SqlConnection(conn)
              DBConnection.Open()
              Dim MyTransaction As SqlTransaction
              MyTransaction = DBConnection.BeginTransaction()
              Try
                  For Each SomethingItem As Something In SomethingList
                      Using MyCommand As New SqlCommand()
                          MyCommand.Connection = DBConnection
                          If SomethingItem.ID > 0 Then
                              MyCommand.CommandText = "UpdateSomething"
                          Else
                              MyCommand.CommandText = "AddSomething"
                          End If
                          MyCommand.Transaction = MyTransaction
                          MyCommand.CommandType = CommandType.StoredProcedure
                          With MyCommand.Parameters
                              If MyCommand.CommandText = "UpdateSomething" Then
                                  .Add("@id", SqlDbType.Int).Value = SomethingItem.ID
                              End If
                              .Add("@stuff", SqlDbType.VarChar).Value = SomethingItem.Stuff
                          End With
                          MyCommand.ExecuteNonQuery()
                      End Using
                  Next
                  For Each ID As Integer In RemovalList
                      Using MyCommand As New SqlCommand("DeleteSomething", DBConnection)
                          MyCommand.Transaction = MyTransaction
                          MyCommand.CommandType = CommandType.StoredProcedure
                          With MyCommand.Parameters
                              .Add("@id", SqlDbType.Int).Value = ID
                          End With
                          MyCommand.ExecuteNonQuery()
                      End Using
                  Next
                  MyTransaction.Commit()
              Catch ex As Exception
                  MyTransaction.Rollback()
                  'Exception handling goes here
              End Try
          End Using
      End Sub

    There are three stored procedures used here, as well as some looping, so I can see how something could be holding everything up if the list is large enough. Other users can log in to the system at the same time just fine, though. I'm using Visual Studio 2008 to debug and am using SQL Server 2000 for the DB.
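    Two factual knobs worth knowing while investigating (standard ADO.NET behavior, not something shown in the post): the exception here is the command timeout, which defaults to 30 seconds per SqlCommand, and an uncommitted transaction holds its locks until Commit or Rollback - so anything else touching those rows (including another command on a different connection from the same app) will block and then time out. Raising the timeout is only a band-aid while hunting the blocker, e.g.:

      MyCommand.CommandTimeout = 120 ' seconds; the default is 30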

    Read the article

  • NSString cannot be released

    - by Stanley
    Consider the following method and the caller code block. The method analyses an NSString and extracts an "http://" string, which it passes out by reference as an autoreleased object. Without releasing g_scan_result, the program works as expected. But according to non-ARC rules, g_scan_result should be released, since a retain has been called on it. My questions are:

    1. Why can g_scan_result not be released?
    2. Is there anything wrong with the way g_scan_result is handled in the code below?
    3. Is it safe not to release g_scan_result as long as the program runs correctly and the Xcode memory leak tool does not show leakage?
    4. Which Xcode profile tools should I look into to check, and under which subtitle?

    Hope somebody knowledgeable could help.

      - (long)analyse_scan_result:(NSString *)scan_result target_url:(NSString **)targ_url
      {
          NSLog(@" RES analyse string : %@", scan_result);
          NSRange range = [scan_result rangeOfString:@"http://" options:NSCaseInsensitiveSearch];
          if (range.location == NSNotFound) {
              *targ_url = @"";
              NSLog(@"fnd string not found");
              return 0;
          }
          NSString *sub_string = [scan_result substringFromIndex:range.location];
          range = [sub_string rangeOfString:@" "];
          if (range.location != NSNotFound) {
              sub_string = [sub_string substringToIndex:range.location];
          }
          NSLog(@" FND sub_string = %@", sub_string);
          *targ_url = sub_string;
          return [*targ_url length];
      }

    The following is the caller code block. Note that g_scan_result has been declared and initialized (in another source file) as:

      NSString *g_scan_result = nil;

    Please do send a comment or answer if you have suggestions or find possible errors in the code posted here (or above). The Xcode memory tools do not seem to show any memory leak, but that may be because I do not know where to look, as I am new to the memory tools.

      {
          long url_leng = [self analyse_scan_result:result target_url:&targ_url];
          NSLog(@" TAR target_url = %@", targ_url);
          UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Scanned Result"
                                                          message:result
                                                         delegate:g_alert_view_delegate
                                                cancelButtonTitle:@"OK"
                                                otherButtonTitles:nil];
          if (url_leng) {
              // ****** The 3 commented-off statements
              // ****** cannot be added without causing
              // ****** a crash after a few scan result
              // ****** cycles.
              // ****** NSString *t_url;
              if (g_system_status.language_code == 0)
                  [alert addButtonWithTitle:@"Open"];
              else if (g_system_status.language_code == 1)
                  [alert addButtonWithTitle:@"Abrir"];
              else
                  [alert addButtonWithTitle:@"Open"];
              // ****** t_url = g_scan_result;
              g_scan_result = [targ_url retain];
              // ****** [t_url release];
          }
          targ_url = nil;
          [alert show];
          [alert release];
          [NSTimer scheduledTimerWithTimeInterval:5.0
                                           target:self
                                         selector:@selector(activate_qr_scanner:)
                                         userInfo:nil
                                          repeats:NO];
          return;
      }
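    For reference, the standard non-ARC idiom for replacing a retained global (a sketch of the usual ordering; it also survives the corner case where the old and new values are the same object) is retain-new-then-release-old:

      NSString *newValue = [targ_url retain]; // retain the new value first
      [g_scan_result release];                // then let go of the old one
      g_scan_result = newValue;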

    Read the article
