Search Results

Search found 6342 results on 254 pages for 'behavior'.

Page 232/254 | < Previous Page | 228 229 230 231 232 233 234 235 236 237 238 239  | Next Page >

  • Python form POST using urllib2 (also question on saving/using cookies)

    - by morpheous
    I am trying to write a function to post form data and save returned cookie info in a file so that the next time the page is visited, the cookie information is sent to the server (i.e. normal browser behavior). I wrote this relatively easily in C++ using libcurl, but have spent almost an entire day trying to write this in Python, using urllib2 - and still no success. This is what I have so far: import urllib, urllib2 import logging # the path and filename to save your cookies in COOKIEFILE = 'cookies.lwp' cj = None ClientCookie = None cookielib = None logger = logging.getLogger(__name__) # Let's see if cookielib is available try: import cookielib except ImportError: logger.debug('importing cookielib failed. Trying ClientCookie') try: import ClientCookie except ImportError: logger.debug('ClientCookie isn\'t available either') urlopen = urllib2.urlopen Request = urllib2.Request else: logger.debug('imported ClientCookie successfully') urlopen = ClientCookie.urlopen Request = ClientCookie.Request cj = ClientCookie.LWPCookieJar() else: logger.debug('Successfully imported cookielib') urlopen = urllib2.urlopen Request = urllib2.Request # This is a subclass of FileCookieJar # that has useful load and save methods cj = cookielib.LWPCookieJar() login_params = {'name': 'anon', 'password': 'pass' } def login(theurl, login_params): init_cookies(); data = urllib.urlencode(login_params) txheaders = {'User-agent' : 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'} try: # create a request object req = Request(theurl, data, txheaders) # and open it to return a handle on the url handle = urlopen(req) except IOError, e: log.debug('Failed to open "%s".' % theurl) if hasattr(e, 'code'): log.debug('Failed with error code - %s.' % e.code) elif hasattr(e, 'reason'): log.debug("The error object has the following 'reason' attribute :"+e.reason) sys.exit() else: if cj is None: log.debug('We don\'t have a cookie library available - sorry.') else: print 'These are the cookies we have received so far :' for index, cookie in enumerate(cj): print index, ' : ', cookie # save the cookies again cj.save(COOKIEFILE) #return the data return handle.read() # FIXME: I need to fix this so that it takes into account any cookie data we may have stored def get_page(*args, **query): if len(args) != 1: raise ValueError( "post_page() takes exactly 1 argument (%d given)" % len(args) ) url = args[0] query = urllib.urlencode(list(query.iteritems())) if not url.endswith('/') and query: url += '/' if query: url += "?" + query resource = urllib.urlopen(url) logger.debug('GET url "%s" => "%s", code %d' % (url, resource.url, resource.code)) return resource.read() When I attempt to log in, I pass the correct username and password, yet the login fails and no cookie data is saved. My two questions are: can anyone see what's wrong with the login() function, and how may I fix it? How may I modify the get_page() function to make use of any cookie info I have saved?
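
    One likely culprit, sketched below as an assumption rather than a verified answer: login() saves cookies into the jar, but get_page() calls urllib.urlopen(), which knows nothing about that jar. Building a single urllib2 opener around the jar and reusing it for every request addresses both questions at once (Python 2 idioms, to match the code above):

        import urllib, urllib2, cookielib

        COOKIEFILE = 'cookies.lwp'
        cj = cookielib.LWPCookieJar(COOKIEFILE)
        try:
            cj.load()                # reuse cookies saved by an earlier run
        except IOError:
            pass                     # no cookie file yet -- first run
        # one opener, shared by every request, carries the cookie jar
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

        def login(theurl, login_params):
            data = urllib.urlencode(login_params)
            handle = opener.open(urllib2.Request(theurl, data))
            cj.save(COOKIEFILE)      # persist whatever Set-Cookie arrived
            return handle.read()

        def get_page(url):
            # the same opener sends the saved cookies automatically
            return opener.open(url).read()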

    Read the article

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo: GET http://api.example.com/v1/item/foo New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item: POST http://api.example.com/v1/item/ hello=world&hokey=pokey With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here. Case 1: the item doesn't exist; it's created. This case is easy. 201 Created Location: http://api.example.com/v1/item/bar Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? Location: http://api.example.com/v1/item/foo I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case: The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary. The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers. It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is. At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
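
    For illustration only, here is how the 303 branch might look in a toy Python service (Flask is used purely as scaffolding; the route, store, and ID scheme are invented for the sketch):

        from flask import Flask, request

        app = Flask(__name__)
        items = {}   # frozen form data -> item id (toy in-memory store)

        @app.route('/v1/item/', methods=['POST'])
        def create_item():
            key = tuple(sorted(request.form.items()))
            if key in items:
                # already exists: 303 says "the result of this POST lives
                # over there -- fetch it with GET"
                return '', 303, {'Location': '/v1/item/%s' % items[key]}
            items[key] = 'item%d' % (len(items) + 1)
            return '', 201, {'Location': '/v1/item/%s' % items[key]}

    Swapping 303 for 301 in that branch is a one-line change if the "permanent" property ends up mattering more than the method semantics.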

    Read the article

  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class as long as the class is alive. For illustration purposes, it might look like this: // .h @interface MyClock : NSObject { NSTimer* _myTimer; } - (void)timerTick; @end - // .m @implementation MyClock - (id)init { self = [super init]; if (self) { _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(timerTick) userInfo:nil repeats:YES] retain]; } return self; } - (void)dealloc { [_myTimer invalidate]; [_myTimer release]; [super dealloc]; } - (void)timerTick { // Do something fantastic. } @end That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer, I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it is keeping the class from being dealloc'd by normal memory management methods because the timer has retained it. Manually adjusting the retain count is out of the question. This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count-system for keeping the timer running but, ...seriously? This is way too much work for such a simple concept. Any solution I can think of feels hacky. I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.
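
    A non-retaining wrapper is indeed the standard escape: the timer holds only a weak reference to its target, so the owner's lifetime stays in the owner's hands. The shape of that pattern, sketched here in Python rather than Objective-C since the idea is language-independent (class and method names are illustrative):

        import threading, weakref

        class Clock(object):
            def __init__(self):
                ref = weakref.ref(self)    # timer sees a weak ref, never self
                def tick():
                    obj = ref()
                    if obj is None:        # owner was collected: stop firing
                        return
                    obj.timer_tick()
                    obj._start(tick)       # reschedule, still via the weak ref
                self._start(tick)

            def _start(self, fn):
                self._timer = threading.Timer(1.0, fn)
                self._timer.daemon = True
                self._timer.start()

            def timer_tick(self):
                print('tick')              # do something fantastic

            def __del__(self):
                self._timer.cancel()       # plays the role of -invalidate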

    Read the article

  • iPhone: value of selectedIndex for tab should be consistent, but isn't

    - by Janine
    This should be so simple... but something screwy is happening. My setup looks like this: MainViewController Tab Bar Controller 4 tabs, each of which loads WebViewController My AppDelegate contains an ivar, tabBarController, which is connected to the tab bar controller (this was all set up in Interface Builder). The leftmost tab is marked "selected" in IB. Within the viewWillAppear method in WebViewController, I need to know which tab was just selected so I can load the correct URL. I do this by switching on appDelegate.tabBarController.selectedIndex. When the app first runs and the leftmost tab is selected, selectedIndex is a large garbage value. After that, I get values from 0 to 3, which is as it should be, but they are in random order. Not only that, but each tab I touch reports a different value each time. This app is extremely simple right now and I can't imagine what I could have done to make things go this wrong. Has anyone seen (and hopefully solved) this behavior? Update: we have a request for code. There's not much to see. The tab bar controller gets loaded in applicationDidFinishLaunching: [self.mainViewController view]; //force nib to load [self.window addSubview:self.mainViewController.tabBarController.view]; There is currently no code whatsoever in MainViewController.m other than the synthesize and release for tabBarController. From WebViewController.m: - (void)viewWillAppear:(BOOL)_animation { [super viewWillAppear:_animation]; NSURL *url; switch([S_UIDelegate mainViewController].tabBarController.selectedIndex) { case 0: url = [NSURL URLWithString:@"http://www.cnn.com"]; break; case 1: url = [NSURL URLWithString:@"http://www.facebook.com"]; break; case 2: url = [NSURL URLWithString:@"http://www.twitter.com"]; break; case 3: url = [NSURL URLWithString:@"http://www.google.com"]; break; default: url = [NSURL URLWithString:@"http://www.msnbc.com"]; } NSURLRequest *request = [NSURLRequest requestWithURL:url]; [webView loadRequest:request]; } This is where I'm seeing the random values.

    Read the article

  • CakePHP Multiple Nested Joins

    - by Paul
    I have an App in which several of the models are linked by hasMany/belongsTo associations. So for instance, A hasMany B, B hasMany C, C hasMany D, and D hasMany E. Also, E belongs to D, D belongs to C, C belongs to B, and B belongs to A. Using the Containable behavior has been great for controlling the amount of information that comes back with each query, but I seem to be having a problem when trying to get data from table A while using a condition that involves table D. For instance, here is an example of my 'A' model: class A extends AppModel { var $name = 'A'; var $hasMany = array( 'B' => array('dependent' => true) ); function findDependentOnE($condition) { return $this->find('all', array( 'contain' => array( 'B' => array( 'C' => array( 'D' => array( 'E' => array( 'conditions' => array( 'E.myfield' => $some_value ) ) ) ) ) ) )); } } This still gives me back all the records in 'A', and if its related 'E' records don't satisfy the condition, then I just get this: Array( [0] => array( [A] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [B] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [C] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [D] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [E] => array( // empty if 'E.myfield' != $some_value' ) ), [1] => array( // ...etc ) ) If 'E.myfield' != $some_value, I don't want the record returned at all. I hope this expresses my problem clearly enough... Basically, I want the following query, but in a database-agnostic/CakePHP-y kind of way: SELECT * FROM A INNER JOIN (B INNER JOIN (C INNER JOIN (D INNER JOIN E ON D.id=E.d_id) ON C.id=D.c_id) ON B.id=C.b_id) ON A.id=B.a_id WHERE E.myfield = $some_value

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
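
    The override is easy to see in isolation. Below is a minimal repro using Python's sqlite3 module (table, view, and trigger names here are throwaway stand-ins, not the schema above): the outer REPLACE policy hijacks the trigger's IGNORE, so the unique row is deleted and re-created under a fresh rowid.

        import sqlite3

        con = sqlite3.connect(':memory:')
        con.executescript("""
            CREATE TABLE pool (id INTEGER PRIMARY KEY AUTOINCREMENT,
                               addr TEXT UNIQUE);
            CREATE VIEW v AS SELECT addr FROM pool;
            CREATE TRIGGER v_ins INSTEAD OF INSERT ON v BEGIN
                INSERT OR IGNORE INTO pool(addr) VALUES (NEW.addr);
            END;
        """)
        con.execute("INSERT INTO v VALUES ('alice')")
        print(con.execute("SELECT id FROM pool").fetchone())   # (1,)
        # the outer REPLACE policy overrides the trigger's IGNORE on conflict
        con.execute("INSERT OR REPLACE INTO v VALUES ('alice')")
        print(con.execute("SELECT id FROM pool").fetchone())   # (2,) -- new id

    One possible workaround, untested here: replace each INSERT OR IGNORE in the trigger with INSERT ... SELECT ... WHERE NOT EXISTS(...), so the lookup insert never raises a conflict and there is nothing for the outer policy to rewrite.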

    Read the article

  • Symfony2 - PdfBundle not working

    - by ElPiter
    Using Symfony2 and PdfBundle to dynamically generate PDF files, I can't actually get any PDF generated. Following the documentation's instructions, I have set up the whole bundle: autoload.php: 'Ps' => __DIR__.'/../vendor/bundles', 'PHPPdf' => __DIR__.'/../vendor/PHPPdf/lib', 'Imagine' => array(__DIR__.'/../vendor/PHPPdf/lib', __DIR__.'/../vendor/PHPPdf/lib/vendor/Imagine/lib'), 'Zend' => __DIR__.'/../vendor/PHPPdf/lib/vendor/Zend/library', 'ZendPdf' => __DIR__.'/../vendor/PHPPdf/lib/vendor/ZendPdf/library', AppKernel.php: ... new Ps\PdfBundle\PsPdfBundle(), ... I guess everything is correctly configured, as I am not getting any "library not found" error or anything along those lines... So, after all that, I am doing this in the controller: ... use Ps\PdfBundle\Annotation\Pdf; ... /** * @Pdf() * @Route ("/pdf", name="_pdf") * @Template() */ public function generateInvoicePDFAction($name = 'Pedro') { return $this->render('AcmeStoreBundle:Shop:generateInvoice.pdf.twig', array( 'name' => $name, )); } And having this twig file: <pdf> <dynamic-page> Hello {{ name }}! </dynamic-page> </pdf> Somehow, what I get is just the normal HTML, as if it were a normal Response rendering. The Pdf() annotation is supposed to provide the "special" behavior of creating the PDF file instead of rendering normal HTML. So, having the above code, when I request the route http://www.mysite.com/*...*/pdf, all I get is the following HTML rendered: <pdf> <dynamic-page> Hello Pedro! </dynamic-page> </pdf> (so a blank HTML page with just the words Hello Pedro! on it). Any clue? Am I doing anything wrong? Is it mandatory to have the alternative *.html.twig apart from the *.pdf.twig version? I don't think so... :(

    Read the article

  • Perl newbie: get a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language: I am trying to have a debug_mode switch from the CLI which can be used to control how my script works, by switching certain subroutines "on and off". And below is what I've got so far: #!/usr/bin/perl -s -w # purpose : make subroutine execution optional, # which depends on a CLI switch flag use strict; use warnings; use constant DEBUG_VERBOSE => "v"; use constant DEBUG_SUPPRESS_ERROR_MSGS => "s"; use constant DEBUG_IGNORE_VALIDATION => "i"; use constant DEBUG_STEPPING_COMPUTATION => "c"; our ($debug_mode); mainMethod(); sub mainMethod # () { if(!$debug_mode) { print "debug_mode is OFF\n"; } elsif($debug_mode) { print "debug_mode is ON\n"; } else { print "OMG!\n"; exit -1; } checkArgv(); printErrorMsg("Error_Code_123", "Parsing Error at..."); verbose(); } sub checkArgv #() { print ("Number of ARGV : ".(1 + $#ARGV)."\n"); } sub printErrorMsg # ($error_code, $error_msg, ..) { if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) { print "You can only see me if -debug_mode is NOT set". " to DEBUG_SUPPRESS_ERROR_MSGS\n"; die("terminated prematurely...\n") and exit -1; } } sub verbose # () { if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) { print "Blah blah blah...\n"; } } So far as I can tell, at least it works...: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work: ./optional.pl ./optional.pl -debug_mode ./optional.pl -debug_mode=v ./optional.pl -debug_mode=s However, I am puzzled when multiple debug_modes are "mixed", such as: ./optional.pl -debug_mode=sv ./optional.pl -debug_mode=vs I don't understand why the above lines of code "magically work". I see both "DEBUG_VERBOSE" and "DEBUG_SUPPRESS_ERROR_MSGS" apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debug_modes"? Also, I am not certain if my approach is good enough for Perlists, and I hope I am getting my feet in the right direction. My biggest problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a truly minimal solution that doesn't depend on any module other than the "default". And I cannot have any control over the environment where this script will be executed... Many thanks in advance for any suggestions.

    Read the article

  • Problems using wxWidgets (wxMSW) within multiple DLL instances

    Preface I'm developing VST-plugins which are DLL-based software modules and loaded by VST-supporting host applications. To open a VST-plugin the host application loads the VST-DLL and calls an appropriate function of the plugin while providing a native window handle, which the plugin can use to draw its GUI. I managed to port my original VSTGUI code to the wxWidgets-Framework and now all my plugins run under wxMSW and wxMac, but I still have problems under wxMSW finding a correct way to open and close the plugins, and I am not sure if this is a wxMSW-only issue. Problem If I use any VST-host application I can open and close multiple instances of one of my VST-plugins without any problems. As soon as I open another of my VST-plugins besides my first VST-plugin and then close all instances of my first VST-plugin, the application crashes after a short amount of time within the wxEvtHandler::ProcessEvent function, telling me that the wxTheApp object isn't valid any longer during execution of wxTheApp->FilterEvent (see below). So it seems that the wxTheApp object was deleted after closing all instances of the first plugin and is no longer available for the second plugin. bool wxEvtHandler::ProcessEvent(wxEvent& event) { // allow the application to hook into event processing if ( wxTheApp ) { int rc = wxTheApp->FilterEvent(event); if ( rc != -1 ) { wxASSERT_MSG( rc == 1 || rc == 0, _T("unexpected wxApp::FilterEvent return value") ); return rc != 0; } //else: proceed normally } .... } Preconditions 1.) All my VST-plugins are dynamically linked against the C-Runtime and wxWidgets libraries. According to the wxWidgets forum, this seemed to be the best way to run multiple instances of the software side by side. 2.) The DllMain of each VST-Plugin is defined as follows: // WXW #include "wx/app.h" #include "wx/defs.h" #include "wx/gdicmn.h" #include "wx/image.h" #ifdef __WXMSW__ #include <windows.h> #include "wx/msw/winundef.h" BOOL APIENTRY DllMain ( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { switch (ul_reason_for_call) { case DLL_PROCESS_ATTACH: { wxInitialize(); ::wxInitAllImageHandlers(); break; } case DLL_THREAD_ATTACH: break; case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: wxUninitialize(); break; } return TRUE; } #endif // __WXMSW__ class Application : public wxApp {}; IMPLEMENT_APP_NO_MAIN(Application) Question How can I prevent this behavior, or how can I properly handle the wxTheApp object, if I have multiple instances of different VST-plugins (DLL-modules) which are dynamically linked against the C-Runtime and wxWidgets libraries? Best regards, Steffen

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar: public static class ObservableExtensions { private class ContinuousAverager { private double _mean; private long _count; public ContinuousAverager() { _mean = 0.0; _count = 0L; } // undecided whether this method needs to be made thread-safe or not // seems that ought to be the responsibility of the IObservable (?) public double Add(double value) { double delta = value - _mean; _mean += (delta / (double)(++_count)); return _mean; } } public static IObservable<double> ContinuousAverage(this IObservable<double> source) { var averager = new ContinuousAverager(); return source.Select(x => averager.Add(x)); } } I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that? Original Question I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this: var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick"); var bids = ticks .Where(e => e.EventArgs.Quote.HasBid) .Select(e => e.EventArgs.Quote.Bid); var bidsSubscription = bids.Subscribe( b => Console.WriteLine("Bid: {0}", b) ); var avgOfBids = bids.Average(); var avgOfBidsSubscription = avgOfBids.Subscribe( b => Console.WriteLine("Avg Bid: {0}", b) ); I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this: Bid Avg Bid 1 1 2 1.5 1 1.33 2 1.5 It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
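
    The incremental update the ContinuousAverager uses is just the running-mean recurrence mean += (x - mean) / n. A few lines of Python equivalent to it, for anyone who wants to sanity-check the update against the bid table above (the input sequence is the one from the table):

        def continuous_average(values):
            mean, n = 0.0, 0
            for x in values:
                n += 1
                mean += (x - mean) / n   # same update as Add() above
                yield mean

        print(list(continuous_average([1, 2, 1, 2])))
        # [1.0, 1.5, 1.3333333333333333, 1.5] -- matches the table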

    Read the article

  • May volatile be used in user-defined types to help write thread-safe code

    - by David Rodríguez - dribeas
    I know, it has been made quite clear in a couple of questions/answers before that volatile is related to the visible state of the C++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile time check to force the compiler into failing to accept code that could be not thread safe. In the article the keyword is used more like a required_thread_safety tag than the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but for lack of a better tool I could accept it. Basic simplification of the article: If you declare a variable volatile, only volatile member methods can be called on it, so the compiler will block calls to the other methods. Declaring an std::vector instance as volatile will block all uses of the class. By adding a wrapper in the shape of a locking pointer that performs a const_cast to remove the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article: template <typename T> class LockingPtr { public: // Constructors/destructors LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } // Pointer behavior T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { // ... use *i ... } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; // controls access to buffer_ };

    Read the article

  • One EntityManager finds an entity, the other does not.

    - by Pitelk
    Hi all, I am seeing very strange behavior in my program. I have 2 classes (LogIn and CreateGame) into each of which I have injected an EntityManager using the annotation @PersistenceContext(unitName="myUnitPU") EntityManager entityManager; At some point I remove an object called "user" from the database using entityManager.remove(user) from a method in the LogIn class. The business logic is that a user can host and join games (at the same time), so when removing the user, all the entries in the database about the games the user has created are removed, and all the entries showing which games the user has joined are removed as well. After that, I call another function which checks if the user exists, using a method in the LogIn class, entityManager.find(user), which, surprisingly enough, finds the user. After that I call a method in the CreateGame class which again tries to find the user using entityManager.find(user); the entityManager in that class fails to find the user (which is the expected result, as the user is removed and no longer in the database). So the question is: why does the entityManager in one class find the user (which is wrong) while the other doesn't find it? Has anyone ever had the same problem? PS: This "bug" occurs when the user has hosted a game which is joined by another user (let's call him userB) and userB has made a game which is joined by the current user. GAME | HOST | CLIENTS game1 | user | userB game2 | userB | user where in this case, by removing the user, game1 is deleted and the user is removed from game2, so the result is GAME | HOST | CLIENTS game2 | userB | PS2: The beans are EJB 3.0. The methods are called from a delegate class. The beans in the delegate class are instantiated using the InitialContext.lookup() method. Note that for logging in, creating, and joining games, the appropriate delegate class calls the corresponding EJB, which does the transactions. In the case of logout, the delegate calls an EJB to log out the user, but because other things must be done (as described above), this EJB calls another EJB (again using lookup()) which has methods like removegame(), removeUserFromGame(), etc. After those methods are executed the user is then logged out. Maybe it has something to do with the fact that the first entityManager is called by a delegate but the second from inside an EJB, and that's why one entityManager can see the non-existent user while the other cannot? Also, all the methods have TRANSACTIONTYPE.REQUIRED. Thank you in advance.

    Read the article

  • Gap appears between navigation bar and view after rotating & tab switching

    - by Bogatyr
    My iPhone application is showing strange behavior when rotating: a gap appears between the navigation title and content view inside a tab bar view (details on how to reproduce are below). I've created a tiny test case that exhibits the same problem: a custom root UIViewController, which creates and displays a UITabBarController programmatically, which has two tabs: 1) plain UIViewController, and 2) UINavigationController created programmatically with a single plain UIViewController content view. The complete code for the application is in the root controller's viewDidLoad (every "*VC" class is a totally vanilla UIViewController subclass with XIB for user interface from Xcode, with only the view background color changed to clearly identify each view, nothing else). Here's the viewDidLoad code and the shouldAutorotateToInterfaceOrientation code; this code is basically the entire application: - (void)viewDidLoad { [super viewDidLoad]; FirstVC *fvc = [[FirstVC alloc] initWithNibName:@"FirstVC" bundle:nil]; NavContentsVC *ncvc = [[NavContentsVC alloc] initWithNibName:@"NavContentsVC" bundle:nil]; UINavigationController *svc = [[UINavigationController alloc] initWithRootViewController:ncvc]; NSMutableArray *localControllersArray = [[NSMutableArray alloc] initWithCapacity:2]; [localControllersArray addObject:fvc]; [localControllersArray addObject:svc]; fvc.title = @"FirstVC-Title"; ncvc.title = @"NavContents-Title"; UITabBarController *tbc = [[UITabBarController alloc] init]; tbc.view.frame = CGRectMake(0, 0, 320, 460); [tbc setViewControllers:localControllersArray]; [self.view addSubview:tbc.view]; [localControllersArray release]; [ncvc release]; [svc release]; [fvc release]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } Here's how to reproduce the problem: 1) start application 2) rotate device (happens in simulator, too) to landscape (UITabBar properly rotates) 3) click on tab 2 4) rotate device to portrait -- notice gap of root view controller's background color, about 10 pixels high, beneath the Navigation title bar and above the Navigation content view. 5) click tab 1 6) click tab 2 And the gap is gone! In my real application, I see that the gap remains during all VC pushes and pops while the NavigationController tab is active. Switching away to a different tab and back to the Nav tab clears up the gap. What am I doing wrong? I'm running on SDK 3.1.3; this happens both on the simulator and on the device. Except for this particular sequence, everything seems to work fine. Help!

    Read the article

  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to an NSMutableDictionary. When I try, I get an EXC_BAD_ACCESS. Here is the code: //in .h file typedef struct { NSString *valueOne; NSString *valueTwo; } myStruct; myStruct aStruct; //in .m file - (void)viewDidLoad { [super viewDidLoad]; aStruct.valueOne = @"firstValue"; } //at some later time [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; //dies here with EXC_BAD_ACCESS This is the output in the debugger console: (gdb) p aStruct.valueOne $1 = (NSString *) 0xf41850 Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it? ------------- EDIT ------------- This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do: po aStruct.valueOne Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 0x9895cedb in objc_msgSend () The program being debugged was signaled while in a function called from GDB. GDB has restored the context to what it was before the call. To change this behavior use "set unwindonsignal off" Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned. This occurs just before the EXC_BAD_ACCESS: NSDateFormatter *formatter = [[NSDateFormatter alloc] init]; [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"]; NSString *date = [formatter stringFromDate:[NSDate date]]; [formatter release]; aStruct.valueOne = date; So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing NSString *date = [[formatter stringFromDate:[NSDate date]] retain]; That does work, but then I'm left with a memory leak.

    Read the article

  • .NET WinForms INotifyPropertyChanged updates all bindings when one is changed. Better way?

    - by Dave Welling
    In a Windows Forms application, a property change that triggers INotifyPropertyChanged will result in the form reading EVERY property from my bound object, not just the property that changed. (See example code below.) This seems absurdly wasteful since the interface requires the name of the changing property. It is causing a lot of clocking in my app because some of the property getters require calculations to be performed. I'll likely need to implement some sort of logic in my getters to discard the unnecessary reads if there is no better way to do this. Am I missing something? Is there a better way? Please don't say to use a different presentation technology -- I am doing this on Windows Mobile (although the behavior happens on the full framework as well). Here's some toy code to demonstrate the problem. Clicking the button will result in BOTH textboxes being populated even though only one property has changed. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Threading; namespace WindowsFormsApplication2 { public partial class Form1 : Form { private Presenter _presenter = new Presenter(); public Form1() { InitializeComponent(); } void but_Click(object sender, EventArgs e) { _presenter.SomeText1 = "some text 1"; } } public class Presenter : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private string _SomeText1 = string.Empty; public string SomeText1 { get { return _SomeText1; } set { _SomeText1 = value; _SomeText2 = value; // <-- To demonstrate that both properties are read OnPropertyChanged("SomeText1"); } } private string _SomeText2 = string.Empty; public string SomeText2 { get { return _SomeText2; } set { _SomeText2 = value; OnPropertyChanged("SomeText2"); } } private void OnPropertyChanged(string PropertyName) { PropertyChangedEventHandler temp = PropertyChanged; if (temp != null) { temp(this, new PropertyChangedEventArgs(PropertyName)); } } } }

    Read the article

  • Calling an Excel Add-In method from C# application or vice versa

    - by Jude
    I have an Excel VBA add-in with a public method in a bas file. This method currently creates a VB6 COM object, which exists in a running VB6 exe/vbp. The VB6 app loads in data and then the Excel add-in method can call methods on the VB6 COM object to load the data into an existing Excel xls. This is all currently working. We have since converted our VB6 app to C#. My question is: What is the best/easiest way to mimic this behavior with the C#/.NET app? I'm thinking I may not be able to pull the data from the .NET app into Excel from the add-in method since the .Net app needs to be running with data loaded (so no using a stand-alone C# class library). Maybe we can, instead, push the data from .NET to Excel by accessing the VBA add-in method from the C# code? The following is the existing VBA method accessing the VB6 app: Public Sub UpdateInDataFromApp() Dim wkbInData As Workbook Dim oFPW As Object Dim nMaxCols As Integer Dim nMaxRows As Integer Dim j As Integer Dim sName As String Dim nCol As Integer Dim nRow As Integer Dim sheetCnt As Integer Dim nDepth As Integer Dim sPath As String Dim vData As Variant Dim SheetRange As Range Set wkbInData = wkbOpen("InData.xls") sPath = g_sPathXLSfiles & "\" 'Note: the following will bring up fpw app if not already running Set oFPW = CreateObject("FPW.CProfilesData") If oFPW Is Nothing Then MsgBox "Unable to reference " & sApp Else . . . sheetCnt = wkbInData.Sheets.Count 'get number of sheets in indata workbook For j = 2 To sheetCnt 'set counter to loop over all sheets except the first one which is not input data fields With wkbInData.Worksheets(j) Set SheetRange = .UsedRange End With With SheetRange nMaxRows = .Rows.Count 'get range of sheet(j) nMaxCols = .Columns.Count 'get range of sheet(j) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearContents 'Clears data from data range (51 Columns) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearComments End With With oFPW 'vb6 object For nRow = 2 To nMaxRows ' loop through rows sName = SheetRange.Cells(nRow, 1) 'Field name vData = .vntGetSymbol(sName, 0) 'Check if vb6 app identifies the name nDepth = .GetInputTableDepth(sName) 'Get number of data items for this field name from vb6 app nMaxCols = nDepth + 2 'nDepth=0, is single data item For nCol = 2 To nMaxCols 'loop over deep screen fields nDepth = nCol - 2 'current depth vData = .vntGetSymbol(sName, nDepth) 'Get Data from vb6 app If LenB(vData) > 0 And IsNumeric(vData) Then 'Check if data returned SheetRange.Cells(nRow, nCol) = vData 'Poke the data in Else SheetRange.Cells(nRow, nCol) = vData 'Poke a zero in End If Next 'nCol Next 'nRow End With Set SheetRange = Nothing Next 'j End If Set wkbInData = Nothing Set oFPW = Nothing Exit Sub . . . End Sub Any help would be appreciated.

    Read the article

  • Issues in Ada Concurrency

    - by Arkapravo
    Hi, I need some help and also some insight. This is a program in Ada-2005 which has 3 tasks. The output is z. If the 3 tasks do not happen in the order of their placement in the program then the output can vary from z = 2 to z = 1 to z = 0 (that is easy to see in the program; mutual exclusion is attempted to make sure the output is z = 2). WITH Ada.Text_IO; USE Ada.Text_IO; WITH Ada.Integer_Text_IO; USE Ada.Integer_Text_IO; WITH System; USE System; procedure xyz is x : Integer := 0; y : Integer := 0; z : Integer := 0; task task1 is pragma Priority(System.Default_Priority + 3); end task1; task task2 is pragma Priority(System.Default_Priority + 2); end task2; task task3 is pragma Priority(System.Default_Priority + 1); end task3; task body task1 is begin x := x + 1; end task1; task body task2 is begin y := x + y; end task2; task body task3 is begin z := x + y + z; end task3; begin Put(" z = "); Put(z); end xyz; I first tried this program (a) without pragmas, the result: in 100 tries, occurrence of 2: 86, occurrence of 1: 10, occurrence of 0: 4. Then (b) with pragmas, the result: in 100 tries, occurrence of 2: 84, occurrence of 1: 14, occurrence of 0: 2. This is unexpected, as the two results are nearly identical: pragmas or no pragmas, the output behaves the same. Those who are Ada concurrency gurus, please shed some light on this topic. Alternative solutions with semaphores (if possible) are also invited. Further, in my opinion, for a critical process (which is what we use Ada for), with pragmas the result should be z = 2, 100% of the time; otherwise this program should be termed as only 85% critical! (That should not be so with Ada.)

    Read the article

  • .htaccess mod_rewrite issue

    - by Orhan Toy
    In almost any project I work on, some issues with .htaccess occur. I usually just find the easiest solution and leave it, because I don't have any knowledge or understanding of Apache, servers, etc. But this time I thought I would ask you guys. These are the files and folders in my (simplified) setup: /modrewrite-test .htaccess /config /inc /lib /public_html .htaccess /cms /navigation index.php edit.php /pages index.php edit.php login.php page.php The "config", "inc" and "lib" folders are meant to be "hidden" from the root of the website. I try to accomplish this by making a .htaccess-file in the root that redirects the user to "public_html". The .htaccess-file contains this: RewriteEngine On RewriteRule (.*) public_html/$1 This works perfectly. If I type "http://localhost/modrewrite-test/login.php" in my browser, I end up in public_html/login.php, which is my intention. So this works fine. The .htaccess-file in "public_html" contains this: RewriteEngine On # Root RewriteRule ^$ page.php [L] # Login RewriteRule ^(admin)|(login)\/?$ login.php [L] # Page (if not a file/directory) RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ page.php?url=$1 [L] The first rewrite just redirects me to public_html/page.php if I try to reach "http://localhost/modrewrite-test/". The next rewrite is just for the convenience of users trying to log in - so if they try to reach "http://localhost/modrewrite-test/admin" or "http://localhost/modrewrite-test/login" they will end up at the login.php-file. The third and last rewrite handles the rest of the requests. If I try to reach "http://localhost/modrewrite-test/bla/bla/bla" it will just redirect me to public_html/page.php (with the 'url' GET-variable set) instead of looking for a folder called "bla", containing a folder called "bla", and so on. All of these things work perfectly, but a minor issue occurs when I, for instance, try to reach "http://localhost/modrewrite-test/cms/navigation" without a slash at the end of the URL. When I try to reach that page the browser is somehow redirected to "http://localhost/modrewrite-test/public_html/cms/navigation/". The correct page is shown, but why does it get redirected and add the "public_html" part to the URL? The desired behavior is that the URL stays intact and that the page public_html/cms/navigation/index.php is shown. The files and folders in the (simplified) setup can be found at http://highbars.com/modrewrite-test.zip

    Read the article

  • Adding a valuetype to IDL, compile and it fails with "No factory found"

    - by jim
    I can't figure out why the client keeps complaining about not finding the factory method. I've tried the IDL with and without the "factory" keyword and that didn't change the behavior. The SDMGeoVT IDL matches other objects used (which run successfully). The SDMGeoVT classes generated match other generated classes in regard to inheritance and methods. The IDL is as follows; the idlj compiler runs without error. I implement the function on the server and I see the server code run and serialize the object over the wire (the server code runs fine). The client bombs with the following stack trace (the first couple of lines are from the jacORB library). I've created a small app just to compile and test the code (ArrayClient & ArrayServer). The base app (from the jacORB demo) works fine. I've tried using the base class OFBaseVT and a single object (SDMGeoVT vs the list return) and have the same issue. 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) org.omg.CORBA.MARSHAL: No factory found for: IDL:pl/SDMGeoVT:1.0 at org.jacorb.orb.CDRInputStream.read_untyped_value(CDRInputStream.java:2906) at org.jacorb.orb.CDRInputStream.read_typed_value(CDRInputStream.java:3082) at org.jacorb.orb.CDRInputStream.read_value(CDRInputStream.java:2679) at com.helloworld.pl.SDMGeoVTHelper.read(SDMGeoVTHelper.java:106) at com.helloworld.pl.SDMGeoVTListHelper.read(SDMGeoVTListHelper.java:51) at com.helloworld.pl._PLManagerIFStub.getSDMGeos(_PLManagerIFStub.java:28) at com.helloworld.ArrayClient.<init>(ArrayClient.java:40) at com.helloworld.ArrayClient.main(ArrayClient.java:125) valuetype SDMGeoVT : common::OFBaseVT{ private string sdmName; private string zip; private string atz; private long long primaryDeptId; private string deptName; factory instance(in string name,in string ZIP,in string ATZ,in long long primaryDeptId,in string deptName,in string name); string getZIP(); void setZIP(in string ZIP); string getATZ(); void setATZ(in string ATZ); long long getPrimaryDeptId(); void setPrimaryDeptId(in long long primaryDeptId); string getDeptName(); void setDeptName(in string deptName); }; typedef sequence<SDMGeoVT> SDMGeoVTList; interface PLManagerIF : PublicManagerIF { pl::SDMGeoVTList getSDMGeos(in framework::ITransactionHandle tHandle, in long long productionLocationId); };

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below which is indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements, it is much slower. I was expecting it to be constant time as the key is hashed. e.g. to erase elements from the container: for 10,000 elements, it takes around 2.53927016 seconds; for 15,000 elements, it takes around 7.137100068 seconds; for 20,000 elements, it takes around 21.391720757 seconds. Is it something I am missing, or is this expected behavior? class Session { public: Session() { //increment unique id static unsigned long counter = 0; boost::mutex::scoped_lock guard(mx); counter++; m_nId = counter; } unsigned long GetId() { return m_nId; } long GetTransactionHandle(){ return m_nTransactionHandle; } .... private: unsigned long m_nId; long m_nTransactionHandle; boost::mutex mx; .... }; typedef multi_index_container< Session*, indexed_by< hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >, hashed_non_unique< mem_fun<Session,long,&Session::GetTransactionHandle> > > //end indexed_by > SessionContainer; typedef SessionContainer::nth_index<0>::type SessionById; int main() { ... SessionContainer container; SessionById *pSessionIdView = &get<0>(container); unsigned counter = atoi(argv[1]); vector<Session*> vSes; vSes.reserve(counter); //insert for(unsigned i = 0; i < counter; i++) { Session *pSes = new Session(); container.insert(pSes); vSes.push_back(pSes); } timespec ts; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); //erase for(unsigned i = 0; i < counter; i++) { pSessionIdView->erase(vSes[i]->GetId()); delete vSes[i]; } clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n"; return (EXIT_SUCCESS); }

    Read the article

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following: <class name="ExpertiseArea"> <id name="id" type="string"> <generator class="assigned" /> </id> <version name="version" column="VERSION" unsaved-value="null" /> <property name="name" type="string" unique="true" not-null="true" length="100" /> ... </class> And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code: public T makePersistent(T entity){ transaction = getSession().beginTransaction(); transaction.begin(); try{ getSession().saveOrUpdate(entity); transaction.commit(); }catch(HibernateException e){ logger.debug(e.getMessage()); transaction.rollback(); } return entity; } Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test: public void testNameLengthMustBe100orLess(){ ExpertiseArea ea = new ExpertiseArea( "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890"); assertTrue("Name should be 100 characters long", ea.getName().length() == 100); ead.makePersistent(ea); List<ExpertiseArea> result = ead.findAll(); assertEquals("Size must be 1", result.size(),1); ea.setName(ea.getName()+"1234567890"); ead.makePersistent(ea); ExpertiseArea retrieved = ead.findById(ea.getId(), false); assertTrue("Both objects should be equal", retrieved.equals(ea)); assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100)); } The object is persisted ok. Then I set a name longer than 100 characters and try to save the changes, which fails: 14:12:14,608 INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation 14:12:14,611 WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001 14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation 14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] //Stack trace 14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] 14:12:14,616 DEBUG JDBCTransaction:186 - rollback 14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection That's expected behavior. However when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?

    Read the article

  • DatagridView loses current edit on Background update

    - by yoni.s
    Here's my problem: I have a DataGridView bound to a BindingList of custom objects. A background thread is constantly updating a value on these objects. The updates are showing correctly, and everything is fine except for one thing: if you try to edit a different field while the background-updated field is being updated, it loses the entered value. Here is a code sample that demonstrates this behavior (for a new form, drop a new DataGridView onto it): using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Threading; namespace WindowsFormsApplication2 { public partial class Form1 : Form { private BindingList<foo> flist; private Thread thrd; private BindingSource b; public Form1() { InitializeComponent(); flist = new BindingList<foo> { new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1} }; b = new BindingSource(); b.DataSource = flist; dataGridView1.DataSource = b; thrd = new Thread(new ThreadStart(updPRoc)); thrd.Start(); } private void upd() { flist.ToList().ForEach(f=>f.c++); } private void updPRoc() { while (true) { this.BeginInvoke(new MethodInvoker(upd)); Thread.Sleep(1000); } } } public class foo:INotifyPropertyChanged { private int _c; public int a { get; set; } public int b { get; set; } public int c { get {return _c;} set { _c = value; if (PropertyChanged!= null) PropertyChanged(this,new PropertyChangedEventArgs("c")); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; #endregion } } So, if you edit column a or b, you will see that the column c update causes you to lose your entry. Any thoughts appreciated.

    Read the article

  • Creating a column of RadioButtons in Adobe Flex

    - by adnan
    I am seeing unpredictable behavior when creating radio buttons in an AdvancedDataGrid column using an itemRenderer. A similar problem has been reported at http://stackoverflow.com/questions/112036/creating-a-column-of-radiobuttons-in-adobe-flex. I tried to use the same procedure, i.e. binding each radio button's selectedValue and value attributes to the corresponding property of the associated bean, but I am still facing the problem: the buttons change values! The selected button becomes deselected, and unselected ones become selected. Here is the code of my AdvancedDataGrid: <mx:AdvancedDataGrid id="adDataGrid_rptdata" width="100%" height="100%" dragEnabled="false" sortableColumns="false" treeColumn="{action}" liveScrolling="false" displayItemsExpanded="true" > <mx:dataProvider> <mx:HierarchicalData source="{this.competenceCollection}" childrenField="competenceCriteria"/> </mx:dataProvider> <mx:columns> <mx:AdvancedDataGridColumn headerText="" id="action" dataField="criteriaName" /> <mx:AdvancedDataGridColumn headerText="Periode 1" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period1" value="{data}" selected="{data.period1}" group="{data.radioBtnGrpArray[0]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> <mx:AdvancedDataGridColumn headerText="Periode 2" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period2" value="{data}" selected="{data.period2}" group="{data.radioBtnGrpArray[1]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> <mx:AdvancedDataGridColumn headerText="Periode 3" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period3" value="{data}" selected="{data.period3}" group="{data.radioBtnGrpArray[2]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> </mx:columns> </mx:AdvancedDataGrid> Any workaround is highly appreciated!

    Read the article

  • SBT run differences between scala and java?

    - by Eric Cartner
    I'm trying to follow the log4j2 configuration tutorials in a SBT 0.12.1 project. Here is my build.sbt: name := "Logging Test" version := "0.0" scalaVersion := "2.9.2" libraryDependencies ++= Seq( "org.apache.logging.log4j" % "log4j-api" % "2.0-beta3", "org.apache.logging.log4j" % "log4j-core" % "2.0-beta3" ) When I run the main() defined in src/main/scala/logtest/Foo.scala: package logtest import org.apache.logging.log4j.{Logger, LogManager} object Foo { private val logger = LogManager.getLogger(getClass()) def main(args: Array[String]) { logger.trace("Entering application.") val bar = new Bar() if (!bar.doIt()) logger.error("Didn't do it.") logger.trace("Exiting application.") } } I get the output I was expecting given that src/main/resources/log4j2.xml sets the root logging level to trace: [info] Running logtest.Foo 08:39:55.627 [run-main] TRACE logtest.Foo$ - Entering application. 08:39:55.630 [run-main] TRACE logtest.Bar - entry 08:39:55.630 [run-main] ERROR logtest.Bar - Did it again! 08:39:55.630 [run-main] TRACE logtest.Bar - exit with (false) 08:39:55.630 [run-main] ERROR logtest.Foo$ - Didn't do it. 08:39:55.630 [run-main] TRACE logtest.Foo$ - Exiting application. However, when I run the main() defined in src/main/java/logtest/LoggerTest.java: package logtest; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.LogManager; public class LoggerTest { private static Logger logger = LogManager.getLogger(LoggerTest.class.getName()); public static void main(String[] args) { logger.trace("Entering application."); Bar bar = new Bar(); if (!bar.doIt()) logger.error("Didn't do it."); logger.trace("Exiting application."); } } I get the output: [info] Running logtest.LoggerTest ERROR StatusLogger Unable to locate a logging implementation, using SimpleLogger ERROR Bar Did it again! ERROR LoggerTest Didn't do it. From what I can tell, ERROR StatusLogger Unable to ... is usually a sign that log4j-core is not on my classpath. The lack of TRACE messages seems to indicate that my log4j2.xml settings aren't on the classpath either. Why should there be any difference in classpath if I'm running Foo.main versus LoggerTest.main? Or is there something else causing this behavior? Update I used SBT Assembly to build a fat jar of this project and specified logtest.LoggerTest to be the main class. Running it from the command line produced correct results: Eric-Cartners-iMac:target ecartner$ java -jar "Logging Test-assembly-0.0.jar" 10:52:23.220 [main] TRACE logtest.LoggerTest - Entering application. 10:52:23.221 [main] TRACE logtest.Bar - entry 10:52:23.221 [main] ERROR logtest.Bar - Did it again! 10:52:23.221 [main] TRACE logtest.Bar - exit with (false) 10:52:23.221 [main] ERROR logtest.LoggerTest - Didn't do it. 10:52:23.221 [main] TRACE logtest.LoggerTest - Exiting application.

    Read the article

  • Calling/selecting variables (float valued) with user input in Python

    - by Jonathan Straus
    I've been working on a computational physics project (plotting related rates of chemical reactants with respect to each other to show oscillatory behavior) with a fair amount of success. However, one of my simulations involves more than two active oscillating agents (five, in fact), which would obviously be unsuitable for any single visual plot... Hence my scheme was to have the user select which two reactants they wanted plotted on the x-axis and y-axis respectively. I tried (foolishly) to convert string input values into the respective variable names, but I guess I need a radically different approach if any exists? If it helps clarify any, here is part of my code: def coupledBrusselator(A, B, t_trial,display_x,display_y): t = 0 t_step = .01 X = 0 Y = 0 E = 0 U = 0 V = 0 dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) dE = -(E)*(U) - (X) dU = (U**2)*(V) -(E+1)*(U) - (B)*(X) dV = (E)*(U) - (U**2)*(V) array_t = [0] array_X = [0] array_Y = [0] array_E = [0] array_U = [0] array_V = [0] while t <= t_trial: X_1 = X + (dX)*(t_step/2) Y_1 = Y + (dY)*(t_step/2) E_1 = E + (dE)*(t_step/2) U_1 = U + (dU)*(t_step/2) V_1 = V + (dV)*(t_step/2) dX_1 = (A) - (B+1)*(X_1) + (X_1**2)*(Y_1) dY_1 = (B)*(X_1) - (X_1**2)*(Y_1) dE_1 = -(E_1)*(U_1) - (X_1) dU_1 = (U_1**2)*(V_1) -(E_1+1)*(U_1) - (B)*(X_1) dV_1 = (E_1)*(U_1) - (U_1**2)*(V_1) X_2 = X + (dX_1)*(t_step/2) Y_2 = Y + (dY_1)*(t_step/2) E_2 = E + (dE_1)*(t_step/2) U_2 = U + (dU_1)*(t_step/2) V_2 = V + (dV_1)*(t_step/2) dX_2 = (A) - (B+1)*(X_2) + (X_2**2)*(Y_2) dY_2 = (B)*(X_2) - (X_2**2)*(Y_2) dE_2 = -(E_2)*(U_2) - (X_2) dU_2 = (U_2**2)*(V_2) -(E_2+1)*(U_2) - (B)*(X_2) dV_2 = (E_2)*(U_2) - (U_2**2)*(V_2) X_3 = X + (dX_2)*(t_step) Y_3 = Y + (dY_2)*(t_step) E_3 = E + (dE_2)*(t_step) U_3 = U + (dU_2)*(t_step) V_3 = V + (dV_2)*(t_step) dX_3 = (A) - (B+1)*(X_3) + (X_3**2)*(Y_3) dY_3 = (B)*(X_3) - (X_3**2)*(Y_3) dE_3 = -(E_3)*(U_3) - (X_3) dU_3 = (U_3**2)*(V_3) -(E_3+1)*(U_3) - (B)*(X_3) dV_3 = (E_3)*(U_3) - (U_3**2)*(V_3) X = X + ((dX + 2*dX_1 + 2*dX_2 + dX_3)/6) * t_step Y = Y + ((dY + 2*dY_1 + 2*dY_2 + dY_3)/6) * t_step E = E + ((dE + 2*dE_1 + 2*dE_2 + dE_3)/6) * t_step U = U + ((dU + 2*dU_1 + 2*dU_2 + dU_3)/6) * t_step V = V + ((dV + 2*dV_1 + 2*dV_2 + dV_3)/6) * t_step dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) t_step = .01 / (1 + dX**2 + dY**2) ** .5 t = t + t_step array_X.append(X) array_Y.append(Y) array_E.append(E) array_U.append(U) array_V.append(V) array_t.append(t) where previously display_x = raw_input("Choose catalyst you wish to analyze in the phase/field diagrams (X, Y, E, U, or V) ") display_y = raw_input("Choose one other catalyst from list you wish to include in phase/field diagrams ") coupledBrusselator(A, B, t_trial, display_x, display_y) Thanks!
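
    One common answer, sketched below: don't map strings to variable names at all; keep the five histories in a dict keyed by the reactant letter, and let the user's input index the dict directly. (The dynamics in this sketch are a placeholder, not the Brusselator, and the prompts are illustrative.)

        import matplotlib.pyplot as plt

        series = {name: [0.0] for name in 'XYEUV'}   # one history per reactant

        for step in range(1000):                     # stand-in for the RK4 loop
            for i, name in enumerate('XYEUV'):
                series[name].append(series[name][-1] + 0.001 * (i - 2))

        display_x = raw_input("x-axis reactant (X/Y/E/U/V): ").strip().upper()
        display_y = raw_input("y-axis reactant: ").strip().upper()
        plt.plot(series[display_x], series[display_y])
        plt.xlabel(display_x)
        plt.ylabel(display_y)
        plt.show()

    Inside coupledBrusselator the same idea would mean replacing array_X ... array_V with series['X'] ... series['V'], after which display_x and display_y need no translation step at all.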

    Read the article
