Search Results

Search found 6542 results on 262 pages for 'undocumented behavior'.

Page 240 of 262

  • Subclassing a window from a thread in c#

    - by user258651
    I'm creating a thread that looks for a window. When it finds the window, it overrides its windowproc, and handles WM_COMMAND and WM_CLOSE. Here's the code that looks for the window and subclasses it: public void DetectFileDialogProc() { Window fileDialog = null; // try to find the dialog twice, with a delay of 500 ms each time for (int attempts = 0; fileDialog == null && attempts < 2; attempts++) { // FindDialogs enumerates all windows of class #32770 via an EnumWindowProc foreach (Window wnd in FindDialogs(500)) { IntPtr parent = NativeMethods.User32.GetParent(wnd.Handle); if (parent != IntPtr.Zero) { // we're looking for a dialog whose parent is a dialog as well Window parentWindow = new Window(parent); if (parentWindow.ClassName == NativeMethods.SystemWindowClasses.Dialog) { fileDialog = wnd; break; } } } } // if we found the dialog if (fileDialog != null) { OldWinProc = NativeMethods.User32.GetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC); NativeMethods.User32.SetWindowLong(fileDialog.Handle, NativeMethods.GWL_WNDPROC, Marshal.GetFunctionPointerForDelegate(new WindowProc(WndProc)).ToInt32()); } } And the windowproc: public IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam) { lock (this) { if (!handled) { if (msg == NativeMethods.WM_COMMAND || msg == NativeMethods.WM_CLOSE) { // adding to a list. i never access the window via the hwnd from this list, i just treat it as a number _addDescriptor(hWnd); handled = true; } } } return NativeMethods.User32.CallWindowProc(OldWinProc, hWnd, msg, wParam, lParam); } This all works well under normal conditions. But I am seeing two instances of bad behavior in order of badness: If I do not close the dialog within a minute or so, the app crashes. Is this because the thread is getting garbage collected? This would kind of make sense, as far as GC can tell the thread is done? If this is the case, (and I don't know that it is), how can I make the thread stay around as long as the dialog is around? If I immediately close the dialog with the 'X' button (WM_CLOSE) the app crashes. I believe its crashing in the windowproc, but I can't get a breakpoint in there. I'm getting an AccessViolationException, The exception says "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Its a race condition, but of what I don't know. FYI, I had been reseting the old windowproc once I processed the commands, but that was crashing even more often! Any ideas on how I can solve these issues?
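
    A hedged note for anyone hitting the same symptoms: both crashes are consistent with the WindowProc delegate being garbage collected. Marshal.GetFunctionPointerForDelegate does not keep the delegate alive, so once the temporary delegate created inline in the SetWindowLong call is collected, the next message dispatched to the window calls into freed memory. A minimal sketch of the usual fix, keeping a strong reference for the lifetime of the subclass (field and method names here are hypothetical):

        // Keep the delegate (and the old proc) in fields so the GC cannot collect
        // the marshalled thunk while the native window still points at it.
        private WindowProc _wndProcDelegate;

        private void Subclass(IntPtr hwnd)
        {
            _wndProcDelegate = new WindowProc(WndProc);   // stored, not a temporary
            OldWinProc = NativeMethods.User32.GetWindowLong(hwnd, NativeMethods.GWL_WNDPROC);
            NativeMethods.User32.SetWindowLong(
                hwnd,
                NativeMethods.GWL_WNDPROC,
                Marshal.GetFunctionPointerForDelegate(_wndProcDelegate).ToInt32());
        }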

    Read the article

  • Odd ActiveRecord model dynamic initialization bug in production

    - by qfinder
    I've got an ActiveRecord (2.3.5) model that occasionally exhibits incorrect behavior that appears to be related to a problem in its dynamic initialization. Here's the code: class Widget < ActiveRecord::Base extend ActiveSupport::Memoizable serialize :settings VALID_SETTINGS = %w(show_on_sale show_upcoming show_current show_past) VALID_SETTINGS.each do |setting| class_eval %{ def #{setting}=(val); self.settings[:#{setting}] = (val == "1"); end def #{setting}; self.settings[:#{setting}]; end } end def initialize_settings self.settings ||= { :show_on_sale => true, :show_upcoming => true } end after_initialize :initialize_settings # All the other stuff the model does end The idea was to use a single record field (settings) to persist a bunch of configuration data for this object, but allow all the settings to seamlessly work with form helpers and the like. (Why this approach makes sense here is a little out of scope, but let's assume that it does.) Net-net, Widget should end up with instance methods (e.g. #show_on_sale= and #show_on_sale) for all the entries in the VALID_SETTINGS array. Any default values should be specified in initialize_settings. And indeed this works, mostly. In dev and staging, no problems at all. But in production, the app sometimes ends up in a state where a) any writes to the dynamically generated setters fail and b) none of the default values appear to be set - although my leading theory is that the dynamically generated reader methods are just broken. The code, db, and environment are otherwise identical between the three. A typical error message / backtrace on the fail looks like:

        IndexError: index 141145 out of string
        (eval):2:in `[]='
        (eval):2:in `show_on_sale='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `send'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `each'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2634:in `update_attributes!'
        ...(then controller and all the way down)

    Ideas or theories as to what might be going on? My leading theory is that something is going wrong in instance initialization wherein the settings attribute is ending up as a string rather than a hash. This explains both the above setter failure (:show_on_sale is being used to index into the string) and the fact that getters don't work (an out-of-bounds [] call on a string just returns nil). But then how and why might settings occasionally end up as a string rather than a hash?
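
    If the string-instead-of-hash theory is right, one defensive (not root-cause) workaround is to coerce the serialized column back to a Hash in the after_initialize hook. A rough sketch against the model above, assuming ActiveSupport's reverse_merge! is available (it is in Rails 2.3):

        def initialize_settings
          # the serialized column sometimes comes back as a String in production;
          # force it to a Hash before merging in the defaults
          self.settings = {} unless self.settings.is_a?(Hash)
          self.settings.reverse_merge!(:show_on_sale => true, :show_upcoming => true)
        end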

    Read the article

  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class as long as the class is alive. For illustration purposes, it might look like this: // .h @interface MyClock : NSObject { NSTimer* _myTimer; } - (void)timerTick; @end // .m @implementation MyClock - (id)init { self = [super init]; if (self) { _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(timerTick) userInfo:nil repeats:YES] retain]; } return self; } - (void)dealloc { [_myTimer invalidate]; [_myTimer release]; [super dealloc]; } - (void)timerTick { // Do something fantastic. } @end That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer; I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it is keeping the class from being dealloc'd by normal memory management methods, because the timer has retained it. Manually adjusting the retain count is out of the question. This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count system for keeping the timer running but... seriously? This is way too much work for such a simple concept. Any solution I can think of feels hacky. I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.
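
    One commonly suggested workaround (sketched below, manual retain/release assumed as in the code above) is to point the timer at a small trampoline object that does not retain the real target. The timer retains the trampoline, MyClock keeps its normal retain count, and -dealloc can still invalidate the timer. The trampoline class and names are hypothetical:

        // Trampoline that forwards ticks without retaining the real target.
        @interface TimerTrampoline : NSObject {
        @public
            id _target;   // deliberately NOT retained; cleared when MyClock dies
        }
        - (void)tick:(NSTimer *)timer;
        @end

        @implementation TimerTrampoline
        - (void)tick:(NSTimer *)timer {
            [_target timerTick];
        }
        @end

        // In MyClock's -init, target the trampoline instead of self:
        //   _trampoline = [[TimerTrampoline alloc] init];
        //   _trampoline->_target = self;
        //   _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f target:_trampoline
        //               selector:@selector(tick:) userInfo:nil repeats:YES] retain];
        // In -dealloc: invalidate the timer, nil out _trampoline->_target,
        // then release both the timer and the trampoline.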

    Read the article

  • How is IObservable<double>.Average supposed to work?

    - by Dan Tao
    Update Looks like Jon Skeet was right (big surprise!) and the issue was with my assumption about the Average extension providing a continuous average (it doesn't). For the behavior I'm after, I wrote a simple ContinuousAverage extension method, the implementation of which I am including here for the benefit of others who may want something similar: public static class ObservableExtensions { private class ContinuousAverager { private double _mean; private long _count; public ContinuousAverager() { _mean = 0.0; _count = 0L; } // undecided whether this method needs to be made thread-safe or not // seems that ought to be the responsibility of the IObservable (?) public double Add(double value) { double delta = value - _mean; _mean += (delta / (double)(++_count)); return _mean; } } public static IObservable<double> ContinousAverage(this IObservable<double> source) { var averager = new ContinuousAverager(); return source.Select(x => averager.Add(x)); } } I'm thinking of going ahead and doing something like the above for the other obvious candidates as well -- so, ContinuousCount, ContinuousSum, ContinuousMin, ContinuousMax ... perhaps ContinuousVariance and ContinuousStandardDeviation as well? Any thoughts on that? Original Question I use Rx Extensions a little bit here and there, and feel I've got the basic ideas down. Now here's something odd: I was under the impression that if I wrote this: var ticks = Observable.FromEvent<QuoteEventArgs>(MarketDataProvider, "MarketTick"); var bids = ticks .Where(e => e.EventArgs.Quote.HasBid) .Select(e => e.EventArgs.Quote.Bid); var bidsSubscription = bids.Subscribe( b => Console.WriteLine("Bid: {0}", b) ); var avgOfBids = bids.Average(); var avgOfBidsSubscription = avgOfBids.Subscribe( b => Console.WriteLine("Avg Bid: {0}", b) ); I would get two IObservable<double> objects (bids and avgOfBids); one would basically be a stream of all the market bids from my MarketDataProvider, the other would be a stream of the average of these bids. So something like this: Bid Avg Bid 1 1 2 1.5 1 1.33 2 1.5 It seems that my avgOfBids object isn't doing anything. What am I missing? I think I've probably misunderstood what Average is actually supposed to do. (This also seems to be the case for all of the aggregate-like extension methods on IObservable<T> -- e.g., Max, Count, etc.)
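
    For completeness, a short sketch of wiring the posted extension into the original example (note the method is spelled ContinousAverage in the extension class as written):

        // Running mean emitted once per bid, instead of the single value that
        // Average() produces when the sequence completes.
        var avgOfBids = bids.ContinousAverage();
        var avgOfBidsSubscription = avgOfBids.Subscribe(
            a => Console.WriteLine("Avg Bid: {0}", a));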

    Read the article

  • Symfony2 - PdfBundle not working

    - by ElPiter
    Using Symfony2 and PdfBundle to generate PDF files dynamically, I never actually get the files generated. Following the documentation, I have set up the bundle: autoload.php: 'Ps' => __DIR__.'/../vendor/bundles', 'PHPPdf' => __DIR__.'/../vendor/PHPPdf/lib', 'Imagine' => array(__DIR__.'/../vendor/PHPPdf/lib', __DIR__.'/../vendor/PHPPdf/lib/vendor/Imagine/lib'), 'Zend' => __DIR__.'/../vendor/PHPPdf/lib/vendor/Zend/library', 'ZendPdf' => __DIR__.'/../vendor/PHPPdf/lib/vendor/ZendPdf/library', AppKernel.php: ... new Ps\PdfBundle\PsPdfBundle(), ... I assume the setup is configured correctly, as I am not getting any "library not found" errors or anything along those lines... So, after all that, I am doing this in the controller: ... use Ps\PdfBundle\Annotation\Pdf; ... /** * @Pdf() * @Route ("/pdf", name="_pdf") * @Template() */ public function generateInvoicePDFAction($name = 'Pedro') { return $this->render('AcmeStoreBundle:Shop:generateInvoice.pdf.twig', array( 'name' => $name, )); } And this twig file: <pdf> <dynamic-page> Hello {{ name }}! </dynamic-page> </pdf> Somehow, all I get in my page is the normal HTML, generated as if it were a normal Response rendering. The Pdf() annotation is supposed to provide the "special" behavior of creating the PDF file instead of rendering normal HTML. So, with the above code, when I request the route http://www.mysite.com/*...*/pdf, all I get is the following HTML rendered: <pdf> <dynamic-page> Hello Pedro! </dynamic-page> </pdf> (so a blank HTML page with just the words Hello Pedro! on it). Any clue? Am I doing anything wrong? Is it mandatory to have the alternative *.html.twig apart from the *.pdf.twig version? I don't think so... :(

    Read the article

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo: GET http://api.example.com/v1/item/foo New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item: POST http://api.example.com/v1/item/ hello=world&hokey=pokey With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here. Case 1: the item doesn't exist; it's created. This case is easy. 201 Created Location: http://api.example.com/v1/item/bar Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? Location: http://api.example.com/v1/item/foo I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case: The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary. The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers. It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is. At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!
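
    For reference, a sketch of what the Case 2 response would look like with the 303 option (URL taken from the example above):

        HTTP/1.1 303 See Other
        Location: http://api.example.com/v1/item/foo
        Content-Length: 0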

    Read the article

  • iPhone: value of selectedIndex for tab should be consistent, but isn't

    - by Janine
    This should be so simple... but something screwy is happening. My setup looks like this: MainViewController Tab Bar Controller 4 tabs, each of which loads WebViewController My AppDelegate contains an ivar, tabBarController, which is connected to the tab bar controller (this was all set up in Interface Builder). The leftmost tab is marked "selected" in IB. Within the viewWillAppear method in WebViewController, I need to know which tab was just selected so I can load the correct URL. I do this by switching on appDelegate.tabBarController.selectedIndex. When the app first runs and the leftmost tab is selected, selectedIndex is a large garbage value. After that, I get values from 0 to 3, which is as it should be, but they are in random order. Not only that, but each tab I touch reports a different value each time. This app is extremely simple right now and I can't imagine what I could have done to make things go this wrong. Has anyone seen (and hopefully solved) this behavior? Update: we have a request for code. There's not much to see. The tab bar controller gets loaded in applicationDidFinishLaunching: [self.mainViewController view]; //force nib to load [self.window addSubview:self.mainViewController.tabBarController.view] There is currently no code whatsoever in MainViewController.m other than the synthesize and release for tabBarController. From WebVewController.m: - (void)viewWillAppear:(BOOL)_animation { [super viewWillAppear:_animation]; NSURL *url; switch([S_UIDelegate mainViewController].tabBarController.selectedIndex) { case 0: url = [NSURL URLWithString:@"http://www.cnn.com"]; break; case 1: url = [NSURL URLWithString:@"http://www.facebook.com"]; break; case 2: url = [NSURL URLWithString:@"http://www.twitter.com"]; break; case 3: url = [NSURL URLWithString:@"http://www.google.com"]; break; default: url = [NSURL URLWithString:@"http://www.msnbc.com"]; } NSURLRequest *request = [NSURLRequest requestWithURL:url]; [webView loadRequest:request]; } This is where I'm seeing the random values.
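
    One alternative worth sketching (not a diagnosis of the garbage value): instead of having each WebViewController read selectedIndex in viewWillAppear, let the tab bar controller push the URL in via its delegate. The startURL property and urlForTabIndex: helper below are hypothetical:

        // Somewhere that owns the tab bar controller (e.g. the app delegate),
        // after setting tabBarController.delegate = self;
        - (void)tabBarController:(UITabBarController *)tabBarController
         didSelectViewController:(UIViewController *)viewController {
            NSUInteger index = [tabBarController.viewControllers indexOfObject:viewController];
            WebViewController *web = (WebViewController *)viewController;
            web.startURL = [self urlForTabIndex:index];   // hypothetical helper
        }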

    Read the article

  • One EntityManager finds an entity, the other does not

    - by Pitelk
    Hi all, I have a very strange behavior in my program. I have 2 classes (LogIn and CreateGame) into each of which I have injected an EntityManager using the annotation @PersistenceContext(unitName="myUnitPU") EntityManager entitymanager; At some point I remove an object called "user" from the database using entitymanager.remove(user) from a method in the LogIn class. The business logic is that a user can host and join games (at the same time), so when removing the user, all the entries in the database about the games the user has created are removed, and all the entries showing which games the user has joined are removed as well. After that, I call another function which checks if the user exists, using a method in the LogIn class: entitymanager.find(user), which, surprisingly enough, finds the user. After that I call a method in the CreateGame class which tries to find the user, again using entitymanager.find(user); the entitymanager in that class fails to find the user (which is the expected result, as the user is removed and no longer in the database). So the question is: why does the entitymanager in one class find the user (which is wrong) while the other doesn't? Has anyone ever had the same problem? PS: This "bug" occurs when the user has hosted a game which is joined by another user (let's call him userB) and userB has made a game which is joined by the current user:

        GAME  | HOST  | CLIENTS
        game1 | user  | userB
        game2 | userB | user

    In this case, by removing the user, game1 is deleted and the user is removed from game2, so the result is:

        GAME  | HOST  | CLIENTS
        game2 | userB |

    PS2: The beans are EJB 3.0. The methods are called from a delegate class. The beans in the delegate class are instantiated using the InitialContext.lookup() method. Note that for logging in, creating, and joining games the appropriate delegate class calls the corresponding EJB, which does the transactions. In the case of logout, the delegate calls an EJB to log out the user, but because other things must be done (as said above) this EJB calls other EJBs (again using lookup()) which have methods like removegame(), removeUserFromGame(), etc. After those methods are executed the user is then logged out. Maybe it has something to do with the fact that the first entity manager is called by a delegate but the second from inside an EJB, and that's why one entity manager can see the non-existent user while the other cannot? Also, all the methods have TRANSACTIONTYPE.REQUIRED. Thank you in advance.
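
    Without seeing the transaction boundaries it is hard to say more, but one quick experiment (a sketch only, assuming the same injected EntityManager) is to make the existence check bypass the persistence context's first-level cache, so both classes are comparing against the committed database state:

        // Force the check to hit the database rather than a cached managed instance.
        entitymanager.flush();    // push any pending remove to the database
        entitymanager.clear();    // detach everything cached in this persistence context
        User found = entitymanager.find(User.class, userId);   // User and userId are placeholders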

    Read the article

  • CakePHP Multiple Nested Joins

    - by Paul
    I have an App in which several of the models are linked by hasMany/belongsTo associations. So for instance, A hasMany B, B hasMany C, C hasMany D, and D hasMany E. Also, E belongs to D, D belongs to C, C belongs to B, and B belongs to A. Using the Containable behavior has been great for controlling the amount of information comes back with each query, but I seem to be having a problem when trying to get data from table A while using a condition that involves table D. For instance, here is an example of my 'A' model: class A extends AppModel { var $name = 'A'; var $hasMany = array( 'B' => array('dependent' => true) ); function findDependentOnE($condition) { return $this->find('all', array( 'contain' => array( 'B' => array( 'C' => array( 'D' => array( 'E' => array( 'conditions' => array( 'E.myfield' => $some_value ) ) ) ) ) ) )); } } This still gives me back all the records in 'A', and if it's related 'E' records don't satisfy the condition, then I just get this: Array( [0] => array( [A] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [B] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [C] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [D] => array( [field1] => // stuff [field2] => // more stuff // ...etc ), [E] => array( // empty if 'E.myfield' != $some_value' ) ), [1] => array( // ...etc ) ) When If 'E.myfield' != $some_value, I don't want the record returned at all. I hope this expresses my problem clearly enough... Basically, I want the following query, but in a database-agnostic/CakePHP-y kind of way: SELECT * FROM A INNER JOIN (B INNER JOIN (C INNER JOIN (D INNER JOIN E ON D.id=E.d_id) ON C.id=D.c_id) ON B.id=C.b_id) ON A.id=B.a_id WHERE E.myfield = $some_value
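
    One hedged option, since Containable filters the contained records rather than the parent rows: build the inner joins explicitly with the 'joins' key of find(). The table names, aliases, and foreign keys below are guesses based on the model names in the question:

        $this->A->find('all', array(
            'joins' => array(
                array('table' => 'bs', 'alias' => 'B', 'type' => 'INNER',
                      'conditions' => array('B.a_id = A.id')),
                array('table' => 'cs', 'alias' => 'C', 'type' => 'INNER',
                      'conditions' => array('C.b_id = B.id')),
                array('table' => 'ds', 'alias' => 'D', 'type' => 'INNER',
                      'conditions' => array('D.c_id = C.id')),
                array('table' => 'es', 'alias' => 'E', 'type' => 'INNER',
                      'conditions' => array('E.d_id = D.id')),
            ),
            'conditions' => array('E.myfield' => $some_value),
        ));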

    Read the article

  • Problems using wxWidgets (wxMSW) within multiple DLL instances

    Preface I'm developing VST-plugins which are DLL-based software modules and loaded by VST-supporting host applications. To open a VST-plugin the host application loads the VST-DLL and calls an appropriate function of the plugin while providing a native window handle, which the plugin can use to draw its GUI. I managed to port my original VSTGUI code to the wxWidgets framework and now all my plugins run under wxMSW and wxMac, but under wxMSW I still have problems finding a correct way to open and close the plugins, and I am not sure if this is a wxMSW-only issue. Problem If I use any VST-host application I can open and close multiple instances of one of my VST-plugins without any problems. As soon as I open another of my VST-plugins besides my first VST-plugin and then close all instances of my first VST-plugin, the application crashes after a short amount of time within the wxEvtHandler::ProcessEvent function, telling me that the wxTheApp object isn't valid any longer during execution of wxTheApp->FilterEvent (see below). So it seems that the wxTheApp object was deleted after closing all instances of the first plugin and is no longer available for the second plugin. bool wxEvtHandler::ProcessEvent(wxEvent& event) { // allow the application to hook into event processing if ( wxTheApp ) { int rc = wxTheApp->FilterEvent(event); if ( rc != -1 ) { wxASSERT_MSG( rc == 1 || rc == 0, _T("unexpected wxApp::FilterEvent return value") ); return rc != 0; } //else: proceed normally } .... } Preconditions 1.) All my VST-plugins are dynamically linked against the C runtime and wxWidgets libraries. According to the wxWidgets forum, this seemed to be the best way to run multiple instances of the software side by side. 2.) The DllMain of each VST-plugin is defined as follows: // WXW #include "wx/app.h" #include "wx/defs.h" #include "wx/gdicmn.h" #include "wx/image.h" #ifdef __WXMSW__ #include <windows.h> #include "wx/msw/winundef.h" BOOL APIENTRY DllMain ( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { switch (ul_reason_for_call) { case DLL_PROCESS_ATTACH: { wxInitialize(); ::wxInitAllImageHandlers(); break; } case DLL_THREAD_ATTACH: break; case DLL_THREAD_DETACH: break; case DLL_PROCESS_DETACH: wxUninitialize(); break; } return TRUE; } #endif // __WXMSW__ class Application : public wxApp {}; IMPLEMENT_APP_NO_MAIN(Application) Question How can I prevent this behavior, or rather, how can I properly handle the wxTheApp object if I have multiple instances of different VST-plugins (DLL modules), which are dynamically linked against the C runtime and wxWidgets libraries? Best regards, Steffen

    Read the article

  • INSERT OR IGNORE in a trigger

    - by dan04
    I have a database (for tracking email statistics) that has grown to hundreds of megabytes, and I've been looking for ways to reduce it. It seems that the main reason for the large file size is that the same strings tend to be repeated in thousands of rows. To avoid this problem, I plan to create another table for a string pool, like so: CREATE TABLE AddressLookup ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Address TEXT UNIQUE ); CREATE TABLE EmailInfo ( MessageID INTEGER PRIMARY KEY AUTOINCREMENT, ToAddrRef INTEGER REFERENCES AddressLookup(ID), FromAddrRef INTEGER REFERENCES AddressLookup(ID) /* Additional columns omitted for brevity. */ ); And for convenience, a view to join these tables: CREATE VIEW EmailView AS SELECT MessageID, A1.Address AS ToAddr, A2.Address AS FromAddr FROM EmailInfo LEFT JOIN AddressLookup A1 ON (ToAddrRef = A1.ID) LEFT JOIN AddressLookup A2 ON (FromAddrRef = A2.ID); In order to be able to use this view as if it were a regular table, I've made some triggers: CREATE TRIGGER trg_id_EmailView INSTEAD OF DELETE ON EmailView BEGIN DELETE FROM EmailInfo WHERE MessageID = OLD.MessageID; END; CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView BEGIN INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.ToAddr); INSERT OR IGNORE INTO AddressLookup(Address) VALUES (NEW.FromAddr); INSERT INTO EmailInfo SELECT NEW.MessageID, A1.ID, A2.ID FROM AddressLookup A1, AddressLookup A2 WHERE A1.Address = NEW.ToAddr AND A2.Address = NEW.FromAddr; END; CREATE TRIGGER trg_iu_EmailView INSTEAD OF UPDATE ON EmailView BEGIN UPDATE EmailInfo SET MessageID = NEW.MessageID WHERE MessageID = OLD.MessageID; REPLACE INTO EmailView SELECT NEW.MessageID, NEW.ToAddr, NEW.FromAddr; END; The problem After: INSERT OR REPLACE INTO EmailView VALUES (1, '[email protected]', '[email protected]'); INSERT OR REPLACE INTO EmailView VALUES (2, '[email protected]', '[email protected]'); The updated rows contain: MessageID ToAddr FromAddr --------- ------ -------- 1 NULL [email protected] 2 [email protected] [email protected] There's a NULL that shouldn't be there. The corresponding cell in the EmailInfo table contains an orphaned ToAddrRef value. If you do the INSERTs one at a time, you'll see that Alice's ID in the AddressLookup table changes! It appears that this behavior is documented: An ON CONFLICT clause may be specified as part of an UPDATE or INSERT action within the body of the trigger. However if an ON CONFLICT clause is specified as part of the statement causing the trigger to fire, then conflict handling policy of the outer statement is used instead. So the "REPLACE" in the top-level "INSERT OR REPLACE" statement is overriding the critical "INSERT OR IGNORE" in the trigger program. Is there a way I can make it work the way that I wanted?
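
    Given that the outer statement's conflict policy overrides any ON CONFLICT clause inside the trigger, one workaround to consider is not relying on a conflict at all: guard the address inserts with NOT EXISTS so there is never a violation for REPLACE to "resolve". A sketch of the rewritten INSERT trigger:

        CREATE TRIGGER trg_ii_EmailView INSTEAD OF INSERT ON EmailView
        BEGIN
            INSERT INTO AddressLookup(Address)
                SELECT NEW.ToAddr
                WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.ToAddr);
            INSERT INTO AddressLookup(Address)
                SELECT NEW.FromAddr
                WHERE NOT EXISTS (SELECT 1 FROM AddressLookup WHERE Address = NEW.FromAddr);
            INSERT INTO EmailInfo
                SELECT NEW.MessageID,
                       (SELECT ID FROM AddressLookup WHERE Address = NEW.ToAddr),
                       (SELECT ID FROM AddressLookup WHERE Address = NEW.FromAddr);
        END;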

    Read the article

  • Perl newbie: getting a proper minimal debug_mode solution

    - by Michael Mao
    Hi all: I am learning PERL in a "head-first" manner. I am absolutely a newbie in this language: I am trying to have a debug_mode switch from CLI which can be used to control how my script works, by switching certain subroutines "on and off". And below is what I've got so far: #!/usr/bin/perl -s -w # purpose : make subroutine execution optional, # which is depending on a CLI switch flag use strict; use warnings; use constant DEBUG_VERBOSE => "v"; use constant DEBUG_SUPPRESS_ERROR_MSGS => "s"; use constant DEBUG_IGNORE_VALIDATION => "i"; use constant DEBUG_SETPPING_COMPUTATION => "c"; our ($debug_mode); mainMethod(); sub mainMethod # () { if(!$debug_mode) { print "debug_mode is OFF\n"; } elsif($debug_mode) { print "debug_mode is ON\n"; } else { print "OMG!\n"; exit -1; } checkArgv(); printErrorMsg("Error_Code_123", "Parsing Error at..."); verbose(); } sub checkArgv #() { print ("Number of ARGV : ".(1 + $#ARGV)."\n"); } sub printErrorMsg # ($error_code, $error_msg, ..) { if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS)) { print "You can only see me if -debug_mode is NOT set". " to DEBUG_SUPPRESS_ERROR_MSGS\n"; die("terminated prematurely...\n") and exit -1; } } sub verbose # () { if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE)) { print "Blah blah blah...\n"; } } So far as I can tell, at least it works...: the -debug_mode switch doesn't interfere with normal ARGV the following commandlines work: ./optional.pl ./optional.pl -debug_mode ./optional.pl -debug_mode=v ./optional.pl -debug_mode=s However, I am puzzled when multiple debug_modes are "mixed", such as: ./optional.pl -debug_mode=sv ./optional.pl -debug_mode=vs I don't understand why the above lines of code "magically works". I see both of the "DEBUG_VERBOS" and "DEBUG_SUPPRESS_ERROR_MSGS" apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the "precedence of debue_modes"? Also, I am not certain if my approach is good enough to Perlists and I hope I am getting my feet in the right direction. One biggest problem is that I now put if statements inside most of my subroutines for controlling their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module from CPAN or elsewhere, but I wanna a real minimal solution that doesn't depend on any other module than the "default" And I cannot have any control on the environment where this script will be executed... Many thanks to the suggestions in advance.
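
    Not a direct answer on precedence, but for comparison, a minimal sketch of the same switches using the core Getopt::Long module, where each debug facet becomes its own boolean flag instead of a letter packed into one string (the option names here are made up):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Getopt::Long;

        my %debug = (verbose => 0, suppress_errors => 0, ignore_validation => 0, skip_computation => 0);
        GetOptions(
            'verbose'           => \$debug{verbose},
            'suppress-errors'   => \$debug{suppress_errors},
            'ignore-validation' => \$debug{ignore_validation},
            'skip-computation'  => \$debug{skip_computation},
        ) or die "usage: $0 [--verbose] [--suppress-errors] [--ignore-validation] [--skip-computation]\n";

        # no precedence question left: each mode is an independent flag
        print "Blah blah blah...\n" if $debug{verbose};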

    Read the article

  • .htaccess mod_rewrite issue

    - by Orhan Toy
    Almost in any project I work on, some issues with .htaccess occur. I usually just find the easiest solution and leave it because I don't have any knowledge or understanding for Apache, servers etc. But this time I thought I would ask you guys. This is the files and folders in my (simplified) setup: /modrewrite-test .htaccess /config /inc /lib /public_html .htaccess /cms /navigation index.php edit.php /pages index.php edit.php login.php page.php The "config", "inc" and "lib" folders are meant to be "hidden" from the root of the website. I try to accomplish this by making a .htaccess-file in the root that redirects the user to "public_html". The .htacess-file contains this: RewriteEngine On RewriteRule (.*) public_html/$1 This works perfect. If I type "http://localhost/modrewrite-test/login.php" in my browser, I end up in public_html/login.php which is my intention. So this works fine. The .htaccess-file in "public_html" contains this: RewriteEngine On # Root RewriteRule ^$ page.php [L] # Login RewriteRule ^(admin)|(login)\/?$ login.php [L] # Page (if not a file/directory) RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ page.php?url=$1 [L] The first rewrite just redirects me to public_html/page.php if I try to reach "http://localhost/modrewrite-test/". The next rewrite is just for the convenience of users trying to log in - so if they try to reach "http://localhost/modrewrite-test/admin" or "http://localhost/modrewrite-test/login" they will end up at the login.php-file. The third and last rewrite handles the rest of the requests. If I try to reach "http://localhost/modrewrite-test/bla/bla/bla" it will just redirect me to public_html/page.php (with the 'url' GET-variable set) instead of finding a folder called "la", containing a folder named "bla" and etc. All of these things work perfect but a minor issues occurs when I for instance try to reach "http://localhost/modrewrite-test/cms/navigation" without a slash at the end of the URL. When I try to reach that page the browser is somehow redirected to "http://localhost/modrewrite-test/public_html/cms/navigation/". The correct page is shown but why does it get redirected and add the "public_html" part in the URL? The desired behavior is that the URL stays intact and that the page public_html/cms/navigation/index.php is shown. The files and folders in the (simplified) can be found at http://highbars.com/modrewrite-test.zip
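
    The visible public_html redirect is most likely mod_dir adding the missing trailing slash after the rewrite has already happened, so the redirect target is the rewritten filesystem URL. One hedged fix is to issue the trailing-slash redirect yourself, with the public URL, before rewriting into public_html; the hard-coded /modrewrite-test/ base below is an assumption taken from the setup described:

        RewriteEngine On

        # If the slashless URL maps to a real directory inside public_html,
        # redirect to the same *public* URL with a trailing slash first.
        RewriteCond %{DOCUMENT_ROOT}/modrewrite-test/public_html/$1 -d
        RewriteRule ^(.*[^/])$ /modrewrite-test/$1/ [R=301,L]

        RewriteRule (.*) public_html/$1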

    Read the article

  • May volatile be used in user-defined types to help write thread-safe code?

    - by David Rodríguez - dribeas
    I know, it has been made quite clear in a couple of questions/answers before, that volatile is related to the visible state of the c++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile time check to force the compiler into failing to accept code that could be not thread safe. In the article the keyword is used more like a required_thread_safety tag than the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but by lack of a better tool I could accept it. Basic simplification of the article: If you declare a variable volatile, only volatile member methods can be called on it, so the compiler will block calling code to other methods. Declaring an std::vector instance as volatile will block all uses of the class. Adding a wrapper in the shape of a locking pointer that performs a const_cast to release the volatile requirement, any access through the locking pointer will be allowed. Stealing from the article: template <typename T> class LockingPtr { public: // Constructors/destructors LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } // Pointer behavior T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { // ... use *i ... } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; // controls access to buffer_ };
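
    For what it's worth, a sketch of the other side of the pattern: because buffer_ is declared volatile, calling a non-volatile std::vector member on it directly will not compile, so every access is forced through LockingPtr, which both takes the mutex and const_casts the volatile away:

        void SyncBuf::Thread2() {
            LockingPtr<BufT> lpBuf(buffer_, mtx_);   // locks mtx_ for this scope
            lpBuf->push_back('x');                   // operates on the plain (non-volatile) BufT
        }                                            // mutex released in ~LockingPtr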

    Read the article

  • What is NSString in struct?

    - by 4thSpace
    I've defined a struct and want to assign one of its values to a NSMutableDictionary. When I try, I get a EXC_BAD_ACCESS. Here is the code: //in .h file typedef struct { NSString *valueOne; NSString *valueTwo; } myStruct; myStruct aStruct; //in .m file - (void)viewDidLoad { [super viewDidLoad]; aStruct.valueOne = @"firstValue"; } //at some later time [myDictionary setValue:aStruct.valueOne forKey:@"key1"]; //dies here with EXC_BAD_ACCESS This is the output in debugger console: (gdb) p aStruct.valueOne $1 = (NSString *) 0xf41850 Is there a way to tell what the value of aStruct.valueOne is? Since it is an NSString, why does the dictionary have such a problem with it? ------------- EDIT ------------- This edit is based on some comments below. The problem appears to be in the struct memory allocation. I have no issues assigning the struct value to the dictionary in viewDidLoad, as mentioned in one of the comments. The problem is that later on, I run into an issue with the struct. Just before the error, I do: po aStruct.oneValue Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_PROTECTION_FAILURE at address: 0x00000000 0x9895cedb in objc_msgSend () The program being debugged was signaled while in a function called from GDB. GDB has restored the context to what it was before the call. To change this behavior use "set unwindonsignal off" Evaluation of the expression containing the function (_NSPrintForDebugger) will be abandoned. This occurs just before the EXC_BAD_ACCESS: NSDateFormatter *formatter = [[NSDateFormatter alloc] init]; [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"]; NSString *date = [formatter stringFromDate:[NSDate date]]; [formatter release]; aStruct.valueOne = date; So the memory issue is most likely in my releasing of formatter. The date var has no retain. Should I instead be doing NSString *date = [[formatter stringFromDate:[NSDate date]] retain]; Which does work but then I'm left with a memory leak.
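
    If that edit is accurate, the likely issue is simply that the struct field is a raw pointer to an autoreleased string, so it dies with the autorelease pool. A hedged sketch under manual retain/release (as in the question):

        NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
        [formatter setDateFormat:@"MM-dd-yy_HH-mm-ss-A"];

        // copy (or retain) so the string outlives both the formatter and the autorelease pool
        aStruct.valueOne = [[formatter stringFromDate:[NSDate date]] copy];
        [formatter release];

        // ... later, when the struct is done with the value:
        [aStruct.valueOne release];
        aStruct.valueOne = nil;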

    Read the article

  • Issues in Ada Concurrency

    - by Arkapravo
    Hi I need some help and also some insight. This is a program in Ada-2005 which has 3 tasks. The output is 'z'. If the 3 tasks do not happen in the order of their placement in the program then output can vary from z = 2, z = 1 to z = 0 ( That is easy to see in the program, mutual exclusion is attempted to make sure output is z = 2). WITH Ada.Text_IO; USE Ada.Text_IO; WITH Ada.Integer_Text_IO; USE Ada.Integer_Text_IO; WITH System; USE System; procedure xyz is x : Integer := 0; y : Integer := 0; z : Integer := 0; task task1 is pragma Priority(System.Default_Priority + 3); end task1; task task2 is pragma Priority(System.Default_Priority + 2); end task2; task task3 is pragma Priority(System.Default_Priority + 1); end task3; task body task1 is begin x := x + 1; end task1; task body task2 is begin y := x + y; end task2; task body task3 is begin z := x + y + z; end task3; begin Put(" z = "); Put(z); end xyz; I first tried this program (a) without pragmas, the result : In 100 tries, occurence of 2: 86, occurence of 1: 10, occurence of 0: 4. Then (b) with pragmas, the result : In 100 tries, occurence of 2: 84, occurence of 1 : 14, occurence of 0: 2. Which is unexpected as the 2 results are nearly identical. Which means pragmas or no pragmas the output has same behavior. Those who are Ada concurrency Gurus please shed some light on this topic. Alternative solutions with semaphores (if possible) is also invited. Further in my opinion for a critical process (that is what we do with Ada), with pragmas the result should be z = 2, 100% at all times, hence or otherwise this program should be termed as 85% critical !!!! (That should not be so with Ada)
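
    As a point of comparison, priorities never guarantee ordering, so the run-to-run variation is expected. A sketch of forcing the order explicitly with rendezvous, using the x, y, z and Put from the program above (entries named Done are just one illustration; protected objects would also work):

        task task1 is
           entry Done;
        end task1;
        task task2 is
           entry Done;
        end task2;
        task task3 is
           entry Done;
        end task3;

        task body task1 is
        begin
           x := x + 1;
           accept Done;
        end task1;

        task body task2 is
        begin
           task1.Done;      -- blocks until task1 has updated x
           y := x + y;
           accept Done;
        end task2;

        task body task3 is
        begin
           task2.Done;      -- blocks until task2 has updated y
           z := x + y + z;
           accept Done;
        end task3;

        begin
           task3.Done;      -- blocks until task3 has updated z
           Put (" z = ");
           Put (z);
        end xyz;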

    Read the article

  • .NET WinForms INotifyPropertyChanged updates all bindings when one is changed. Better way?

    - by Dave Welling
    In a windows forms application, a property change that triggers INotifyPropertyChanged, will result in the form reading EVERY property from my bound object, not just the property changed. (See example code below) This seems absurdly wasteful since the interface requires the name of the changing property. It is causing a lot of clocking in my app because some of the property getters require calculations to be performed. I'll likely need to implement some sort of logic in my getters to discard the unnecessary reads if there is no better way to do this. Am I missing something? Is there a better way? Don't say to use a different presentation technology please -- I am doing this on Windows Mobile (although the behavior happens on the full framework as well). Here's some toy code to demonstrate the problem. Clicking the button will result in BOTH textboxes being populated even though one property has changed. using System; using System.ComponentModel; using System.Drawing; using System.Windows.Forms; namespace Example { public class ExView : Form { private Presenter _presenter = new Presenter(); public ExView() { this.MinimizeBox = false; TextBox txt1 = new TextBox(); txt1.Parent = this; txt1.Location = new Point(1, 1); txt1.Width = this.ClientSize.Width - 10; txt1.DataBindings.Add("Text", _presenter, "SomeText1"); TextBox txt2 = new TextBox(); txt2.Parent = this; txt2.Location = new Point(1, 40); txt2.Width = this.ClientSize.Width - 10; txt2.DataBindings.Add("Text", _presenter, "SomeText2"); Button but = new Button(); but.Parent = this; but.Location = new Point(1, 80); but.Click +=new EventHandler(but_Click); } void but_Click(object sender, EventArgs e) { _presenter.SomeText1 = "some text 1"; } } public class Presenter : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; private string _SomeText1 = string.Empty; public string SomeText1 { get { return _SomeText1; } set { _SomeText1 = value; _SomeText2 = value; // <-- To demonstrate that both properties are read OnPropertyChanged("SomeText1"); } } private string _SomeText2 = string.Empty; public string SomeText2 { get { return _SomeText2; } set { _SomeText2 = value; OnPropertyChanged("SomeText2"); } } private void OnPropertyChanged(string PropertyName) { PropertyChangedEventHandler temp = PropertyChanged; if (temp != null) { temp(this, new PropertyChangedEventArgs(PropertyName)); } } } }

    Read the article

  • Creating a column of RadioButtons in Adobe Flex

    - by adnan
    I am having an un-predictable behavior of creating radio buttons in advancedDataGrid column using itemRenderer. Similar kind of problem has been reported at http://stackoverflow.com/questions/112036/creating-a-column-of-radiobuttons-in-adobe-flex. I tried to use the same procedure i.e. bind every radio button selectedValue and value attributes with the property specified in the property of the associated bean but still facing the problem. The button change values! The selected button becomes deselected, and unselected ones become selected. Here is the code of my advancedDataGrid: <mx:AdvancedDataGrid id="adDataGrid_rptdata" width="100%" height="100%" dragEnabled="false" sortableColumns="false" treeColumn="{action}" liveScrolling="false" displayItemsExpanded="true" > <mx:dataProvider> <mx:HierarchicalData source="{this.competenceCollection}" childrenField="competenceCriteria"/> </mx:dataProvider> <mx:columns> <mx:AdvancedDataGridColumn headerText="" id="action" dataField="criteriaName" /> <mx:AdvancedDataGridColumn headerText="Periode 1" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period1" value="{data}" selected="{data.period1}" group="{data.radioBtnGrpArray[0]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> <mx:AdvancedDataGridColumn headerText="Periode 2" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period2" value="{data}" selected="{data.period2}" group="{data.radioBtnGrpArray[1]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> <mx:AdvancedDataGridColumn headerText="Periode 3" dataField="" width="200"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="center" width="100%" verticalAlign="middle"> <mx:RadioButton name="period3" value="{data}" selected="{data.period3}" group="{data.radioBtnGrpArray[2]}" visible="{data.showRadioButton}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:AdvancedDataGridColumn> </mx:columns> </mx:AdvancedDataGrid> Any work around is highly appreciated in this regard!

    Read the article

  • Gap appears between navigation bar and view after rotating & tab switching

    - by Bogatyr
    My iphone application is showing strange behavior when rotating: a gap appears between the navigation title and content view inside a tab bar view (details on how to reproduce are below). I've created a tiny test case that exhibits the same problem: a custom root UIViewController, which creates and displays a UITabBarController programmatically, which has two tabs: 1) plain UIViewController, and 2) UINavigationController created programmatically with a single plain UIViewController content view. The complete code for the application is in the root controller's viewDidLoad (every "*VC" class is a totally vanilla UIViewController subclass with XIB for user interface from XCode, with only the view background color changed to clearly identify each view, nothing else). Here's the viewDidLoad code, and the shouldAutorotateToInterfaceOrientation code, this code is the entire application basically: - (void)viewDidLoad { [super viewDidLoad]; FirstVC *fvc = [[FirstVC alloc] initWithNibName:@"FirstVC" bundle:nil]; NavContentsVC *ncvc = [[NavContentsVC alloc] initWithNibName:@"NavContentsVC" bundle:nil]; UINavigationController *svc = [[UINavigationController alloc] initWithRootViewController:ncvc]; NSMutableArray *localControllersArray = [[NSMutableArray alloc] initWithCapacity:2]; [localControllersArray addObject:fvc]; [localControllersArray addObject:svc]; fvc.title = @"FirstVC-Title"; ncvc.title = @"NavContents-Title"; UITabBarController *tbc = [[UITabBarController alloc] init]; tbc.view.frame = CGRectMake(0, 0, 320, 460); [tbc setViewControllers:localControllersArray]; [self.view addSubview:tbc.view]; [localControllersArray release]; [ncvc release]; [svc release]; [fvc release]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } Here's how to reproduce the problem: 1) start application 2) rotate device (happens in simulator, too) to landscape (UITabBar properly rotates) 3) click on tab 2 4) rotate device to portrait -- notice gap of root view controller's background color of about 10 pixels high beneath the Navigation title bar and the Navigation content view. 5) click tab 1 6) click tab 2 And the gap is gone! From my real application, I see that the gap remains during all VC push and pops while the NavigationController tab is active. Switching away to a different tab and back to the Nav tab clears up the gap. What am I doing wrong? I'm running on SDK 3.1.3, this happens both on the simulator and on the device. Except for this particular sequence, everything seems to work fine. Help!

    Read the article

  • Calling an Excel Add-In method from C# application or vice versa

    - by Jude
    I have an Excel VBA add-in with a public method in a bas file. This method currently creates a VB6 COM object, which exists in a running VB6 exe/vbp. The VB6 app loads in data and then the Excel add-in method can call methods on the VB6 COM object to load the data into an existing Excel xls. This is all currently working. We have since converted our VB6 app to C#. My question is: What is the best/easiest way to mimic this behavior with the C#/.NET app? I'm thinking I may not be able to pull the data from the .NET app into Excel from the add-in method since the .Net app needs to be running with data loaded (so no using a stand-alone C# class library). Maybe we can, instead, push the data from .NET to Excel by accessing the VBA add-in method from the C# code? The following is the existing VBA method accessing the VB6 app: Public Sub UpdateInDataFromApp() Dim wkbInData As Workbook Dim oFPW As Object Dim nMaxCols As Integer Dim nMaxRows As Integer Dim j As Integer Dim sName As String Dim nCol As Integer Dim nRow As Integer Dim sheetCnt As Integer Dim nDepth As Integer Dim sPath As String Dim vData As Variant Dim SheetRange As Range Set wkbInData = wkbOpen("InData.xls") sPath = g_sPathXLSfiles & "\" 'Note: the following will bring up fpw app if not already running Set oFPW = CreateObject("FPW.CProfilesData") If oFPW Is Nothing Then MsgBox "Unable to reference " & sApp Else . . . sheetCnt = wkbInData.Sheets.Count 'get number of sheets in indata workbook For j = 2 To sheetCnt 'set counter to loop over all sheets except the first one which is not input data fields With wkbInData.Worksheets(j) Set SheetRange = .UsedRange End With With SheetRange nMaxRows = .Rows.Count 'get range of sheet(j) nMaxCols = .Columns.Count 'get range of sheet(j) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearContents 'Clears data from data range (51 Columns) Range(.Cells(2, 2), .Cells(nMaxRows, nMaxCols)).ClearComments End With With oFPW 'vb6 object For nRow = 2 To nMaxRows ' loop through rows sName = SheetRange.Cells(nRow, 1) 'Field name vData = .vntGetSymbol(sName, 0) 'Check if vb6 app identifies the name nDepth = .GetInputTableDepth(sName) 'Get number of data items for this field name from vb6 app nMaxCols = nDepth + 2 'nDepth=0, is single data item For nCol = 2 To nMaxCols 'loop over deep screen fields nDepth = nCol - 2 'current depth vData = .vntGetSymbol(sName, nDepth) 'Get Data from vb6 app If LenB(vData) > 0 And IsNumeric(vData) Then 'Check if data returned SheetRange.Cells(nRow, nCol) = vData 'Poke the data in Else SheetRange.Cells(nRow, nCol) = vData 'Poke a zero in End If Next 'nCol Next 'nRow End With Set SheetRange = Nothing Next 'j End If Set wkbInData = Nothing Set oFPW = Nothing Exit Sub . . . End Sub Any help would be appreciated.

    Read the article

  • boost multi_index_container and erase performance

    - by rjoshi
    I have a boost multi_index_container declared as below, indexed by a hashed_unique id (unsigned long) and a hashed_non_unique transaction id (long). Insertion and retrieval of elements is fast, but when I delete elements it is much slower. I was expecting it to be constant time since the key is hashed. E.g., to erase elements from the container: for 10,000 elements, it takes around 2.53927016 seconds; for 15,000 elements, around 7.137100068 seconds; for 20,000 elements, around 21.391720757 seconds. Is it something I am missing or is it expected behavior? class Session { public: Session() { //increment unique id static unsigned long counter = 0; boost::mutex::scoped_lock guard(mx); counter++; m_nId = counter; } unsigned long GetId() { return m_nId; } long GetTransactionHandle(){ return m_nTransactionHandle; } .... private: unsigned long m_nId; long m_nTransactionHandle; boost::mutex mx; .... }; typedef multi_index_container< Session*, indexed_by< hashed_unique< mem_fun<Session,unsigned long,&Session::GetId> >, hashed_non_unique< mem_fun<Session,long,&Session::GetTransactionHandle> > > //end indexed_by > SessionContainer; typedef SessionContainer::nth_index<0>::type SessionById; int main() { ... SessionContainer container; SessionById *pSessionIdView = &get<0>(container); unsigned counter = atoi(argv[1]); vector<Session*> vSes; vSes.reserve(counter); //insert for(unsigned i = 0; i < counter; i++) { Session *pSes = new Session(); container.insert(pSes); vSes.push_back(pSes); } timespec ts; clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); //erase for(unsigned i = 0; i < counter; i++) { pSessionIdView->erase(vSes[i]->GetId()); delete vSes[i]; } clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts); std::cout << "Total time taken for erase:" << ts.tv_sec << "." << ts.tv_nsec << "\n"; return EXIT_SUCCESS; }
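
    Hard to be sure without profiling, but two things stand out: m_nTransactionHandle is never initialized, and if many sessions end up with the same transaction handle, maintenance of the hashed_non_unique index can degrade into long chains of equivalent keys. Also, erase-by-key has to hash and search again; a cheap experiment is erasing through an iterator instead (same GetId as above):

        // erase by iterator: one lookup, then unlink from both indices
        SessionById::iterator it = pSessionIdView->find(vSes[i]->GetId());
        if (it != pSessionIdView->end())
            pSessionIdView->erase(it);
        delete vSes[i];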

    Read the article

  • DatagridView loses current edit on Background update

    - by yoni.s
    Here's my problem : I have a DataGridView bound to a BindingList of custom objects. A background thread is constantly updating a value of these objects. The udpates are showing correctly, and everything is fine except for one thing - If you try to edit a different field while the background-updated field is being updated, it loses the entered value. Here is a code sample that demonstrates this behavior: (for new form, drop a new DataGridView on:) using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Threading; namespace WindowsFormsApplication2 { public partial class Form1 : Form { private BindingList<foo> flist; private Thread thrd; private BindingSource b; public Form1() { InitializeComponent(); flist = new BindingList<foo> { new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1}, new foo(){a =1,b = 1, c=1} }; b = new BindingSource(); b.DataSource = flist; dataGridView1.DataSource = b; thrd = new Thread(new ThreadStart(updPRoc)); thrd.Start(); } private void upd() { flist.ToList().ForEach(f=>f.c++); } private void updPRoc() { while (true) { this.BeginInvoke(new MethodInvoker(upd)); Thread.Sleep(1000); } } } public class foo:INotifyPropertyChanged { private int _c; public int a { get; set; } public int b { get; set; } public int c { get {return _c;} set { _c = value; if (PropertyChanged!= null) PropertyChanged(this,new PropertyChangedEventArgs("c")); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; #endregion } } So, you edit column a or b, you will see that the column c update causes you to lose your entry. Any thoughts appreciated.
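
    One pragmatic workaround to consider (a sketch, not a full fix): skip or defer the background push while the grid has a cell open for editing, so the pending edit is not thrown away by the list-changed refresh:

        private void upd()
        {
            // don't refresh the binding while the user is typing into a cell
            if (dataGridView1.IsCurrentCellInEditMode)
                return;

            flist.ToList().ForEach(f => f.c++);
        }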

    Read the article

  • How does NameScope in WPF work?

    - by Nicolas Dorier
    I'm having a strange behavior with NameScopes in WPF, I have created a CustomControl called FadingPopup which is a child class of Window with nothing special inside. <Window.Resources> <local:FadingPopup> <Button Name="prec" Content="ahah"></Button> <Button Content="{Binding ElementName=prec, Path=Content}"></Button> </local:FadingPopup> </Window.Resources> In this snippet, the binding doesn't work (always empty). If I move these buttons from the resources to the content of the window like this : <Window ...> <Button Name="prec" Content="ahah"></Button> <Button Content="{Binding ElementName=prec, Path=Content}"></Button> </Window> The binding works as expected. Now, I have tried a mix between these two snippets : <Window...> <Window.Resources> <local:FadingPopup> <Button Name="prec" Content="Haha"></Button> </local:FadingPopup> </Window.Resources> <Button Content="{Binding ElementName=prec, Path=Content}"></Button> </Window> It works as well. Apparently, if the button prec is in the resources it registers itself in the NameScope of the Window. BUT, it seems that the Binding tries to resolve ElementName with the NameScope of the FadingPopup (which is null), thus the binding doesn't work... My first snipped works well if I specify a NameScope in my class FadingPopup : static FadingPopup() { NameScope.NameScopeProperty.OverrideMetadata(typeof(FadingPopup), new PropertyMetadata(new NameScope())); } But I don't like this solution because I don't understand why, in the first snippet, prec is registered in the NameScope of Window, but ElementName is resolved with the NameScope of FadingGroup (which is null by default)... Does someone can explain to me what is going on ? Why my first snippet doesn't work, if I don't specify a default NameScope for FadingGroup ?

    Read the article

  • SBT run differences between scala and java?

    - by Eric Cartner
    I'm trying to follow the log4j2 configuration tutorials in a SBT 0.12.1 project. Here is my build.sbt: name := "Logging Test" version := "0.0" scalaVersion := "2.9.2" libraryDependencies ++= Seq( "org.apache.logging.log4j" % "log4j-api" % "2.0-beta3", "org.apache.logging.log4j" % "log4j-core" % "2.0-beta3" ) When I run the main() defined in src/main/scala/logtest/Foo.scala: package logtest import org.apache.logging.log4j.{Logger, LogManager} object Foo { private val logger = LogManager.getLogger(getClass()) def main(args: Array[String]) { logger.trace("Entering application.") val bar = new Bar() if (!bar.doIt()) logger.error("Didn't do it.") logger.trace("Exiting application.") } } I get the output I was expecting given that src/main/resources/log4j2.xml sets the root logging level to trace: [info] Running logtest.Foo 08:39:55.627 [run-main] TRACE logtest.Foo$ - Entering application. 08:39:55.630 [run-main] TRACE logtest.Bar - entry 08:39:55.630 [run-main] ERROR logtest.Bar - Did it again! 08:39:55.630 [run-main] TRACE logtest.Bar - exit with (false) 08:39:55.630 [run-main] ERROR logtest.Foo$ - Didn't do it. 08:39:55.630 [run-main] TRACE logtest.Foo$ - Exiting application. However, when I run the main() defined in src/main/java/logtest/LoggerTest.java: package logtest; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.LogManager; public class LoggerTest { private static Logger logger = LogManager.getLogger(LoggerTest.class.getName()); public static void main(String[] args) { logger.trace("Entering application."); Bar bar = new Bar(); if (!bar.doIt()) logger.error("Didn't do it."); logger.trace("Exiting application."); } } I get the output: [info] Running logtest.LoggerTest ERROR StatusLogger Unable to locate a logging implementation, using SimpleLogger ERROR Bar Did it again! ERROR LoggerTest Didn't do it. From what I can tell, ERROR StatusLogger Unable to ... is usually a sign that log4j-core is not on my classpath. The lack of TRACE messages seems to indicate that my log4j2.xml settings aren't on the classpath either. Why should there be any difference in classpath if I'm running Foo.main versus LoggerTest.main? Or is there something else causing this behavior? Update I used SBT Assembly to build a fat jar of this project and specified logtest.LoggerTest to be the main class. Running it from the command line produced correct results: Eric-Cartners-iMac:target ecartner$ java -jar "Logging Test-assembly-0.0.jar" 10:52:23.220 [main] TRACE logtest.LoggerTest - Entering application. 10:52:23.221 [main] TRACE logtest.Bar - entry 10:52:23.221 [main] ERROR logtest.Bar - Did it again! 10:52:23.221 [main] TRACE logtest.Bar - exit with (false) 10:52:23.221 [main] ERROR logtest.LoggerTest - Didn't do it. 10:52:23.221 [main] TRACE logtest.LoggerTest - Exiting application.
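
    One thing worth trying (an assumption about the cause, not a confirmed fix): run the program in a forked JVM so both mains see an ordinary application classpath instead of sbt's in-process class loaders, which log4j2's plugin/StatusLogger lookup can trip over. In build.sbt for sbt 0.12:

        // fork the run task into a separate JVM with a flat classpath
        fork in run := true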

    Read the article

  • How to properly test Hibernate length restriction?

    - by Cesar
    I have a POJO mapped with Hibernate for persistence. In my mapping I specify the following: <class name="ExpertiseArea"> <id name="id" type="string"> <generator class="assigned" /> </id> <version name="version" column="VERSION" unsaved-value="null" /> <property name="name" type="string" unique="true" not-null="true" length="100" /> ... </class> And I want to test that if I set a name longer than 100 characters, the change won't be persisted. I have a DAO where I save the entity with the following code: public T makePersistent(T entity){ transaction = getSession().beginTransaction(); transaction.begin(); try{ getSession().saveOrUpdate(entity); transaction.commit(); }catch(HibernateException e){ logger.debug(e.getMessage()); transaction.rollback(); } return entity; } Actually the code above is from a GenericDAO which all my DAOs inherit from. Then I created the following test: public void testNameLengthMustBe100orLess(){ ExpertiseArea ea = new ExpertiseArea( "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890" + "1234567890"); assertTrue("Name should be 100 characters long", ea.getName().length() == 100); ead.makePersistent(ea); List<ExpertiseArea> result = ead.findAll(); assertEquals("Size must be 1", result.size(),1); ea.setName(ea.getName()+"1234567890"); ead.makePersistent(ea); ExpertiseArea retrieved = ead.findById(ea.getId(), false); assertTrue("Both objects should be equal", retrieved.equals(ea)); assertTrue("Name should be 100 characters long", (retrieved.getName().length() == 100)); } The object is persisted ok. Then I set a name longer than 100 characters and try to save the changes, which fails: 14:12:14,608 INFO StringType:162 - could not bind value '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890' to parameter: 2; data exception: string data, right truncation 14:12:14,611 WARN JDBCExceptionReporter:100 - SQL Error: -3401, SQLState: 22001 14:12:14,611 ERROR JDBCExceptionReporter:101 - data exception: string data, right truncation 14:12:14,614 ERROR AbstractFlushingEventListener:324 - Could not synchronize database state with session org.hibernate.exception.DataException: could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] //Stack trace 14:12:14,615 DEBUG GenericDAO:87 - could not update: [com.exp.model.ExpertiseArea#33BA7E09-3A79-4C9D-888B-4263314076AF] 14:12:14,616 DEBUG JDBCTransaction:186 - rollback 14:12:14,616 DEBUG JDBCTransaction:197 - rolled back JDBC Connection That's expected behavior. However when I retrieve the persisted object to check if its name is still 100 characters long, the test fails. The way I see it, the retrieved object should have a name that is 100 characters long, given that the attempted update failed. The last assertion fails because the name is 110 characters long now, as if the ea instance was indeed updated. What am I doing wrong here?
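
    A hedged observation: after the rollback, the in-memory ea instance still carries the 110-character name, and findById is likely returning that same first-level-cache instance, so the final assertions are testing the unsaved object rather than the row. A sketch of reloading the database state before asserting (this assumes the DAO exposes its Session, which it may not):

        // discard the in-memory change and re-read the row that was actually persisted
        ead.getSession().refresh(ea);
        assertTrue("Name should be 100 characters long", ea.getName().length() == 100);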

    Read the article
