Search Results

Search found 9353 results on 375 pages for 'implementation phase'.


  • Constructor versus setter injection

    - by Chris
    Hi, I'm currently designing an API where I wish to allow configuration via a variety of methods. One method is via an XML configuration schema and another method is through an API that I wish to play nicely with Spring. My XML schema parsing code was previously hidden and therefore the only concern was for it to work but now I wish to build a public API and I'm quite concerned about best-practice. It seems that many favor javabean type PoJo's with default zero parameter constructors and then setter injection. The problem I am trying to tackle is that some setter methods implementations are dependent on other setter methods being called before them in sequence. I could write anal setters that will tolerate themselves being called in many orders but that will not solve the problem of a user forgetting to set the appropriate setter and therefore the bean being in an incomplete state. The only solution I can think of is to forget about the objects being 'beans' and enforce the required parameters via constructor injection. An example of this is in the default setting of the id of a component based on the id of the parent components. My Interface public interface IMyIdentityInterface { public String getId(); /* A null value should create a unique meaningful default */ public void setId(String id); public IMyIdentityInterface getParent(); public void setParent(IMyIdentityInterface parent); } Base Implementation of interface: public abstract class MyIdentityBaseClass implements IMyIdentityInterface { private String _id; private IMyIdentityInterface _parent; public MyIdentityBaseClass () {} @Override public String getId() { return _id; } /** * If the id is null, then use the id of the parent component * appended with a lower-cased simple name of the current impl * class along with a counter suffix to enforce uniqueness */ @Override public void setId(String id) { if (id == null) { IMyIdentityInterface parent = getParent(); if (parent == null) { // this may be the top level component or it may be that // the user called setId() before setParent(..) } else { _id = Helpers.makeIdFromParent(parent,getClass()); } } else { _id = id; } } @Override public IMyIdentityInterface getParent() { return _parent; } @Override public void setParent(IMyIdentityInterface parent) { _parent = parent; } } Every component in the framework will have a parent except for the top level component. Using the setter type of injection, then the setters will have different behavior based on the order of the calling of the setters. In this case, would you agree, that a constructor taking a reference to the parent is better and dropping the parent setter method from the interface entirely? Is it considered bad practice if I wish to be able to configure these components using an IoC container? Chris
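
    A minimal sketch of the constructor-injection variant described above, assuming setParent(..) is dropped from IMyIdentityInterface and a top-level component passes null; the parent is fixed at construction time, so setId() no longer depends on call order, and an IoC container such as Spring can still wire it through constructor arguments:

        public abstract class MyIdentityBaseClass implements IMyIdentityInterface {
            private final IMyIdentityInterface _parent; // immutable once constructed
            private String _id;

            /** Parent is supplied up front; null is reserved for the top-level component. */
            protected MyIdentityBaseClass(IMyIdentityInterface parent) {
                _parent = parent;
            }

            @Override
            public IMyIdentityInterface getParent() { return _parent; }

            @Override
            public String getId() { return _id; }

            @Override
            public void setId(String id) {
                if (id != null) {
                    _id = id;
                } else if (_parent != null) {
                    _id = Helpers.makeIdFromParent(_parent, getClass());
                } else {
                    // top-level component: fall back to a simple default (assumption)
                    _id = getClass().getSimpleName().toLowerCase();
                }
            }
        }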

    Read the article

  • Problem with Landscape and Portrait view in a TabBar Application

    - by JoshD
    I have an TabBar app that I would like to be in landscape and portrait. The issue is when I go to tab 2 make a selection from my table then select tab 1 and rotate the device, then select tab 2 again the content does not know that the device rotated and will not display my custom orientated content correctly. I am trying to write a priovate method that tells the view what orientation it is currently in. IN viewDidLoad I am assuming it is in portrait but in shouldAutoRotate I have it looking in the private method for the correct alignment of the content. Please Help!! Here is my code: #import "DetailViewController.h" #import "ScheduleTableViewController.h" #import "BrightcoveDemoAppDelegate.h" #import "Constants.h" @implementation DetailViewController @synthesize CurrentLevel, CurrentTitle, tableDataSource,logoName,showDescription,showDescriptionInfo,showTime, showTimeInfo, tableBG; - (void)layoutSubviews { showLogo.frame = CGRectMake(40, 20, 187, 101); showDescription.frame = CGRectMake(85, 140, 330, 65); showTime.frame = CGRectMake(130, 10, 149, 119); tableBG.frame = CGRectMake(0, 0, 480, 320); } /* // The designated initializer. Override if you create the controller programmatically and want to perform customization that is not appropriate for viewDidLoad. - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) { // Custom initialization } return self; } */ /* // Implement loadView to create a view hierarchy programmatically, without using a nib. - (void)loadView { } */ // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; self.navigationItem.title = CurrentTitle; [showDescription setEditable:NO]; //show the description showDescription.text = showDescriptionInfo; showTime.text = showTimeInfo; NSString *Path = [[NSBundle mainBundle] bundlePath]; NSString *ImagePath = [Path stringByAppendingPathComponent:logoName]; UIImage *tempImg = [[UIImage alloc] initWithContentsOfFile:ImagePath]; [showLogo setImage:tempImg]; [tempImg release]; [self masterView]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } - (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration { isLandscape = UIInterfaceOrientationIsLandscape(toInterfaceOrientation); if(isLandscape = YES){ [self layoutSubviews]; } } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [logoName release]; [showLogo release]; [showDescription release]; [showDescriptionInfo release]; [super dealloc]; } @end
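
    As a hedged sketch of the "private method that tells the view what orientation it is in" idea: instead of assuming portrait in viewDidLoad, the tab can re-read the current orientation every time it becomes visible, so rotations that happened while another tab was frontmost are picked up. (Note also that testing a BOOL needs == rather than =, as in the willAnimateRotation handler above.)

        - (void)viewWillAppear:(BOOL)animated {
            [super viewWillAppear:animated];
            // Rotations done on another tab arrive here without an animation callback,
            // so query the current state instead of relying on stored flags.
            UIInterfaceOrientation current = [UIApplication sharedApplication].statusBarOrientation;
            isLandscape = UIInterfaceOrientationIsLandscape(current);
            if (isLandscape == YES) {   // comparison, not assignment
                [self layoutSubviews];
            }
        }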

    Read the article

  • Looking for Reachability (2.0) Use Case Validation

    - by user350243
    There is so much info out there on using Apple's Reachability example, and so much is conflicting. I'm trying to find out of I'm using it (Reachability 2.0) correctly below. My App use case is this: If an internet connection is available through any means (wifi, LAN, Edge, 3G, etc.) a UIButton ("See More") is visible on various views. If no connection, the button is not visible. The "See More" part is NOT critical in any way to the app, it's just an add-on feature. "See More" could be visible or not anytime during the application lifecycle as connection is established or lost. Here's how I did it - Is this correct and/or is there a better way? Any help is Greatly Appreciated! lq // AppDelegate.h #import "RootViewController.h" @class Reachability; @interface AppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; UINavigationController *navigationController; RootViewController *rootViewController; Reachability* hostReach; // NOT USED: Reachability* internetReach; // NOT USED: Reachability* wifiReach; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) IBOutlet UINavigationController *navigationController; @property (nonatomic, retain) IBOutlet RootViewController *rootViewController; @end // AppDelegate.m #import "AppDelegate.h" #import "Reachability.h" #define kHostName @"www.somewebsite.com" @implementation AppDelegate @synthesize window; @synthesize navigationController; @synthesize rootViewController; - (void) updateInterfaceWithReachability: (Reachability*) curReach { if(curReach == hostReach) { NetworkStatus netStatus = [curReach currentReachabilityStatus]; BOOL connectionRequired = [curReach connectionRequired]; // Set a Reachability BOOL value flag in rootViewController // to be referenced when opening various views if ((netStatus != ReachableViaWiFi) && (netStatus != ReachableViaWWAN)) { rootViewController.bConnection = (BOOL *)0; } else { rootViewController.bConnection = (BOOL *)1; } } } - (void) reachabilityChanged: (NSNotification* )note { Reachability* curReach = [note object]; NSParameterAssert([curReach isKindOfClass: [Reachability class]]); [self updateInterfaceWithReachability: curReach]; } - (void)applicationDidFinishLaunching:(UIApplication *)application { // NOTE: #DEFINE in Reachability.h: // #define kReachabilityChangedNotification @"kNetworkReachabilityChangedNotification" [[NSNotificationCenter defaultCenter] addObserver: self selector: @selector(reachabilityChanged:) name: kReachabilityChangedNotification object: nil]; hostReach = [[Reachability reachabilityWithHostName: kHostName] retain]; [hostReach startNotifer]; [self updateInterfaceWithReachability: hostReach]; [window addSubview:[navigationController view]]; [window makeKeyAndVisible]; } - (void)dealloc { [navigationController release]; [rootViewController release]; [window release]; [super dealloc]; } @end
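
    One small sketch of a simplification, assuming bConnection is declared as a plain BOOL property on RootViewController: the status check can then assign the flag directly, without the (BOOL *) casts shown above.

        NetworkStatus netStatus = [curReach currentReachabilityStatus];
        // YES when either WiFi or WWAN (EDGE/3G) is reachable
        rootViewController.bConnection =
            (netStatus == ReachableViaWiFi) || (netStatus == ReachableViaWWAN);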

    Read the article

  • How can I have a Makefile automatically rebuild source files that include a modified header file?

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. Its from scratch and I'm learning about the process, so its not perfect, but I think its powerful enough at this point for my level of experience writing makefiles. AS = nasm CC = gcc LD = ld TARGET = core BUILD = build SOURCES = source INCLUDE = include ASM = assembly VPATH = $(SOURCES) CFLAGS = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \ -nostdinc -fno-builtin -I $(INCLUDE) ASFLAGS = -f elf #CFILES = core.c consoleio.c system.c CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c))) SFILES = assembly/start.asm SOBJS = $(SFILES:.asm=.o) COBJS = $(CFILES:.c=.o) OBJS = $(SOBJS) $(COBJS) build : $(TARGET).img $(TARGET).img : $(TARGET).elf c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img $(TARGET).elf : $(OBJS) $(LD) -T link.ld -o $@ $^ $(SOBJS) : $(SFILES) $(AS) $(ASFLAGS) $< -o $@ %.o: %.c @echo Compiling $<... $(CC) $(CFLAGS) -c -o $@ $< #Clean Script - Should clear out all .o files everywhere and all that. clean: -del *.img -del *.o -del assembly\*.o -del core.elf My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer. I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt) but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile? EDIT: I should clarify, I'm familiar with the concept of putting the individual targets in and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.
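
    A minimal sketch of the auto-dependency approach hinted at in the question, assuming GCC: -MMD makes the compiler write a .d fragment listing each object's headers as a side effect of compiling it, -MP adds phony targets so a deleted header doesn't break the build, and -include pulls the fragments back in on the next run. Only the .c files whose included headers changed are recompiled, and the existing link rule then relinks the target.

        DEPFLAGS = -MMD -MP
        DEPS     = $(COBJS:.o=.d)

        # (recipe lines must start with a literal tab character)
        %.o: %.c
            @echo Compiling $<...
            $(CC) $(CFLAGS) $(DEPFLAGS) -c -o $@ $<

        # The .d files don't exist on the first build, hence the leading '-'
        -include $(DEPS)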

    Read the article

  • What do these FindBugs messages mean?

    - by Hans Klock
    Not every description from from http://findbugs.sourceforge.net/bugDescriptions.html is clear to me. Sure, I can study the implementation but if somebody is more experienced then me, some explanation and examples would be great. Do you have some examples for UI_INHERITANCE_UNSAFE_GETRESOURCE when this is getting a problem? In BX_UNBOXED_AND_COERCED_FOR_TERNARY_OPERATOR I don't see the problem either. If one type is "bigger" then the other, for example int and float, then the result is float. If its Integer and Float its the wrapper Float too. That's what I expect. Does the GC_UNRELATED_TYPES really help to find errors? Isn't it the job of the compiler to check, if--taking the given example--Foo can't go into a Collection<String>. Does HE_SIGNATURE_DECLARES_HASHING_OF_UNHASHABLE_CLASS mean something like bla(Foo f){hashtable.put(f);}, where ´Foo´ is not hashable? Does FingBugs "see" the subclasses too? NP_GUARANTEED_DEREF_ON_EXCEPTION_PATH is stronger "wrong" then NP_ALWAYS_NULL_EXCEPTION? Why two error cases and with NP_NULL_ON_SOME_PATH_EXCEPTION even one more? Sounds very similar to me. What is an example of SIO_SUPERFLUOUS_INSTANCEOF? Something like foo(String s){if (s intenceof String) .... This does a null check too, but this is not the test here... NN_NAKED_NOTIFY. I my opinion the description is not clear. A change of the state is not necessary. If I use new Object() to wait and notify on I don't change the object state. Or is state the lock-state? I don't get it. SP_SPIN_ON_FIELD. Can this really happen that a compiler will move this outside from a loop? This doesn't make sense to me because from outside a Thread can always change the values. And if the variable is volatile the JVM can't cache the value. So what's the meaning? That is the difference between STCAL_STATIC_CALENDAR_INSTANCE and STCAL_INVOKE_ON_STATIC_CALENDAR_INSTANCE or STCAL_INVOKE_ON_STATIC_DATE_FORMAT_INSTANCE/STCAL_STATIC_SIMPLE_DATE_FORMAT_INSTANCE? Why is XXXX.class in WL_USING_GETCLASS_RATHER_THAN_CLASS_LITERAL better then getClass()? A getClass() in a superclass called from the subclass will always return the Class object from the subclass which is good I think. What exactly does EQ_UNUSUAL do? It should check that the argument is of the same type of the class itself but it does't? Did you ever had problems with breaks? Is there real value with SF_SWITCH_FALLTHROUGH? Sounds to strong for me. No idea what TQ_EXPLICIT_UNKNOWN_SOURCE_VALUE_REACHES_ALWAYS_SINK and TQ_EXPLICIT_UNKNOWN_SOURCE_VALUE_REACHES_NEVER_SINK could be.
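
    For the SIO_SUPERFLUOUS_INSTANCEOF guess above, the warning fires when the declared type already guarantees the test, so instanceof can only distinguish null from non-null; a hypothetical example:

        void printIfPresent(String s) {
            // s is statically a String, so this test is superfluous;
            // FindBugs suggests an explicit s != null check instead.
            if (s instanceof String) {
                System.out.println(s);
            }
        }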

    Read the article

  • Strange behavior of move with strings

    - by Umair Ahmed
    I am testing some enhanced string related functions with which I am trying to use move as a way to copy strings around for faster, more efficient use without delving into pointers. While testing a function for making a delimited string from a TStringList, I encountered a strange issue. The compiler referenced the bytes contained through the index when it was empty and when a string was added to it through move, index referenced the characters contained. Here is a small downsized barebone code sample:- unit UI; interface uses System.SysUtils, System.Types, System.UITypes, System.Rtti, System.Classes, System.Variants, FMX.Types, FMX.Controls, FMX.Forms, FMX.Dialogs, FMX.Layouts, FMX.Memo; type TForm1 = class(TForm) Results: TMemo; procedure FormCreate(Sender: TObject); end; var Form1: TForm1; implementation {$R *.fmx} function StringListToDelimitedString ( const AStringList: TStringList; const ADelimiter: String ): String; var Str : String; Temp1 : NativeInt; Temp2 : NativeInt; DelimiterSize : Byte; begin Result := ' '; Temp1 := 0; DelimiterSize := Length ( ADelimiter ) * 2; for Str in AStringList do Temp1 := Temp1 + Length ( Str ); SetLength ( Result, Temp1 ); Temp1 := 1; for Str in AStringList do begin Temp2 := Length ( Str ) * 2; // Here Index references bytes in Result Move ( Str [1], Result [Temp1], Temp2 ); // From here the index seems to address characters instead of bytes in Result Temp1 := Temp1 + Temp2; Move ( ADelimiter [1], Result [Temp1], DelimiterSize ); Temp1 := Temp1 + DelimiterSize; end; end; procedure TForm1.FormCreate(Sender: TObject); var StrList : TStringList; Str : String; begin // Test 1 : StringListToDelimitedString StrList := TStringList.Create; Str := ''; StrList.Add ( 'Hello1' ); StrList.Add ( 'Hello2' ); StrList.Add ( 'Hello3' ); StrList.Add ( 'Hello4' ); Str := StringListToDelimitedString ( StrList, ';' ); Results.Lines.Add ( Str ); StrList.Free; end; end. Please devise a solution and if possible, some explanation. Alternatives are welcome too.
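
    A hedged explanation and sketch, not a definitive fix: in Unicode Delphi, Move counts bytes while string indexing such as Result[Temp1] counts characters (2 bytes each), so advancing the index by byte counts points twice as far into Result as intended, and the destination also was not sized to hold the delimiters. Keeping the running position in characters and multiplying only the Move size by SizeOf(Char) keeps the two consistent:

        function StringListToDelimitedString(const AStringList: TStringList;
          const ADelimiter: string): string;
        var
          Str: string;
          CharPos, Total: Integer;
        begin
          Total := 0;
          for Str in AStringList do
            Inc(Total, Length(Str) + Length(ADelimiter));
          SetLength(Result, Total);          // SetLength counts characters

          CharPos := 1;                      // character index, not a byte offset
          for Str in AStringList do
          begin
            Move(Str[1], Result[CharPos], Length(Str) * SizeOf(Char));
            Inc(CharPos, Length(Str));
            Move(ADelimiter[1], Result[CharPos], Length(ADelimiter) * SizeOf(Char));
            Inc(CharPos, Length(ADelimiter));
          end;
        end;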

    Read the article

  • Java - Is Set.contains() broken on OpenJDK 6?

    - by Peter
    Hey, I've come across a really strange problem. I have written a simple Deck class which represents a standard 52 card deck of playing cards. The class has a method missingCards() which returns the set of all cards which have been drawn from the deck. If I try and compare two identical sets of missing cards using .equals() I'm told they are different, and if I check to see if a set contains an element that I know is there using .contains() I am returned false. Here is my test code: public void testMissingCards() { Deck deck = new Deck(true); Set<Card> drawnCards = new HashSet<Card>(); drawnCards.add(deck.draw()); drawnCards.add(deck.draw()); drawnCards.add(deck.draw()); Set<Card> missingCards = deck.missingCards(); System.out.println(drawnCards); System.out.println(missingCards); Card c1 = null; for (Card c : drawnCards){ c1 = c; } System.out.println("C1 is "+c1); for (Card c : missingCards){ System.out.println("C is "+c); System.out.println("Does c1.equal(c) "+c1.equals(c)); System.out.println("Does c.equal(c1) "+c.equals(c1)); } System.out.println("Is c1 in missingCards "+missingCards.contains(c1)); assertEquals("Deck confirm missing cards",drawnCards,missingCards); } (Edit: Just for clarity I added the two loops after I noticed the test failing. The first loop pulls out a card from drawnCards and then this card is checked against every card in missingCards - it always matches one, so that card must be contained in missingCards. However, missingCards.contains() fails) And here is an example of it's output: [5C, 2C, 2H] [2C, 5C, 2H] C1 is 2H C is 2C Does c1.equal(c) false Does c.equal(c1) false C is 5C Does c1.equal(c) false Does c.equal(c1) false C is 2H Does c1.equal(c) true Does c.equal(c1) true Is c1 in missingCards false I am completely sure that the implementation of .equals on my card class is correct and, as you can see from the output it does work! What is going on here? Cheers, Pete
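
    A hedged reading of that output: HashSet.contains() locates an element by hashCode() first and only then calls equals(), so if Card overrides equals() without a matching hashCode() (or hashes on a field that changed after the card was added), two equal cards can sit in different buckets and contains() reports false even though the loop finds an equal element. A minimal sketch of a consistent pair, assuming Card stores int rank and suit fields:

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Card)) return false;
            Card other = (Card) o;
            return rank == other.rank && suit == other.suit;
        }

        @Override
        public int hashCode() {
            // Must be built from the same fields as equals(), and those fields
            // must not change while the card is inside a HashSet.
            return 31 * rank + suit;
        }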

    Read the article

  • How do you prevent IDisposable from spreading to all your classes?

    - by GrahamS
    Start with these simple classes... Let's say I have a simple set of classes like this: class Bus { Driver busDriver = new Driver(); } class Driver { Shoe[] shoes = { new Shoe(), new Shoe() }; } class Shoe { Shoelace lace = new Shoelace(); } class Shoelace { bool tied = false; } A Bus has a Driver, the Driver has two Shoes, each Shoe has a Shoelace. All very silly. Add an IDisposable object to Shoelace Later I decide that some operation on the Shoelace could be multi-threaded, so I add an EventWaitHandle for the threads to communicate with. So Shoelace now looks like this: class Shoelace { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; // ... other stuff .. } Implement IDisposable on Shoelace Buit now FxCop will complain: "Implement IDisposable on 'Shoelace' because it creates members of the following IDisposable types: 'EventWaitHandle'." Okay, I implement IDisposable on Shoelace and my neat little class becomes this horrible mess: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; private bool disposed = false; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } ~Shoelace() { Dispose(false); } protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } } // No unmanaged resources to release otherwise they'd go here. } disposed = true; } } Or (as pointed out by commenters) since Shoelace itself has no unmanaged resources, I might use the simpler dispose implementation without needing the Dispose(bool) and Destructor: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; public void Dispose() { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } GC.SuppressFinalize(this); } } Watch in horror as IDisposable spreads Right that's that fixed. But now FxCop will complain that Shoe creates a Shoelace, so Shoe must be IDisposable too. And Driver creates Shoe so Driver must be IDisposable. and Bus creates Driver so Bus must be IDisposable and so on. Suddenly my small change to Shoelace is causing me a lot of work and my boss is wondering why I need to checkout Bus to make a change to Shoelace. The Question How do you prevent this spread of IDisposable, but still ensure that your unmanaged objects are properly disposed?

    Read the article

  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing and entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. So to start I have an Entity class which stores a vector of Components: class Entity { private: std::vector<std::unique_ptr<Component> > components; public: Entity() { }; void AddComponent(Component* component) { this -> components.push_back(std::unique_ptr<Component>(component)); } ~Entity(); }; Which if I am not mistaken means that when the destructor is called (even the default, compiler created one), the destructor for the Entity, will call ~components, which will call ~std::unique_ptr for each element in the vector, and lead to the destruction of each Component, which is what I want. The component class has virtual methods, but the important part is its constructor: Component::Component(Entity parent) { parent.addComponent(this) // I am not sure if this would work like I expect // Other things here } As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of: std::shared_ptr<Entity> createEntity() { std::shared_ptr<Entity> entityPtr(new Entity()); new Component(*parent); // Initialize more, and other types of Components return entityPtr; } Now, I believe that this setup will leave the ownership of the Component in the hands of its Parent Entity, which is what I want. First a small question, do I need to pass the entity into the Component constructor by reference or pointer or something? If I understand C++, it would pass by value, which means it gets copied, and the copied entity would die at the end of the constructor. The second, and main question is that code based on this sample will not compile. The complete error is too large to print here, however I think I know somewhat of what is going on. The compiler's error says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation: inline Component::~Component() { }; at the end of the header. However since the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference I am using gcc 4.4.6.
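
    On the smaller question first: Entity parent is indeed pass-by-value, so the constructor registers this with a temporary copy that dies immediately; it needs Entity& (or Entity*). For the incomplete-type error, a common pattern (sketched here, not a drop-in fix) is to declare Entity's destructor in the header and define it in the .cpp, where Component is a complete type, so that unique_ptr<Component>'s deleter is instantiated somewhere it can see the full class:

        // Entity.h
        #include <memory>
        #include <vector>
        class Component;                       // incomplete type is fine here

        class Entity {
        public:
            Entity();
            ~Entity();                         // declared only; defined in the .cpp
            void AddComponent(Component* component);
        private:
            std::vector<std::unique_ptr<Component> > components;
        };

        // Entity.cpp
        #include "Entity.h"
        #include "Component.h"                 // Component is complete from here on

        Entity::Entity() {}
        Entity::~Entity() {}                   // unique_ptr's deleter instantiated here
        void Entity::AddComponent(Component* component) {
            components.push_back(std::unique_ptr<Component>(component));
        }

        // Component.cpp (constructor takes a reference, not a copy)
        Component::Component(Entity& parent) {
            parent.AddComponent(this);
        }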

    Read the article

  • Switch case assembly level code

    - by puffadder
    Hi All, I am programming C on cygwin windows. After having done a bit of C programming and getting comfortable with the language, I wanted to look under the hood and see what the compiler is doing for the code that I write. So I wrote down a code block containing switch case statements and converted them into assembly using: gcc -S foo.c Here is the C source: switch(i) { case 1: { printf("Case 1\n"); break; } case 2: { printf("Case 2\n"); break; } case 3: { printf("Case 3\n"); break; } case 4: { printf("Case 4\n"); break; } case 5: { printf("Case 5\n"); break; } case 6: { printf("Case 6\n"); break; } case 7: { printf("Case 7\n"); break; } case 8: { printf("Case 8\n"); break; } case 9: { printf("Case 9\n"); break; } case 10: { printf("Case 10\n"); break; } default: { printf("Nothing\n"); break; } } Now the resultant assembly for the same is: movl $5, -4(%ebp) cmpl $10, -4(%ebp) ja L13 movl -4(%ebp), %eax sall $2, %eax movl L14(%eax), %eax jmp *%eax .section .rdata,"dr" .align 4 L14: .long L13 .long L3 .long L4 .long L5 .long L6 .long L7 .long L8 .long L9 .long L10 .long L11 .long L12 .text L3: movl $LC0, (%esp) call _printf jmp L2 L4: movl $LC1, (%esp) call _printf jmp L2 L5: movl $LC2, (%esp) call _printf jmp L2 L6: movl $LC3, (%esp) call _printf jmp L2 L7: movl $LC4, (%esp) call _printf jmp L2 L8: movl $LC5, (%esp) call _printf jmp L2 L9: movl $LC6, (%esp) call _printf jmp L2 L10: movl $LC7, (%esp) call _printf jmp L2 L11: movl $LC8, (%esp) call _printf jmp L2 L12: movl $LC9, (%esp) call _printf jmp L2 L13: movl $LC10, (%esp) call _printf L2: Now, in the assembly, the code is first checking the last case (i.e. case 10) first. This is very strange. And then it is copying 'i' into 'eax' and doing things that are beyond me. I have heard that the compiler implements some jump table for switch..case. Is it what this code is doing? Or what is it doing and why? Because in case of less number of cases, the code is pretty similar to that generated for if...else ladder, but when number of cases increases, this unusual-looking implementation is seen. Thanks in advance.
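
    What the generated code is doing, rendered back into rough C (a sketch of the compiler's strategy, not literal output): the cmpl $10 / ja pair is not "checking case 10 first" but an unsigned bounds check that sends anything outside 1..10 to the default label (negative values wrap to huge unsigned numbers, so one comparison covers both ends), and sall $2 / jmp *%eax scales i by 4, loads the corresponding label address from the table emitted at the L14 label, and jumps through it. GCC's computed-goto extension makes the idea explicit:

        #include <stdio.h>

        /* Conceptual equivalent of the emitted code (GCC extension: &&label). */
        void dispatch(unsigned i)
        {
            static const void *jump_table[] = {   /* mirrors the .long list in the listing */
                &&Ldef, &&L1, &&L2, &&L3, &&L4, &&L5,
                &&L6,  &&L7, &&L8, &&L9, &&L10
            };
            if (i > 10)              /* cmpl $10 ; ja <default>  (unsigned check) */
                goto Ldef;
            goto *jump_table[i];     /* sall $2  ; jmp *%eax                      */
        L1:  printf("Case 1\n");  return;
        L2:  printf("Case 2\n");  return;
        L3:  printf("Case 3\n");  return;
        L4:  printf("Case 4\n");  return;
        L5:  printf("Case 5\n");  return;
        L6:  printf("Case 6\n");  return;
        L7:  printf("Case 7\n");  return;
        L8:  printf("Case 8\n");  return;
        L9:  printf("Case 9\n");  return;
        L10: printf("Case 10\n"); return;
        Ldef: printf("Nothing\n");
        }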

    Read the article

  • JavaFX - question regarding binding button's disabled state

    - by jamiebarrow
    I'm trying to create a dummy application that maintains a list of tasks. For now, all I'm trying to do is add to the list. I enter a task name in a text box, click on the add task button, and expect the list to be updated with the new item and the task name input to be cleared. I only want to be able to add tasks if the task name is not empty. The below code is my implementation, but I have a question regarding the binding. I'm binding the textbox's text variable to a string in my view model, and the button's disable variable to a boolean in my view model. I have a trigger to update the disabled state when the task name changes. When the binding of the task name happens the boolean is updated accordingly, but the button still appears disabled. But then when I mouse over the button, it becomes enabled. I believe this is due to JavaFX 1.3's binding being lazy - only updates the bound variable when it is read. Also, when I've added the task, I clear the task name in the model, but the textbox's text doesn't change - even though I'm using bind with inverse. Is there a way to make the textbox's text and the button's disabled state update automatically via the binding as I was expecting? Thanks, James AddTaskViewModel.fx: package jamiebarrow; import java.lang.System; public class AddTaskViewModel { function logChange(prop:String,oldValue,newValue):Void { println("{System.currentTimeMillis()} : {prop} [{oldValue}] to [{newValue}] "); } public var newTaskName: String on replace old { logChange("newTaskName",old,newTaskName); isAddTaskDisabled = (newTaskName == null or newTaskName.trim().length() == 0); }; public var isAddTaskDisabled: Boolean on replace old { logChange("isAddTaskDisabled",old,isAddTaskDisabled); }; public var taskItems = [] on replace old { logChange("taskItems",old,taskItems); }; public function addTask() { insert newTaskName into taskItems; newTaskName = ""; } } Main.fx: package jamiebarrow; import javafx.scene.control.Button; import javafx.scene.control.TextBox; import javafx.scene.control.ListView; import javafx.scene.Scene; import javafx.scene.layout.VBox; import javafx.stage.Stage; import javafx.scene.layout.HBox; def viewModel = AddTaskViewModel{}; var txtName: TextBox = TextBox { text: bind viewModel.newTaskName with inverse onKeyTyped: onKeyTyped }; function onKeyTyped(event): Void { txtName.commit(); // ensures model is updated cmdAddTask.disable = viewModel.isAddTaskDisabled;// the binding only occurs lazily, so this is needed } var cmdAddTask = Button { text: "Add" disable: bind viewModel.isAddTaskDisabled with inverse action: onAddTask }; function onAddTask(): Void { viewModel.addTask(); } var lstTasks = ListView { items: bind viewModel.taskItems with inverse }; Stage { scene: Scene { content: [ VBox { content: [ HBox { content: [ txtName, cmdAddTask ] }, lstTasks ] } ] } }

    Read the article

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that is statically linking a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc), I'm including the crtl unit Allen Bauer mentioned here. unit FooWrapper; interface implementation uses Crtl; // Part of the Delphi RTL {$L FooLib.obj} // Compiled with "bcc32 -q -c foolib.c" procedure Foo; cdecl; external; end. If I compile that unit in a Delphi project (.dproj) everthing works correctly. If I compile that unit in a C++Builder project (.cbproj) it fails with the error: [ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ' And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found. So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions. Edit For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added a "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors: [ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ [ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ [ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ [ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

    Read the article

  • Code to show UIPickerview under clicked UITextField

    - by Chris F
    I thought I'd share a code snippet where I show a UIPickerView when you click a UITextField. The code uses a UIPickerView, but there's no reason to use a different view controller, like a UITableViewController that uses a table instead of a picker. Just create a single-view project with a nib, and add a UITextField to the view and make you connections in IB. // .h file #import @interface MyPickerViewViewController : UIViewController <UIPickerViewDelegate, UIPickerViewDataSource, UITextFieldDelegate> - (IBAction)dismissPickerView:(id)sender; @end // .m file #import "MyPickerViewViewController.h" @interface MyPickerViewViewController () { UIPickerView *_pv; NSArray *_array; IBOutlet __weak UITextField *_tf; BOOL _pickerViewShown; } @end @implementation MyPickerViewViewController - (void)viewDidLoad { [super viewDidLoad]; _pickerViewShown = NO; _array = [NSArray arrayWithObjects:@"One", @"Two", @"Three", @"Four", nil]; _pv = [[UIPickerView alloc] initWithFrame:CGRectZero]; _pv.showsSelectionIndicator = YES; _pv.dataSource = self; _pv.delegate = self; _tf.delegate = self; _tf.inputView = _pv; } - (IBAction)dismissPickerView:(id)sender { [_pv removeFromSuperview]; [_tf.inputView removeFromSuperview]; [_tf resignFirstResponder]; _pickerViewShown = NO; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } - (BOOL)textFieldShouldBeginEditing:(UITextField *)textField { if (!_pickerViewShown) { [self setRectForPickerViewRelativeToTextField:textField]; [self.view addSubview:_tf.inputView]; _pickerViewShown = YES; } else { [self dismissPickerView:self]; } return NO; } - (void)setRectForPickerViewRelativeToTextField:(UITextField*)textField { CGFloat xPos = textField.frame.origin.x; CGFloat yPos = textField.frame.origin.y; CGFloat width = textField.frame.size.width; CGFloat height = textField.frame.size.height; CGFloat pvHeight = _pv.frame.size.height; CGRect pvRect = CGRectMake(xPos, yPos+height, width, pvHeight); _pv.frame = pvRect; } - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component { return [_array objectAtIndex:row]; } - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView { return 1; } - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component { return _array.count; } - (void) pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { _tf.text = [_array objectAtIndex:row]; [self dismissPickerView:self]; } @end

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the rented computers (ten 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from a PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable-length arrays in one HDF5 file. The HDF5 implementation is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to spread the writing across them to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign-key referring tables. I am not using joins as they do not scale, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, one into each table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key & indexed)
        select q_t from x.qt where q_id=2     (non-primary key but indexed)
        (and similarly five more queries)

    Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics:

        Total number of records:             143,700,000
        Records per file:                    143,700,000 / 20 = 7,185,000
        Total rows written per file:         7,185,000 * 5 = 35,925,000

    Current PostgreSQL configuration: my current machine has 8 GB RAM and a 2nd-generation i7 processor. I changed the following in the PostgreSQL configuration file: shared_buffers = 2 GB, effective_cache_size = 4 GB.

    Note on current performance: after running for about ten hours, the total number of records written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how to improve this (a sketch of one batching idea follows below).

    Questions:
    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores and about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it speed up the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks,
    Sree Aurovindh V
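
    One concrete thing to try before renting more machines, sketched with assumed file, table, and column names: stream rows out of PostgreSQL with a named (server-side) cursor in large batches, and append whole batches to the PyTables table, so neither side pays a per-row round trip or per-row write.

        import psycopg2
        import tables  # PyTables

        conn = psycopg2.connect("dbname=x user=...")      # connection details assumed
        cur = conn.cursor(name="train_stream")            # named => server-side cursor
        cur.itersize = 10000
        cur.execute("SELECT tr_id, q_id FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("out.h5", mode="w")
        out = h5.create_table("/", "train",
                              {"tr_id": tables.Int64Col(), "q_id": tables.Int64Col()})

        while True:
            rows = cur.fetchmany(10000)                   # one round trip per 10k rows
            if not rows:
                break
            out.append(rows)                              # batched HDF5 write
        out.flush()

        h5.close()
        conn.close()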

    Read the article

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that has grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be.

    For example, the site has a large social component and a public sales interface. At the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time).

    The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications?

    Some strategies I'm evaluating:

    1. Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it used for scaling out the read capabilities of the same application, rather than for multiple different applications). I'm also worried about potential latency issues if the slave is not on the same network.

    2. Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail.

    3. Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to keep that on a completely separate system and push the data to the public one, while the user/group admin functionality runs on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two options, especially the re-architecting.

    I'm sure the answers will be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment which comprehends 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows: Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up). Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg In order to achieve fail-over and load-balancing capabilities, I'm currently trying to deploy this JPD process into this clustered WLI environment. Although, I'm having some issues with this, as I cannot get it to work properly, even if it still works. Here is a screenshot of the "WLI Process Instance Monitor" (with AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg According to this screen the process seems to be running, as it shows in this instance monitor screen. However, I don't see any output coming out neither at AdminServer console or mn1 console. In single-server domain it was visible output from JPD process "timeout" callback method, wich implementation is shown below: @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}") public void subscription(java.lang.String x0) { String toReturn=""; try { Context myCtx = new InitialContext(); MBeanHome mbeanHome = (MBeanHome)myCtx.lookup("weblogic.management.home.localhome"); toReturn=mbeanHome.getMBeanServer().getServerName(); System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn); } catch (Exception e) { System.out.println("Exception!"); e.printStackTrace(); } } (...) @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout") public void myT_onTimeout(long time, java.io.Serializable data) { // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #// // input transform System.out.println("**** published at **** " + System.currentTimeMillis()); publishControl.publish("aaaa"); // parameter assignment // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #// } and here is the output visible at "AdminServer" console in single-server domain testing: **** published at **** 1273238090713 **** executed at **** 1273238132123 by: AdminServer **** published at **** 1273238152462 **** executed at **** 1273238152562 by: AdminServer (...) What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.

    Read the article

  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    In testing the performance of: package com.srichards.sha; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import java.io.IOException; import java.io.InputStream; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.zip.ZipEntry; import java.util.zip.ZipFile; import com.srichards.sha.R; public class SHAHashActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = new TextView(this); String shaVal = this.getString(R.string.sha); long systimeBefore = System.currentTimeMillis(); String result = shaCheck(shaVal); long systimeResult = System.currentTimeMillis() - systimeBefore; tv.setText("\nRunTime: " + systimeResult + "\nHas been modified? | Hash Value: " + result); setContentView(tv); } public String shaCheck(String shaVal){ try{ String resultant = "null"; MessageDigest digest = MessageDigest.getInstance("SHA1"); ZipFile zf = null; try { zf = new ZipFile("/data/app/com.blah.android-1.apk"); // /data/app/com.blah.android-2.apk } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } ZipEntry ze = zf.getEntry("classes.dex"); InputStream file = zf.getInputStream(ze); byte[] dataBytes = new byte[32768]; //65536 32768 int nread = 0; while ((nread = file.read(dataBytes)) != -1) { digest.update(dataBytes, 0, nread); } byte [] rbytes = digest.digest(); StringBuffer sb = new StringBuffer(""); for (int i = 0; i< rbytes.length; i++) { sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1)); } if (shaVal.equals(sb.toString())) { resultant = ("\nFalse : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } else { resultant = ("\nTrue : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } return resultant; } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } return null; } } On a 2.2 Device I get average runtime of ~350ms, while on newer devices I get runtimes of 26-50ms which is substantially lower. I'm keeping in mind these devices are newer and have better hardware but am also wondering if the platform and the implementation affect performance much and if there is anything that could reduce runtimes on 2.2 devices. Note, the classes.dex of the .apk being accessed is roughly 4MB. Thanks!

    Read the article

  • Public class: The best way to store and access NSMutableDictionary?

    - by meridimus
    I have a class to help me store persistent data across sessions. The problem is I want to store a running sample of the property list or "plist" file in an NSMutableArray throughout the instance of the Persistance class so I can read and edit the values and write them back when I need to. The problem is, as the methods are publicly defined I cannot seem to access the declared NSMutableDictionary without errors. The particular error I get on compilation is: warning: 'Persistence' may not respond to '+saveData' So it kind of renders my entire process unusable until I work out this problem. Here is my full persistence class (please note, it's unfinished so it's just to show this problem): Persistence.h #import <UIKit/UIKit.h> #define kSaveFilename @"saveData.plist" @interface Persistence : NSObject { NSMutableDictionary *saveData; } @property (nonatomic, retain) NSMutableDictionary *saveData; + (NSString *)dataFilePath; + (NSDictionary *)getSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level; + (void)writeSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level withData:(NSDictionary *)saveData; + (NSString *)makeCampaign:(NSUInteger)campaign andLevelKey:(NSUInteger)level; @end Persistence.m #import "Persistence.h" @implementation Persistence @synthesize saveData; + (NSString *)dataFilePath { NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; return [documentsDirectory stringByAppendingPathComponent:kSaveFilename]; } + (NSDictionary *)getSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level { NSString *filePath = [self dataFilePath]; if([[NSFileManager defaultManager] fileExistsAtPath:filePath]) { NSLog(@"File found"); [[self saveData] setDictionary:[[NSDictionary alloc] initWithContentsOfFile:filePath]]; // This is where the warning "warning: 'Persistence' may not respond to '+saveData'" occurs NSString *campaignAndLevelKey = [self makeCampaign:campaign andLevelKey:level]; NSDictionary *campaignAndLevelData = [[self saveData] objectForKey:campaignAndLevelKey]; return campaignAndLevelData; } else { return nil; } } + (void)writeSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level withData:(NSDictionary *)saveData { NSString *campaignAndLevelKey = [self makeCampaign:campaign andLevelKey:level]; NSDictionary *saveDataWithKey = [[NSDictionary alloc] initWithObjectsAndKeys:saveData, campaignAndLevelKey, nil]; //[campaignAndLevelKey release]; [saveDataWithKey writeToFile:[self dataFilePath] atomically:YES]; } + (NSString *)makeCampaign:(NSUInteger)campaign andLevelKey:(NSUInteger)level { return [[NSString stringWithFormat:@"%d - ", campaign+1] stringByAppendingString:[NSString stringWithFormat:@"%d", level+1]]; } @end I call this class like any other, by including the header file in my desired location: @import "Persistence.h" Then I call the function itself like so: NSDictionary *tempSaveData = [[NSDictionary alloc] [Persistence getSaveWithCampaign:currentCampaign andLevel:currentLevel]];
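
    A hedged reading of the warning: these are class methods (declared with +), so inside them self is the Persistence class object, which has no saveData accessor — the property and its instance variable only exist on instances. One common sketch keeps the class methods as the public face but routes them through a shared instance (alternatively, everything could become instance methods):

        // Persistence.m (sketch, manual retain/release era)
        static Persistence *sharedInstance = nil;

        + (Persistence *)shared {
            if (sharedInstance == nil) {
                sharedInstance = [[Persistence alloc] init];
                sharedInstance.saveData = [NSMutableDictionary dictionary];
            }
            return sharedInstance;
        }

        + (NSDictionary *)getSaveWithCampaign:(NSUInteger)campaign andLevel:(NSUInteger)level {
            Persistence *p = [Persistence shared];
            NSString *filePath = [self dataFilePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) {
                [p.saveData setDictionary:
                    [NSDictionary dictionaryWithContentsOfFile:filePath]];
                NSString *key = [self makeCampaign:campaign andLevelKey:level];
                return [p.saveData objectForKey:key];
            }
            return nil;
        }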

    Read the article

  • Objective-C design advice for using different data sources and swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work is depending on other people to complete their tasks, and a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper api calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In objective-c a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What is the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All command use the live API, rather than selecting them one by one, however that would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that who ever uses the factory to get an object, is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration i.e: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.

    Read the article

  • Bad_alloc exception when using new for a struct c++

    - by bsg
    Hi, I am writing a query processor which allocates large amounts of memory and tries to find matching documents. Whenever I find a match, I create a structure to hold two variables describing the document and add it to a priority queue. Since there is no way of knowing how many times I will do this, I tried creating my structs dynamically using new. When I pop a struct off the priority queue, the queue (STL priority queue implementation) is supposed to call the object's destructor. My struct code has no destructor, so I assume a default destructor is called in that case. However, the very first time that I try to create a DOC struct, I get the following error: Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f5dc.. I don't understand what's happening - have I used up so much memory that the heap is full? It doesn't seem likely. And it's not as if I've even used that pointer before. So: first of all, what am I doing that's causing the error, and secondly, will the following code work more than once? Do I need to have a separate pointer for each struct created, or can I re-use the same temporary pointer and assume that the queue will keep a pointer to each struct? Here is my code: struct DOC{ int docid; double rank; public: DOC() { docid = 0; rank = 0.0; } DOC(int num, double ranking) { docid = num; rank = ranking; } bool operator>( const DOC & d ) const { return rank > d.rank; } bool operator<( const DOC & d ) const { return rank < d.rank; } }; //a lot of processing goes on here; when a matching document is found, I do this: rank = calculateRanking(table, num); //if the heap is not full, create a DOC struct with the docid and rank and add it to the heap if(q.size() < 20) { doc = new DOC(num, rank); q.push(*doc); doc = NULL; } //if the heap is full, but the new rank is greater than the //smallest element in the min heap, remove the current smallest element //and add the new one to the heap else if(rank > q.top().rank) { q.pop(); cout << "pushing doc on to queue" << endl; doc = new DOC(num, rank); q.push(*doc); } Thank you very much, bsg.
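
    A side point that may simplify the picture, sketched against the code shown: std::priority_queue stores its own copy of each element, so q.push(*doc) copies the struct and the new'd original is never freed — the queue's destructor only destroys its copies, never memory you allocated yourself. Pushing by value removes both the leak and the ownership question; the bad_alloc itself usually points at heap corruption or a very large allocation elsewhere rather than at this small struct.

        double rank = calculateRanking(table, num);

        if (q.size() < 20) {
            q.push(DOC(num, rank));          // the queue keeps its own copy; nothing to delete
        } else if (rank > q.top().rank) {
            q.pop();
            q.push(DOC(num, rank));
        }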

    Read the article

  • C++ Virtual Constructor, without clone()

    - by Julien L.
    I want to perform "deep copies" of an STL container of pointers to polymorphic classes. I know about the Prototype design pattern, implemented by means of the Virtual Ctor Idiom, as explained in the C++ FAQ Lite, Item 20.8. It is simple and straightforward: struct ABC // Abstract Base Class { virtual ~ABC() {} virtual ABC * clone() = 0; }; struct D1 : public ABC { virtual D1 * clone() { return new D1( *this ); } // Covariant Return Type }; A deep copy is then: for( i = 0; i < oldVector.size(); ++i ) newVector.push_back( oldVector[i]->clone() ); Drawbacks As Andrei Alexandrescu states it: The clone() implementation must follow the same pattern in all derived classes; in spite of its repetitive structure, there is no reasonable way to automate defining the clone() member function (beyond macros, that is). Moreover, clients of ABC can possibly do something bad. (I mean, nothing prevents clients to do something bad, so, it will happen.) Better design? My question is: is there another way to make an abstract base class clonable without requiring derived classes to write clone-related code? (Helper class? Templates?) Following is my context. Hopefully, it will help understanding my question. I am designing a class hierarchy to perform operations on a class Image: struct ImgOp { virtual ~ImgOp() {} bool run( Image & ) = 0; }; Image operations are user-defined: clients of the class hierarchy will implement their own classes derived from ImgOp: struct CheckImageSize : public ImgOp { std::size_t w, h; bool run( Image &i ) { return w==i.width() && h==i.height(); } }; struct CheckImageResolution; struct RotateImage; ... Multiple operations can be performed sequentially on an image: bool do_operations( std::vector< ImgOp* > v, Image &i ) { std::for_each( v.begin(), v.end(), /* bind2nd(mem_fun(&ImgOp::run), i ...) don't remember syntax */ ); } int main( ... ) { std::vector< ImgOp* > v; v.push_back( new CheckImageSize ); v.push_back( new CheckImageResolution ); v.push_back( new RotateImage ); Image i; do_operations( v, i ); } If there are multiple images, the set can be split and shared over several threads. To ensure "thread-safety", each thread must have its own copy of all operation objects contained in v -- v becomes a prototype to be deep copied in each thread.
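
    One way to stop repeating clone() in every operation, sketched with a hypothetical CloneableImgOp helper: a CRTP middle layer implements clone() once, and each user-defined operation only names itself as the template argument. The covariant return type is lost through the template, but callers holding an ImgOp* rarely need it:

        struct ImgOp {
            virtual ~ImgOp() {}
            virtual bool run(Image&) = 0;
            virtual ImgOp* clone() const = 0;
        };

        // Derive concrete operations from this instead of ImgOp directly.
        template <class Derived>
        struct CloneableImgOp : ImgOp {
            virtual ImgOp* clone() const {
                return new Derived(static_cast<const Derived&>(*this));
            }
        };

        struct CheckImageSize : CloneableImgOp<CheckImageSize> {
            std::size_t w, h;
            virtual bool run(Image& i) { return w == i.width() && h == i.height(); }
        };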

    Read the article

  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys but it would be nice if I could use a couple more types hopefully up to anything that is hashable (ie. same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme. Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match: >>> import pickle >>> import cPickle >>> def dumps(x): ... print repr(pickle.dumps(x)) ... print repr(cPickle.dumps(x)) ... >>> dumps(1) 'I1\n.' 'I1\n.' >>> dumps('hello') "S'hello'\np0\n." "S'hello'\np1\n." >>> dumps((1, 2, 'hello')) "(I1\nI2\nS'hello'\np0\ntp1\n." "(I1\nI2\nS'hello'\np1\ntp2\n." Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows): def is_reprable_key(key): return type(key) in (int, str, unicode) or (type(key) == tuple and all( is_reprable_key(x) for x in key)) The question for this method is if repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64 jumping between 32 and 64 bit platforms. Are there any other conditions (ie. do strings serialize differently under different conditions)? (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?
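
    A small sketch of the repr / ast.literal_eval route weighed above, with the round trip made explicit so any key whose repr is not stable or not re-readable is rejected up front; this sidesteps the question of pickle's cross-implementation stability, at the cost of supporting only a few built-in types (Python 2 names are assumed, matching the original):

        import ast

        ALLOWED = (int, long, str, unicode)

        def serialize_key(key):
            """Return a text form of key, or raise ValueError if it won't round-trip."""
            ok = type(key) in ALLOWED or (
                type(key) is tuple and all(type(x) in ALLOWED for x in key))
            if not ok:
                raise ValueError("unsupported key type: %r" % (key,))
            text = repr(key)
            if ast.literal_eval(text) != key:   # defensive round-trip check
                raise ValueError("key does not round-trip: %r" % (key,))
            return text

        def deserialize_key(text):
            return ast.literal_eval(text)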

    Read the article

  • Virtual member call in a constructor when assigning value to property

    - by comecme
    I have an abstract class and a derived class. The abstract class defines an abstract property named Message. In the derived class, the property is implemented by overriding the abstract property. The constructor of the derived class takes a string argument and assigns it to its Message property. In ReSharper, this assignment leads to the warning "Virtual member call in constructor".

    The AbstractClass has this definition:

        public abstract class AbstractClass
        {
            public abstract string Message { get; set; }
            protected AbstractClass() { }
            public abstract void PrintMessage();
        }

    And the DerivedClass is as follows:

        using System;

        public class DerivedClass : AbstractClass
        {
            private string _message;

            public override string Message
            {
                get { return _message; }
                set { _message = value; }
            }

            public DerivedClass(string message)
            {
                Message = message; // Warning: Virtual member call in a constructor
            }

            public DerivedClass() : this("Default DerivedClass message") {}

            public override void PrintMessage()
            {
                Console.WriteLine("DerivedClass PrintMessage(): " + Message);
            }
        }

    I did find some other questions about this warning, but in those situations there is an actual call to a method. For instance, in this question, the answer by Matt Howels contains some sample code. I'll repeat it here for easy reference.

        class Parent
        {
            public Parent() { DoSomething(); }
            protected virtual void DoSomething() {}
        }

        class Child : Parent
        {
            private string foo;
            public Child() { foo = "HELLO"; }
            protected override void DoSomething() { Console.WriteLine(foo.ToLower()); }
        }

    Matt doesn't say where the warning would appear, but I'm assuming it will be on the call to DoSomething in the Parent constructor. In that example, I understand what is meant by a virtual member being called: the member call occurs in the base class, in which only a virtual method exists. In my situation, however, I don't see why assigning a value to Message would be calling a virtual member. Both the call to and the implementation of the Message property are defined in the derived class. Although I can get rid of the warning by making my DerivedClass sealed, I would like to understand why this situation results in the warning.
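
    A sketch of why the warning still applies here (LoggingDerivedClass is a hypothetical further-derived class, not from the question): Message remains virtual even though DerivedClass overrides it, so a subclass of DerivedClass can override it again, and that override then runs from DerivedClass's constructor before the subclass's own constructor body has initialized its fields.

        public class LoggingDerivedClass : DerivedClass
        {
            private readonly System.Text.StringBuilder _log;

            public LoggingDerivedClass(string message) : base(message)
            {
                _log = new System.Text.StringBuilder();
            }

            public override string Message
            {
                get { return base.Message; }
                set
                {
                    // Reached from DerivedClass's constructor while _log is still
                    // null, so this line throws a NullReferenceException.
                    _log.Append(value);
                    base.Message = value;
                }
            }
        }

    That is also why sealing DerivedClass silences the warning: once no further override can exist, the assignment can no longer dispatch into a less-constructed object.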

    Read the article

  • WCF Service with callbacks coming from background thread?

    - by Mark Struzinski
    Here is my situation. I have written a WCF service which calls into one of our vendor's code bases to perform operations, such as Login, Logout, etc. A requirement of this operation is that we have a background thread to receive events as a result of that action. For example, the Login action is sent on the main thread. Then, several events are received back from the vendor service as a result of the login. There can be 1, 2, or several events received. The background thread, which runs on a timer, receives these events and fires an event in the WCF service to notify that a new event has arrived.

    I have implemented the WCF service in Duplex mode, and planned to use callbacks to notify the UI that events have arrived. Here is my question: How do I send new events from the background thread to the thread which is executing the service? Right now, when I call OperationContext.Current.GetCallbackChannel<IMyCallback>(), the OperationContext is null. Is there a standard pattern to get around this? I am using PerSession as my SessionMode on the ServiceContract.

    UPDATE: I thought I'd make my exact scenario clearer by demonstrating how I'm receiving events from the vendor code. My library receives each event, determines what the event is, and fires off an event for that particular occurrence. I have another project which is a class library specifically for connecting to the vendor service. I'll post the entire implementation of the service to give a clearer picture:

        [ServiceBehavior( InstanceContextMode = InstanceContextMode.PerSession )]
        public class VendorServer : IVendorServer
        {
            private IVendorService _vendorService; // This is the reference to my class library

            public VendorServer()
            {
                _vendorServer = new VendorServer();
                _vendorServer.AgentManager.AgentLoggedIn += AgentManager_AgentLoggedIn; // This is the eventhandler for the event which arrives from a background thread
            }

            public void Login(string userName, string password, string stationId)
            {
                _vendorService.Login(userName, password, stationId); // This is a direct call from the main thread to the vendor service to log in
            }

            private void AgentManager_AgentLoggedIn(object sender, EventArgs e)
            {
                var agentEvent = new AgentEvent
                {
                    AgentEventType = AgentEventType.Login,
                    EventArgs = e
                };
            }
        }

    The AgentEvent object contains the callback as one of its properties, and I was thinking I'd perform the callback like this:

        agentEvent.Callback = OperationContext.Current.GetCallbackChannel<ICallback>();

    How would I pass the OperationContext.Current instance from the main thread into the background thread?
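
    A sketch of the usual workaround (not from the original post; the OnAgentEvent operation name and the VendorService concrete type are assumptions): capture the callback channel while the code is still running inside a service operation, where OperationContext.Current is valid, store it in a field, and let the background event handler use the stored proxy instead of OperationContext.

        using System;
        using System.ServiceModel;

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
        public class VendorServer : IVendorServer
        {
            private readonly IVendorService _vendorService = new VendorService(); // hypothetical concrete type
            private ICallback _callback; // captured on the WCF operation thread

            public VendorServer()
            {
                _vendorService.AgentManager.AgentLoggedIn += AgentManager_AgentLoggedIn;
            }

            public void Login(string userName, string password, string stationId)
            {
                // We are on the thread executing the WCF operation, so the
                // OperationContext is available; keep the callback channel for later.
                _callback = OperationContext.Current.GetCallbackChannel<ICallback>();
                _vendorService.Login(userName, password, stationId);
            }

            private void AgentManager_AgentLoggedIn(object sender, EventArgs e)
            {
                // Background/timer thread: no OperationContext here, so use the
                // proxy that was captured during Login.
                var callback = _callback;
                if (callback != null)
                    callback.OnAgentEvent(AgentEventType.Login, e); // hypothetical callback operation
            }
        }

    The callback proxy itself can be invoked from any thread; it is only OperationContext.Current that is bound to the operation's thread, which is why it has to be read inside Login rather than inside the event handler.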

    Read the article

  • Dijkstra's Algorithm explanation java

    - by alchemey89
    Hi, I have found an implementation of Dijkstra's algorithm on the internet and was wondering if someone could help me understand how the code works. Many thanks.

        private int nr_points=0;
        private int[][] Cost;
        private int[] mask;

        private void dijkstraTSP()
        {
            if(nr_points==0) return;
            //algorithm=new String("Dijkstra");
            nod1=new Vector();
            nod2=new Vector();
            weight=new Vector();
            mask=new int[nr_points];

            //initialise mask with zeros (mask[x]=1 means the vertex is marked as used)
            for(int i=0;i<nr_points;i++) mask[i]=0;

            //Dijkstra:
            int []dd=new int[nr_points];
            int []pre=new int[nr_points];
            int []path=new int[nr_points+1];
            int init_vert=0, pos_in_path=0, new_vert=0;

            //initialise the vectors
            for(int i=0;i<nr_points;i++)
            {
                dd[i]=Cost[init_vert][i];
                pre[i]=init_vert;
                path[i]=-1;
            }
            pre[init_vert]=0;
            path[0]=init_vert;
            pos_in_path++;
            mask[init_vert]=1;

            for(int k=0;k<nr_points-1;k++)
            {
                //find min. cost in dd
                for(int j=0;j<nr_points;j++)
                    if(dd[j]!=0 && mask[j]==0){new_vert=j; break;}
                for(int j=0;j<nr_points;j++)
                    if(dd[j]<dd[new_vert] && mask[j]==0 && dd[j]!=0) new_vert=j;

                mask[new_vert]=1;
                path[pos_in_path]=new_vert;
                pos_in_path++;

                for(int j=0;j<nr_points;j++)
                {
                    if(mask[j]==0)
                    {
                        if(dd[j]>dd[new_vert]+Cost[new_vert][j])
                        {
                            dd[j]=dd[new_vert]+Cost[new_vert][j];
                        }
                    }
                }
            }

            //Close the cycle
            path[nr_points]=init_vert;

            //Save the solution in 3 vectors (for graphical purposes)
            for(int i=0;i<nr_points;i++)
            {
                nod1.addElement(path[i]);
                nod2.addElement(path[i+1]);
                weight.addElement(Cost[path[i]][path[i+1]]);
            }
        }
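
    For comparison (a sketch, not part of the question), a textbook single-source shortest-path version of Dijkstra over the same kind of cost matrix looks like this; dist plays the role of dd and visited the role of mask:

        // cost[i][j] is the edge weight from i to j; use a large sentinel for "no edge".
        static int[] dijkstra(int[][] cost, int source) {
            int n = cost.length;
            int[] dist = new int[n];
            boolean[] visited = new boolean[n];
            for (int i = 0; i < n; i++) dist[i] = cost[source][i]; // start with direct edges
            dist[source] = 0;
            visited[source] = true;

            for (int k = 0; k < n - 1; k++) {
                // pick the unvisited vertex with the smallest tentative distance
                int u = -1;
                for (int j = 0; j < n; j++)
                    if (!visited[j] && (u == -1 || dist[j] < dist[u])) u = j;
                visited[u] = true;

                // relax every edge leaving u
                for (int j = 0; j < n; j++)
                    if (!visited[j] && dist[u] + cost[u][j] < dist[j])
                        dist[j] = dist[u] + cost[u][j];
            }
            return dist; // dist[v] is the cheapest cost from source to v
        }

    The posted method follows the same select-cheapest-unvisited / relax loop, but it additionally records the order in which vertices are selected (path) and closes that order into a cycle, which is what the "TSP" in its name refers to.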

    Read the article

< Previous Page | 334 335 336 337 338 339 340 341 342 343 344 345  | Next Page >