Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • autorelease object not conforming to protocol does not give any warning

    - by Sahil Wasan
    I have a class ABC and a class method that returns a non-autoreleased object of that class:

        @interface ABC : NSObject
        + (ABC *)aClassMethodReturnsObjectWhichNotAutoreleased;
        @end

        @implementation ABC
        + (ABC *)aClassMethodReturnsObjectWhichNotAutoreleased {
            ABC *a = [[ABC alloc] init];
            return a;
        }
        @end

    I also have a protocol Foo:

        @protocol Foo
        @required
        - (void)abc;
        @end

    My ABC class does not conform to the Foo protocol. The first call:

        id<Foo> obj = [ABC aClassMethodReturnsObjectWhichNotAutoreleased]; // shows warning

    produces the "Non Compatible pointers.." warning, which is what I expect, since ABC does not conform to Foo. But the second call:

        id<Foo> obj = [NSArray arrayWithObjects:@"abc", @"def", nil]; // no warning

    does not produce any warning, even though NSArray does not conform to Foo either; it just returns an autoreleased object. So the compiler warns on the first call but not on the second, and I think that is because I am not returning an autoreleased object in the first one. Why does the compiler not warn on the second call, given that NSArray also does not conform to Foo? Thanks in advance.

    Read the article

  • @selector and return value

    - by user320926
    The idea is very simple: I have an HTTP download class that must support HTTP authentication, but since it is basically a background thread I would like to avoid prompting on screen directly. Instead I would like to use a delegate method to request the credentials from outside the class, for example from a view controller. I don't know whether this is possible or whether I have to use a different syntax. The class uses this delegate protocol:

        // Updater.h
        @protocol Updater <NSObject>
        - (NSDictionary *)authRequired;
        @optional
        - (void)statusUpdate:(NSString *)newStatus;
        - (void)downloadProgress:(int)percentage;
        @end

        @interface Updater : NSThread {
            ...
        }

    This is the call to the delegate method:

        // Updater.m
        // This check always fails :(
        if ([self.delegate respondsToSelector:@selector(authRequired:)]) {
            auth = [delegate authRequired];
        }

    This is the implementation of the delegate method:

        // rootViewController.m
        - (NSDictionary *)authRequired {
            // TODO: some kind of popup or modal view
            NSMutableDictionary *ret = [[NSMutableDictionary alloc] init];
            [ret setObject:@"utente" forKey:@"user"];
            [ret setObject:@"password" forKey:@"pass"];
            return ret;
        }

    Read the article

  • Memory leaks after using type_info::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add a log entry along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:

        std::string getMessageName(void) const {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of type_info::name() is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a null-terminated string representing the human-readable name of the type. The memory pointed to is cached and should never be directly deallocated.

    However, when I exit my program in the debugger, every "new" use of type_info::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed. While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how type_info may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on type_info::name(), but I'd like to know anyway if there's something I've missed.

    Read the article

  • Why does this tooltip appear *below* a translucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity less than 1.0, and a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over the form. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form; however, my form is then obviously no longer translucent. ;-) I'm testing on an XP system with .NET 3.5. If you don't see this problem on your system, let me know what operating system and version of .NET you have. I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32. Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should? Here is a minimal program to demonstrate the problem:

        using System;
        using System.Windows.Forms;

        public class Form1 : Form
        {
            private ToolTip toolTip1;
            private Label label1;

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            public Form1()
            {
                toolTip1 = new ToolTip();
                label1 = new Label();
                label1.Location = new System.Drawing.Point(105, 127);
                label1.Text = "Hover over me";
                label1.AutoSize = true;
                toolTip1.SetToolTip(label1, "This is a moderately long string, " +
                    "designed to be very long so that it will also be quite long.");
                ClientSize = new System.Drawing.Size(292, 268);
                Controls.Add(label1);
                Opacity = 0.8;
            }
        }

    Read the article

  • Are application servers necessary? Advantages of using one? (And other JEE questions)

    - by Mike
    Apologies for the long question; there seem to be other similar questions on here, but none really clears up my confusion. I'd be really grateful if someone could confirm or correct my understanding: Java Enterprise Edition is a set of APIs for building enterprise applications, which take away the burden of developing the parts of the system that aren't actually features of the application you are trying to build (i.e. messaging, transactions, etc.). To do this, you can use an application server, which implements these APIs. So you could use JBoss, Glassfish, WebSphere, WebLogic, etc., which would provide your application with these enterprise services. However, there are many other implementations of these individual services available, such as ActiveMQ for messaging, Hibernate for persistence, OpenEJB, and so on. You can download these implementations as Java libraries, include them in your application, and use the services they provide in a similar way to using the services provided by an application server. So if my understanding is correct, my questions are:

    - I've read in a lot of places that application servers are necessary for JEE features like EJB, but can't you just use an implementation such as OpenEJB and not need an application server at all?
    - Are there any features that an application server provides which you cannot get from another source?
    - Why would/wouldn't I choose an application server over a custom stack such as Tomcat, OpenEJB, ActiveMQ, and Hibernate?
    - Is Spring a complete alternative to JEE? Does it ever require an application server, or always just a servlet container? Why would someone choose Spring over JEE?

    Any help would be much appreciated!

    Read the article

  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory and wanted to start small, building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks, but I can't seem to teach this thing the pattern for XOR. Firstly, I am unclear about the learning model for backpropagation. Here is some pseudo-code to represent how I am trying to train the network. (Let's assume my network is set up properly, i.e. multiple inputs connect to a hidden layer, which connects to an output layer, and everything is wired up properly.)

        SET guess = getNetworkOutput() // Note this is using a sigmoid activation function.
        SET error = desiredOutput - guess
        SET delta = learningConstant * error * sigmoidDerivative(guess)

        For Each Node in inputNodes
            For Each Weight in inputNodes[n]
                inputNodes[n].weight[j] += delta;

        // At this point, I am assuming the first layer has been trained.
        // Then I recurse a similar function over the hidden layer and output layer,
        // the prime difference being that it further divides up the adjustment delta.

    I realize this is probably not enough to go on, and I will gladly expound on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of, but not properly. The output is always:

        XOR 1 1 [smallest number]
        XOR 0 0 [largest number]
        XOR 1 0 [medium number]
        XOR 0 1 [medium number]

    I can never train [1,1] and [0,0] to be the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!
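
    For comparison, the pseudo-code above applies one global delta to every weight, whereas standard backpropagation gives each layer its own delta, with the hidden deltas formed by pushing the output delta back through the output weights. Below is a rough, self-contained Java sketch of that update for a 2-2-1 XOR network; the layer sizes, learning rate and starting weights are arbitrary illustrative choices, not taken from the question.

        // Rough sketch, not the asker's code: backpropagation for a 2-2-1 XOR network.
        public class XorBackpropSketch {
            static double[][] wHidden = { { 0.5, -0.4 }, { 0.3, 0.8 } }; // hidden unit j's input weights
            static double[] bHidden = { 0.1, -0.2 };
            static double[] wOut = { 0.7, -0.6 };                        // output weights from hidden units
            static double bOut = 0.05;
            static final double RATE = 0.5;

            static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

            static double[] forwardHidden(double[] in) {
                double[] h = new double[2];
                for (int j = 0; j < 2; j++)
                    h[j] = sigmoid(wHidden[j][0] * in[0] + wHidden[j][1] * in[1] + bHidden[j]);
                return h;
            }

            static double forwardOut(double[] h) {
                return sigmoid(wOut[0] * h[0] + wOut[1] * h[1] + bOut);
            }

            static void trainOne(double[] in, double target) {
                double[] h = forwardHidden(in);
                double out = forwardOut(h);
                // Output delta uses the sigmoid derivative at the output: out * (1 - out).
                double deltaOut = (target - out) * out * (1 - out);
                // Hidden deltas: the output delta pushed back through the output weights,
                // times each hidden unit's own sigmoid derivative. This is the step a single
                // global delta misses.
                double[] deltaHidden = new double[2];
                for (int j = 0; j < 2; j++)
                    deltaHidden[j] = deltaOut * wOut[j] * h[j] * (1 - h[j]);
                // Each weight moves by rate * (delta of the unit it feeds) * (activation it carries).
                for (int j = 0; j < 2; j++) {
                    wOut[j] += RATE * deltaOut * h[j];
                    for (int i = 0; i < 2; i++)
                        wHidden[j][i] += RATE * deltaHidden[j] * in[i];
                    bHidden[j] += RATE * deltaHidden[j];
                }
                bOut += RATE * deltaOut;
            }

            public static void main(String[] args) {
                double[][] xs = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
                double[] ys = { 0, 1, 1, 0 };
                for (int epoch = 0; epoch < 20000; epoch++)
                    for (int k = 0; k < 4; k++)
                        trainOne(xs[k], ys[k]);
                // Outputs should head towards 0, 1, 1, 0 (initialisation-dependent).
                for (int k = 0; k < 4; k++)
                    System.out.printf("XOR %.0f %.0f -> %.3f%n",
                            xs[k][0], xs[k][1], forwardOut(forwardHidden(xs[k])));
            }
        }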

    Read the article

  • Trying to write priority queue in Java but getting "Exception in thread "main" java.lang.ClassCastException"

    - by Dan
    For my data structures class, I am trying to write a program that simulates a car wash, and I want to give fancy cars a higher priority than regular ones using a priority queue. The problem I am having has something to do with Java not being able to cast "Object" to "ArrayQueue" (a simple FIFO implementation). What am I doing wrong and how can I fix it?

        public class PriorityQueue<E> {
            private ArrayQueue<E>[] queues;
            private int highest = 0;
            private int manyItems = 0;

            public PriorityQueue(int h) {
                highest = h;
                queues = (ArrayQueue[]) new Object[highest + 1];   // <---- problem is here
            }

            public void add(E item, int priority) {
                queues[priority].add(item);
                manyItems++;
            }

            public boolean isEmpty() {
                return (manyItems == 0);
            }

            public E remove() {
                E answer = null;
                int counter = 0;
                do {
                    if (!queues[highest - counter].isEmpty()) {
                        answer = queues[highest - counter].remove();
                        counter = highest + 1;
                    } else {
                        counter++;
                    }
                } while (highest - counter >= 0);
                return answer;
            }
        }
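
    For what it's worth, an array created as new Object[] can never be cast to ArrayQueue[], because the array's runtime type really is Object[]. A sketch of a drop-in replacement for the constructor above, which allocates the array with the raw element type and then fills each slot (it assumes the asker's ArrayQueue has a no-argument constructor):

        // Hypothetical fix sketch: allocate an ArrayQueue[] (raw type) and cast, which is the
        // usual idiom for arrays of a generic type, then initialise every slot so add() does
        // not hit a null element.
        @SuppressWarnings("unchecked")
        public PriorityQueue(int h) {
            highest = h;
            queues = (ArrayQueue<E>[]) new ArrayQueue[highest + 1];
            for (int i = 0; i <= highest; i++) {
                queues[i] = new ArrayQueue<E>();   // assumes a no-arg constructor exists
            }
        }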

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable. When starting the client, the user is presented with a number of entities, without disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given the authentication is successful, the user is granted access to the sensitive information. The client hosts a domain model and I was thinking of implementing this as some kind of "lazy loading", making the first request instantiate the entities and later refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns:

    - What if the user logs in and then out? The sensitive data has been loaded but must be disclosed once again.
    - One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations).
    - As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.

    Any insights or discussion is appreciated!

    Read the article

  • Streaming audio not working in Android

    - by user320293
    Hi, I'm sure that this question has been asked before, but I've been unable to find a solid answer. I'm trying to play streaming audio from a server. It's an audio/aac stream: http://3363.live.streamtheworld.com:80/CHUMFMAACCMP3. The code that I'm using is:

        private void playAudio(String str) {
            try {
                final String path = str;
                if (path == null || path.length() == 0) {
                    Toast.makeText(RadioPlayer.this, "File URL/path is empty",
                            Toast.LENGTH_LONG).show();
                } else {
                    // If the path has not changed, just start the media player
                    MediaPlayer mp = new MediaPlayer();
                    mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
                    try {
                        mp.setDataSource(getDataSource(path));
                        mp.prepareAsync();
                        mp.start();
                    } catch (IOException e) {
                        Log.i("ONCREATE IOEXCEPTION", e.getMessage());
                    } catch (Exception e) {
                        Log.i("ONCREATE EXCEPTION", e.getMessage());
                    }
                }
            } catch (Exception e) {
                Log.e("RPLAYER EXCEPTION", "error: " + e.getMessage(), e);
            }
        }

        private String getDataSource(String path) throws IOException {
            if (!URLUtil.isNetworkUrl(path)) {
                return path;
            } else {
                URL url = new URL(path);
                URLConnection cn = url.openConnection();
                cn.connect();
                InputStream stream = cn.getInputStream();
                if (stream == null)
                    throw new RuntimeException("stream is null");
                File temp = File.createTempFile("mediaplayertmp", ".dat");
                temp.deleteOnExit();
                String tempPath = temp.getAbsolutePath();
                FileOutputStream out = new FileOutputStream(temp);
                byte buf[] = new byte[128];
                do {
                    int numread = stream.read(buf);
                    if (numread <= 0)
                        break;
                    out.write(buf, 0, numread);
                } while (true);
                try {
                    stream.close();
                } catch (IOException ex) {
                    Log.e("RPLAYER IOEXCEPTION", "error: " + ex.getMessage(), ex);
                }
                return tempPath;
            }
        }

    Is this the correct implementation? I'm not sure where I'm going wrong. Can someone please help me with this?
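
    Not presented as the definitive fix for this particular stream, but the usual pattern differs from the code above in two ways: an HTTP stream can normally be handed straight to setDataSource(), and with prepareAsync() the call to start() must wait for onPrepared. A rough sketch of that shape, shown as an isolated method (the usual android.media, android.util and java.io imports are assumed):

        // Minimal sketch (not the asker's code): stream straight from the URL and only call
        // start() once the player reports that preparation has finished.
        private void playStream(String url) {
            MediaPlayer mp = new MediaPlayer();
            mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
            mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                public void onPrepared(MediaPlayer player) {
                    player.start();   // safe to start only after preparation completes
                }
            });
            mp.setOnErrorListener(new MediaPlayer.OnErrorListener() {
                public boolean onError(MediaPlayer player, int what, int extra) {
                    Log.e("RPLAYER", "MediaPlayer error: what=" + what + " extra=" + extra);
                    return true;      // error was handled here
                }
            });
            try {
                mp.setDataSource(url);   // e.g. the question's stream URL
                mp.prepareAsync();       // prepare in the background; onPrepared fires when ready
            } catch (IOException e) {
                Log.e("RPLAYER", "setDataSource failed", e);
            }
        }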

    Read the article

  • Code Golf: Numeric Ranges

    - by SLaks
    Mods: Can you please make this Community Wiki?

    Challenge: Compactify a long list of numbers by replacing consecutive runs with ranges.

    Example input: 1, 2, 3, 4, 7, 8, 10, 12, 13, 14, 15
    Output: 1 - 4, 7, 8, 10, 12 - 15

    Note that ranges of two numbers should be left as is. (7, 8; not 7 - 8)

    Rules: You can accept a list of integers (or equivalent datatype) as a method parameter, from the command line, or from standard in (pick whichever option results in shorter code). You can output a list of strings by printing them, or by returning either a single string or set of strings.

    Reference Implementation (C#):

        IEnumerable<string> Sample(IList<int> input) {
            for (int i = 0; i < input.Count; ) {
                var start = input[i];
                int size = 1;
                while (++i < input.Count && input[i] == start + size)
                    size++;
                if (size == 1)
                    yield return start.ToString();
                else if (size == 2) {
                    yield return start.ToString();
                    yield return (start + 1).ToString();
                }
                else if (size > 2)
                    yield return start + " - " + (start + size - 1);
            }
        }
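
    For readers following along in Java rather than C#, here is a rough, un-golfed port of the reference implementation; it is only illustrative and not part of the original challenge.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        // Rough Java port of the C# reference implementation above.
        public class Ranges {
            static List<String> compact(List<Integer> input) {
                List<String> out = new ArrayList<String>();
                for (int i = 0; i < input.size(); ) {
                    int start = input.get(i);
                    int size = 1;
                    while (++i < input.size() && input.get(i) == start + size)
                        size++;
                    if (size == 1) {
                        out.add(Integer.toString(start));
                    } else if (size == 2) {              // runs of two stay as two numbers
                        out.add(Integer.toString(start));
                        out.add(Integer.toString(start + 1));
                    } else {
                        out.add(start + " - " + (start + size - 1));
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                System.out.println(compact(Arrays.asList(1, 2, 3, 4, 7, 8, 10, 12, 13, 14, 15)));
                // prints: [1 - 4, 7, 8, 10, 12 - 15]
            }
        }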

    Read the article

  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi, is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions. Obviously, I can do something like:

        double clampedA;
        double a = calculate();
        clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = a < MY_MIN ? MY_MIN : a;

    or

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation). EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).
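
    The question asks for portable C, but purely for illustration here is one classic branch-free integer min/max, written in Java; the same arithmetic carries over to two's-complement int in C, and so to a 16.16 fixed-point value stored in an int. Treat it as a sketch with caveats: (a - b) can overflow for widely separated operands, and compilers often turn the plain ternary into a conditional move anyway, so this needs measuring rather than assuming.

        // Illustrative branch-free min/max/clamp for 32-bit integers (16.16 fixed point fits here).
        public final class FixedClamp {
            static int min(int a, int b) {
                int d = a - b;
                return b + (d & (d >> 31));   // a if a < b, otherwise b
            }

            static int max(int a, int b) {
                int d = a - b;
                return a - (d & (d >> 31));   // a if a >= b, otherwise b
            }

            static int clamp(int x, int lo, int hi) {
                return min(max(x, lo), hi);
            }

            public static void main(String[] args) {
                int one = 1 << 16;                                          // 1.0 in 16.16
                System.out.println(clamp(3 * one, 0, 2 * one) == 2 * one);  // true: 3.0 -> 2.0
                System.out.println(clamp(-one, 0, 2 * one) == 0);           // true: -1.0 -> 0.0
                System.out.println(clamp(one, 0, 2 * one) == one);          // true: 1.0 unchanged
            }
        }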

    Read the article

  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say:

        Although NSDraggingDestination is declared as an informal protocol, the NSWindow and NSView subclasses you create to adopt the protocol need only implement those methods that are pertinent. (The NSWindow and NSView classes provide private implementations for all of the methods.) Either a window object or its delegate may implement these methods; however, the delegate's implementation takes precedence if there are implementations in both places.

    Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, and I did not want to allow drops anywhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or write some giant spaghetti-conditional drop handler on the window's delegate that hit-checks the draggingLocation against each view in order to select the different drop-area behaviors for each field. The centralized NSWindow-delegate-based drop handler approach seems like it would be onerous in any case where you had more than a small handful of drop destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off trying to drive bindings by setting the UI value programmatically, so now you're stuck working your way around that too. So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.

    Read the article

  • Behavior difference between UIView.subviews and [NSView subviews]

    - by zpasternack
    I have a piece of code in an iPhone app which removes all subviews from a UIView subclass. It looks like this:

        NSArray* subViews = self.subviews;
        for (UIView *aView in subViews) {
            [aView removeFromSuperview];
        }

    This works fine. In fact, I never really gave it much thought until I tried nearly the same thing in a Mac OS X app (from an NSView subclass):

        NSArray* subViews = [self subviews];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }

    That totally doesn't work. Specifically, at runtime, I get this:

        *** Collection <NSCFArray: 0x1005208a0> was mutated while being enumerated.

    I ended up doing it like so:

        NSArray* subViews = [[self subviews] copy];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }
        [subViews release];

    That's fine. What's bugging me, though, is why does it work on the iPhone? subviews is a copy property:

        @property(nonatomic, readonly, copy) NSArray *subviews;

    My first thought was, maybe @synthesize'd getters return a copy when the copy attribute is specified. The doc is clear on the semantics of copy for setters, but doesn't appear to say either way for getters (or at least, it's not apparent to me). And actually, doing a few tests of my own, this clearly does not seem to be the case. Which is good; I think returning a copy would be problematic, for a few reasons. So the question is: how does the above code work on the iPhone? NSView is clearly returning a pointer to the actual array of subviews, and perhaps UIView isn't. Perhaps it's simply an implementation detail of UIView, and I shouldn't get worked up about it. Can anyone offer any insight?

    Read the article

  • How to implement a single instance app manager in java (CVM PhoneME)?

    - by Marcus
    Hi, I'm working on an application manager for an embedded platform based on the CVM PhoneME VM. The VM is started by a C++ app which configures the CVM and then launches the VM itself. This C++ app is called from the command line, passing the main class name and the classpath of a Java application. There is a main Java app (let's call it Manager) which loads the app using classloaders. I want this Manager to be a single-instance application so that it can track all running apps. In other words: the first time I start an app (app1, for instance), the VM will launch and the Manager will load app1. On further calls to load other apps (app2, app3 and so on), the same instance of the Manager would load those apps. The Manager is working fine, except for the fact that it is not a single instance. Is it possible to do what I want? I found this: http://www.knowledgesutra.com/forums/topic/59760-how-to-implement-single-instance-application-on-java/ It is almost exactly what I want, except for the app loading part. However, the necessary packages are not available in the CVM implementation. Thanks very much.
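
    One common approach, sketched below in plain Java, is to let the first Manager bind a loopback port and have every later invocation simply forward the requested app name to it. This is only an illustration: the port number and the one-line "protocol" are invented, and it is an assumption that java.net sockets are available in the target CVM profile.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ConnectException;
        import java.net.InetAddress;
        import java.net.ServerSocket;
        import java.net.Socket;

        // Sketch: the first invocation binds a loopback port and becomes "the" Manager;
        // later invocations find the port taken and hand over the app name instead.
        public class SingleInstanceManager {
            private static final int PORT = 52000;   // arbitrary, hypothetical port

            public static void main(String[] args) throws IOException {
                String appToLaunch = args.length > 0 ? args[0] : "";
                try {
                    // Try to hand the request to an already-running Manager.
                    Socket client = new Socket(InetAddress.getByName("127.0.0.1"), PORT);
                    PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                    out.println(appToLaunch);
                    client.close();
                    return;                          // another instance is in charge; exit quietly
                } catch (ConnectException e) {
                    // Nobody is listening: this process becomes the single Manager instance.
                }
                ServerSocket lock = new ServerSocket(PORT, 5, InetAddress.getByName("127.0.0.1"));
                launchApp(appToLaunch);
                while (true) {                       // accept launch requests from later invocations
                    Socket peer = lock.accept();
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(peer.getInputStream()));
                    String requested = in.readLine();
                    peer.close();
                    if (requested != null && requested.length() > 0) {
                        launchApp(requested);
                    }
                }
            }

            private static void launchApp(String mainClass) {
                // Placeholder for the Manager's existing classloader-based app loading.
                System.out.println("Manager loading app: " + mainClass);
            }
        }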

    Read the article

  • iPhone app. Creating a custom UIView that contains UITextField and UIButton.

    - by Dmitry Burchik
    Hi all. I am new to iPhone programming and I have an issue. I need to create a custom user control that I will add to my UIScrollView dynamically. The control has a UITextField and a UIButton. See the code below:

        #import <UIKit/UIKit.h>

        @interface FieldWithValueControl : UIView {
            UITextField *txtTagName;
            UIButton *addButton;
        }
        @property (nonatomic, readonly) UITextField *txtTagName;
        @property (nonatomic, readonly) UIButton *addButton;
        @end

        #import "FieldWithValueControl.h"

        #define ITEM_SPACING 10
        #define ITEM_HEIGHT 20
        #define SWITCHBOX_WIDTH 100
        #define SCREEN_WIDTH 320
        #define ITEM_FONT_SIZE 14
        #define TEXTBOX_WIDTH 150

        @implementation FieldWithValueControl

        @synthesize txtTagName;
        @synthesize addButton;

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                txtTagName = [[UITextField alloc] initWithFrame:CGRectMake(0, 0, TEXTBOX_WIDTH, ITEM_HEIGHT)];
                txtTagName.borderStyle = UITextBorderStyleRoundedRect;
                addButton = [UIButton buttonWithType:UIButtonTypeContactAdd];
                [addButton setFrame:CGRectMake(ITEM_SPACING + TEXTBOX_WIDTH, 0, ITEM_HEIGHT, ITEM_HEIGHT)];
                [addButton addTarget:self action:@selector(addButtonTouched:) forControlEvents:UIControlEventTouchUpInside];
                [self addSubview:txtTagName];
                [self addSubview:addButton];
            }
            return self;
        }

        - (void)addButtonTouched:sender {
            UIButton *button = (UIButton*)sender;
            NSString *title = [button titleLabel].text;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
        }

        - (void)dealloc {
            [txtTagName release];
            [addButton release];
            [super dealloc];
        }

        @end

    In my code I create an object of that class and add it to the scroll view on the form:

        FieldWithValueControl *newTagControl = (FieldWithValueControl*)[[FieldWithValueControl alloc]
            initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING, 0, 0)];
        [scrollView addSubview:newTagControl];

    The control looks fine, but if I tap the text field or the button nothing happens. The keyboard doesn't appear, the button is not clickable, etc.

    Read the article

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi all, let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind. I have decided to tackle this problem by using the Microsoft Message Queue and performing inserts asynchronously as time permits. However, I quickly ran into a problem. Certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction - one that still hasn't been inserted). The way I decided to tackle this problem is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer (I have considered the other issues that occur in such a scenario, essentially dirty reads and such, and have concluded that for my purposes I can control these issues). However, this is where things get a little nasty... I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one where I can minimize the duplication between the SQL queries and MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly. Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered. Thanks again.
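
    The question is about .NET, but the pattern being described is language-agnostic. Purely as an illustration, here is a rough Java sketch of a "pending writes overlay", where a single predicate drives both the not-yet-flushed queue and the persisted data, so recent inserts stay visible to queries; every type below is hypothetical, and translating the predicate into SQL (or a LINQ expression on the .NET side) would be a separate concern.

        import java.util.ArrayList;
        import java.util.LinkedList;
        import java.util.List;
        import java.util.Queue;

        // Hypothetical sketch: one predicate interface queries pending and persisted data alike.
        interface Spec<T> {
            boolean matches(T item);
        }

        class Transaction {
            final String id;
            Transaction(String id) { this.id = id; }
        }

        class TransactionStore {
            private final Queue<Transaction> pending = new LinkedList<Transaction>(); // awaiting insert
            private final List<Transaction> persisted = new ArrayList<Transaction>(); // stand-in for SQL

            synchronized void enqueueInsert(Transaction t) {
                pending.add(t);                   // real code would also post to the message queue here
            }

            synchronized List<Transaction> query(Spec<Transaction> spec) {
                List<Transaction> result = new ArrayList<Transaction>();
                for (Transaction t : pending) {          // overlay: pending items first
                    if (spec.matches(t)) result.add(t);
                }
                for (Transaction t : persisted) {        // then whatever the database would return
                    if (spec.matches(t)) result.add(t);
                }
                return result;
            }

            public static void main(String[] args) {
                TransactionStore store = new TransactionStore();
                store.enqueueInsert(new Transaction("t-42"));
                List<Transaction> hits = store.query(new Spec<Transaction>() {
                    public boolean matches(Transaction t) { return "t-42".equals(t.id); }
                });
                System.out.println("found " + hits.size() + " match(es)");   // 1, even before flushing
            }
        }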

    Read the article

  • How can I check if a TTTabItem gets selected?

    - by schoash
    I have several TTTabGrids in my view and now I am stuck with the problem that I can't figure out how to detect when a TTTabItem gets touched. If someone selects an item in one of the TTTabGrids, I should update some labels. Can someone tell me a way to detect when the selection in a TTTabGrid gets changed? My code looks like this:

        @interface MyTTTabGrid : TTTabGrid
        @end

        @implementation MyTTTabGrid

        - (id)initWithFrame:(CGRect)frame columns:(NSInteger)columns {
            if (self = [super initWithFrame:frame]) {
                self.style = TTSTYLE(tabGrid);
                _columnCount = columns;
            }
            return self;
        }

        @end

        - (void)viewDidLoad {
            [super viewDidLoad];
            _tabBarHours = [[MyTTTabGrid alloc] initWithFrame:CGRectMake(10, _tabBarZone.bottom + 10, 300, 0)
                                                      columns:7];
            _tabBarHours.backgroundColor = [UIColor clearColor];
            NSMutableArray *tmpHours = [[NSMutableArray alloc] init];
            int i = 6;
            while (i < 20) {
                [tmpHours addObject:[[[TTTabItem alloc] initWithTitle:[NSString stringWithFormat:@"%d", i]] autorelease]];
                i++;
            }
            _tabBarHours.tabItems = tmpHours;
            [_tabBarHours sizeToFit];
            [self.view addSubview:_tabBarHours];
            [tmpHours release];
        }

    Read the article

  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me when designing code, especially libraries. There seems to be some interest in it, so I thought it might make a good community wiki. The fail method in Monad is considered by some to be a wart: a somewhat arbitrary addition to the class that does not come from the original category theory. But of course, in the current state of things, many Monad types have logical and useful fail instances. The MonadPlus class is a subclass of Monad that provides an mzero method, which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad, or restrict his code to the MonadPlus class just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in the wiki page about proposals to reform the MonadPlus class. So I guess I have one specific question: What monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks! EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern match failures, as in

        (x:xs) <- return []

    call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax when they included fail in Monad.

    Read the article

  • OSGI, Servlets and JPA hello world / tutorial / example

    - by Kamil
    I want to build a web application which is basically a RESTful web service serving JSON messages. I would like it to be as simple as possible. I was thinking about using servlets (with annotations). JPA as a database layer is a must - TopLink or Hibernate - preferably working on Tomcat. I want to have the app divided into modules serving different functionality (auth service, customer service, etc.), and I would like to be able to update those modules without reinstalling the whole application on the server - like Eclipse plugins: the user is notified (when he enters the webapp's home URL) that an update is available, clicks it, and the app downloads and installs the updated module. I think this functionality can be built with OSGi, but I can't find any example code or tutorial with a simple hello-world-style updatable servlet providing some data from a database through JPA. I'm looking for advice:

    - Is OSGi the right tool for this, or can it be done with something simpler?
    - Where can I find some examples covering the topic (or topics) I need for this project?
    - Which OSGi implementation would be best (i.e. simplest) for this task?

    My knowledge of OSGi is basic. I know how bundles are described, and I understand the concept of the OSGi container and what it does. I have never created any OSGi app yet.
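
    As a starting point, here is a minimal sketch (not from any particular tutorial) of the "updatable servlet" idea: a bundle activator that registers a servlet with the OSGi HttpService and unregisters it on stop, so updating the bundle swaps the servlet without redeploying the rest of the application. Error handling and the JPA layer are omitted, and a ServiceTracker or Declarative Services would be more robust than the naive lookup shown.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.osgi.framework.BundleActivator;
        import org.osgi.framework.BundleContext;
        import org.osgi.framework.ServiceReference;
        import org.osgi.service.http.HttpService;

        // Sketch: one bundle = one module; it exposes a JSON endpoint via the HttpService.
        public class HelloActivator implements BundleActivator {
            private HttpService http;

            public void start(BundleContext context) throws Exception {
                ServiceReference ref = context.getServiceReference(HttpService.class.getName());
                http = (HttpService) context.getService(ref);
                http.registerServlet("/hello", new HttpServlet() {
                    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                            throws ServletException, IOException {
                        resp.setContentType("application/json");
                        resp.getWriter().write("{\"message\":\"hello from an OSGi bundle\"}");
                    }
                }, null, null);
            }

            public void stop(BundleContext context) throws Exception {
                if (http != null) {
                    http.unregister("/hello");   // clean up when the bundle is stopped or updated
                }
            }
        }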

    Read the article

  • problem with kCFSocketReadCallBack

    - by zp26
    Hello. I have a problem with my program. I created a socket with kCFSocketReadCallBack. My intention was for AcceptCallback to be called only when a string arrives on the socket. Instead, my program does not just accept the connection: it always goes into startReceive, then stops, and sometimes the program crashes. Can anybody help? Thanks.

        readSocket = CFSocketCreateWithNative(
            NULL,
            fd,
            kCFSocketReadCallBack,
            AcceptCallback,
            &context
        );

        static void AcceptCallback(CFSocketRef s, CFSocketCallBackType type, CFDataRef address,
                                   const void *data, void *info)
        // Called by CFSocket when someone connects to our listening socket.
        // This implementation just bounces the request up to Objective-C.
        {
            ServerVistaController * obj;
            #pragma unused(address)
            // assert(address == NULL);
            assert(data != NULL);

            obj = (ServerVistaController *) info;
            assert(obj != nil);

            #pragma unused(s)
            assert(s == obj->listeningSocket);

            if (type & kCFSocketAcceptCallBack) {
                [obj acceptConnection:*(int *)data];
            }
            if (type & kCFSocketAcceptCallBack) {
                [obj startReceive:*(int *)data];
            }
        }

        - (void)startReceive:(int)fd
        {
            CFReadStreamRef readStream = NULL;
            CFIndex bytes;
            UInt8 buffer[MAXLENGTH];

            CFStreamCreatePairWithSocket(kCFAllocatorDefault, fd, &readStream, NULL);
            if (!readStream) {
                close(fd);
                [self updateLabel:@"No readStream"];
            }

            CFReadStreamOpen(readStream);
            [self updateLabel:@"OpenStream"];

            bytes = CFReadStreamRead(readStream, buffer, sizeof(buffer));
            if (bytes < 0) {
                [self updateLabel:(NSString*)buffer];
                close(fd);
            }
            CFReadStreamClose(readStream);
        }

    Read the article

  • Not all symbols of a DLL-exported class are exported (VS9)

    - by mandrake
    I'm building a DLL from a group of static libraries and I'm having a problem where only parts of classes are exported. What I'm doing is declaring all symbols I want to export with a preprocessor definition like:

        #if defined(MYPROJ_BUILD_DLL)   // Build as a DLL
        #  define MY_API __declspec(dllexport)
        #elif defined(MYPROJ_USE_DLL)   // Use as a DLL
        #  define MY_API __declspec(dllimport)
        #else                           // Build or use as a static lib
        #  define MY_API
        #endif

    For example:

        class MY_API Foo {
            ...
        };

    I then build the static library with MYPROJ_BUILD_DLL and MYPROJ_USE_DLL undefined, causing a static library to be built. In another build I create a DLL from these static libraries, so I define MYPROJ_BUILD_DLL, causing all symbols I want to export to be attributed with __declspec(dllexport) (this is done by including all the static library headers in the DLL project's source file). OK, so now to the problem. When I use this new DLL I get unresolved externals because not all symbols of a class are exported. For example, in a class like this:

        class MY_API Foo {
        public:
            Foo(char const*);
            int bar();
        private:
            Foo(char const*, char const*);
        };

    only Foo::Foo(char const*, char const*) and int Foo::bar() are exported. How can that be? I could understand it if the entire class were missing, due to e.g. forgetting to include the header in the DLL build, but it's only partially missing. Also, if Foo::Foo(char const*) were not implemented, the DLL build would have unresolved external errors; but the build is fine (I also double-checked for declarations without implementation). Note: the combined size of the static libraries I'm combining is in the region of 30 MB, and the resulting DLL is 1.2 MB. I'm using Visual Studio 9.0 (2008) to build everything, and Depends to check for exported symbols.

    Read the article

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well.

    As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense--in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable.

    So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager--the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc.--wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?
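
    For concreteness, the client-side view being described usually reduces to something like the following generic CCI sketch; the JNDI name is invented, and whatever ConnectionManager the looked-up factory carries is exactly the piece whose location the question is asking about.

        import javax.naming.InitialContext;
        import javax.resource.cci.Connection;
        import javax.resource.cci.ConnectionFactory;

        // Generic CCI sketch of the client-side usage discussed above (JNDI name is made up).
        public class ClientSideLookup {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext();
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:comp/env/eis/MyAdapter");
                Connection connection = cf.getConnection();   // routed through the ConnectionManager
                try {
                    // ... interact with the EIS through the adapter's Interaction/Record API ...
                } finally {
                    connection.close();                        // hands the managed connection back
                }
            }
        }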

    Read the article

  • GWT + Seam, cannot fetch scoped beans from gwt servlet in seam resource servlet.

    - by David Göransson
    Hello all, I am trying to get session- and conversation-scoped beans into a GWT servlet in the Seam resource servlet. I have a conversation-scoped bean:

        @Name("viewFormCopyAction")
        @Scope(ScopeType.CONVERSATION)
        public class ViewFormCopyAction {}

    and a session-scoped bean:

        @Name("authenticator")
        @Scope(ScopeType.SESSION)
        public class AuthenticatorAction {}

    There is a RemoteService interface:

        @RemoteServiceRelativePath("strokesService")
        public interface StrokesService extends RemoteService {
            public Position getPosition(int conversationId);
        }

    with the corresponding async interface:

        public interface StrokesServiceAsync extends RemoteService {
            public void getPosition(int conversationId, AsyncCallback callback);
        }

    and the implementation:

        @Name("com.web.actions.forms.gwt.client.StrokesService")
        @Scope(ScopeType.EVENT)
        public class StrokesServiceImpl implements StrokesService {

            @In
            Manager manager;

            @Override
            @WebRemote
            public Position getPosition(int conversationId) {
                manager.switchConversation("" + conversationId);
                ViewFormCopyAction vfca = (ViewFormCopyAction) Component.getInstance("viewFormCopyAction");
                AuthenticatorAction aa = (AuthenticatorAction) Component.getInstance("authenticator");
                return null;
            }
        }

    The GWT page is within an IFrame in a regular Seam page, and the conversationId is propagated with the src attribute of the IFrame. Both bean objects end up with only null values. Can anyone see anything wrong with the code? I know that I could use strings instead of the int, but never mind that at this point.

    Read the article

  • `enable_shared_from_this` has a non-virtual destructor

    - by Shtééf
    I have a pet project with which I experiment with new features of the upcoming C++0x standard. While I have experience with C, I'm fairly new to C++. To train myself into best practices (besides reading a lot), I have enabled some strict compiler parameters (using GCC 4.4.1):

        -std=c++0x -Werror -Wall -Winline -Weffc++ -pedantic-errors

    This has worked fine for me. Until now, I have been able to resolve all obstacles. However, I have a need for enable_shared_from_this, and this is causing me problems. I get the following warning (error, in my case) when compiling my code (probably triggered by -Weffc++):

        base class ‘class std::enable_shared_from_this<Package>’ has a non-virtual destructor

    So basically, I'm a bit bugged by this implementation of enable_shared_from_this, because:

    - A destructor of a class that is intended for subclassing should always be virtual, IMHO.
    - The destructor is empty, why have it at all?
    - I can't imagine anyone would want to delete their instance by reference to enable_shared_from_this.

    But I'm looking for ways to deal with this, so my question is really: is there a proper way to deal with this? And: am I correct in thinking that this destructor is bogus, or is there a real purpose to it?

    Read the article

  • Java: Tracking a user login session - Session EJBs vs HTTPSession

    - by bguiz
    If I want to keep track of conversational state with each client using my web application, which is the better alternative to use: a session EJB or an HTTP session?

    Using HTTP session:

        // request is a variable of the class javax.servlet.http.HttpServletRequest
        // UserState is a POJO
        HttpSession session = request.getSession(true);
        UserState state = (UserState) (session.getAttribute("UserState"));
        if (state == null) {
            // create default value
            ...
        }
        String uid = state.getUID();
        // now do things with the user id

    Using a session EJB, in the implementation of a ServletContextListener registered as a web application listener in WEB-INF/web.xml:

        // UserState is NOT a POJO this time; it is
        // the interface of the UserStateBean stateful session EJB
        @EJB
        private UserState userStateBean;

        public void contextInitialized(ServletContextEvent sce) {
            ServletContext servletContext = sce.getServletContext();
            servletContext.setAttribute("UserState", userStateBean);
            ...

    In a JSP:

        public void jspInit() {
            UserState state = (UserState) (getServletContext().getAttribute("UserState"));
            ...
        }

    Elsewhere in the body of the same JSP:

        String uid = state.getUID();
        // now do things with the user id

    It seems to me that they are almost the same, with the main difference being that the UserState instance is transported in the HttpRequest's HttpSession in the former, and in a ServletContext in the latter. Which of the two methods is more robust, and why?
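
    One point worth making explicit, since the two snippets above differ in it: the ServletContext is a single object shared by every client of the web application, while each client gets its own HttpSession. A small sketch of the per-client variant (class and attribute names are made up):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;

        // Sketch of per-client state: getServletContext() is application-wide, whereas
        // request.getSession(true) is private to the calling client, so conversational
        // state belongs in the session rather than the context.
        public class UserStateServlet extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                HttpSession session = request.getSession(true);
                UserState state = (UserState) session.getAttribute("UserState");
                if (state == null) {
                    state = new UserState();                 // lazily create this client's state
                    session.setAttribute("UserState", state);
                }
                // Placing it here instead would share (and clobber) it across all clients:
                // getServletContext().setAttribute("UserState", state);
                response.getWriter().println("uid=" + state.getUID());
            }

            // Hypothetical POJO standing in for the question's UserState.
            public static class UserState {
                private final String uid = java.util.UUID.randomUUID().toString();
                public String getUID() { return uid; }
            }
        }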

    Read the article
