Search Results

Search found 8440 results on 338 pages for 'wms implementation'.


  • Can't touch UITextField on UIScrollView

    - by Chris B
    Hey guys, I know this has been talked about a lot. I think I've gone through every question on this site and still haven't been able to get this working. I'm new to developing, but I have a good sense of what's going on with all of my code. I definitely don't have a lot of experience, though; this is my first iPhone app. I'm making a data entry screen that is composed of multiple UITextFields within a UIScrollView. I'll avoid explaining the other details for now, as it seems it's a very basic problem.

    Without a scroll view, the text fields work perfectly: I can touch a text field and the keyboard or picker view shows up appropriately. When I add the text fields to a scroll view, the scroll view works, but the text fields don't receive my touches. Here's the key: when 'User Interaction' is ENABLED, the scroll view works but touches on the text fields are NOT registered. When 'User Interaction' is DISABLED, the scroll view doesn't work, but touches on the text fields ARE registered and the keyboard/picker pops up.

    From the other posts I have seen people creating subclasses and overriding the touch methods in a separate implementation. I've seen people using custom content views (subviews?), I've seen some solutions that are now obsolete since the APIs have changed in newer versions of the SDK, and I am just completely stuck. I will leave out my code for now, because maybe there is a solution that someone has without requiring my code. If someone needs to see my code, I will put it up. My app is built against the 3.1.3 SDK. If anyone has ANY information that could help, it would be so greatly appreciated.
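
    A sketch of the approach that usually comes up for this combination (assuming the goal is taps reaching the fields while drags still scroll; untested against the asker's project): deliver touches to the content immediately, and only let the scroll view cancel them for views that aren't text fields.

        @interface TouchForwardingScrollView : UIScrollView
        @end

        @implementation TouchForwardingScrollView
        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                self.delaysContentTouches = NO;    // hand touches to subviews right away
                self.canCancelContentTouches = YES;
            }
            return self;
        }

        // Called when a drag starts on a subview: returning NO lets the text
        // field keep the touch; returning YES lets the scroll view take over.
        - (BOOL)touchesShouldCancelInContentView:(UIView *)view {
            if ([view isKindOfClass:[UITextField class]]) return NO;
            return [super touchesShouldCancelInContentView:view];
        }
        @end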


  • Java Input/Output streams for unnamed pipes created in native code?

    - by finrod
    Is there a way to easily create Java input/output streams for unnamed pipes created in native code?

    Motivation: I need my own implementation of the Process class. The native code spawns a new process with the subprocess' IO streams redirected to unnamed pipes.

    Problem: the file descriptors for the correct ends of those pipes make their way to Java. At this point I get stuck, as I cannot create a new FileDescriptor to pass to a FileInputStream/FileOutputStream. I have used reflection to get around the problem and got communication with a simple bouncer subprocess running. However, I have a notion that this is not the cleanest way to go.

    Have you used this approach? Do you see any problems with it? (The platform will never change.) Searching around the internets revealed a similar solution using native code. Any thoughts before I dive into heavy testing of this approach are very welcome. I would like to give existing code a shot before writing my own IO stream implementations... Thank you.
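
    For reference, the reflection workaround alluded to above usually looks roughly like this (a sketch; the private "fd" field is an implementation detail of the Sun/Oracle JDK, not public API, which is exactly why it feels unclean -- tolerable only because the platform is fixed):

        import java.io.FileDescriptor;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.lang.reflect.Field;

        public final class NativeFd {
            // Wrap a raw file descriptor (obtained via JNI) in a FileDescriptor.
            public static FileDescriptor wrap(int nativeFd) throws Exception {
                FileDescriptor descriptor = new FileDescriptor();
                Field fdField = FileDescriptor.class.getDeclaredField("fd");
                fdField.setAccessible(true);
                fdField.setInt(descriptor, nativeFd);
                return descriptor;
            }

            public static InputStream inputFor(int nativeFd) throws Exception {
                return new FileInputStream(wrap(nativeFd));
            }

            public static OutputStream outputFor(int nativeFd) throws Exception {
                return new FileOutputStream(wrap(nativeFd));
            }
        }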


  • iPhone: Helpful Classes or extended Subclasses which should have been in the SDK

    - by disp
    This is more a community sharing post than a real question. In my iPhone OS projects I'm always importing a helper class with useful methods I can reuse in just about every project. So I thought it might be a good idea if everyone shared some of their favorite methods that should have been in everyone's toolbox. I'll start with an extension of the NSString class, so I can make strings with dates on the fly, providing a format and a locale. Maybe someone can find some use for it.

        @implementation NSString (DateHelper)
        + (NSString *)stringWithDate:(NSDate *)date withFormat:(NSString *)format withLocaleIdent:(NSString *)localeString {
            NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
            // For example @"de-DE", or @"en-US"
            NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:localeString];
            [dateFormatter setLocale:locale];
            // For example @"HH:mm"
            [dateFormatter setDateFormat:format];
            NSString *string = [dateFormatter stringFromDate:date];
            [dateFormatter release];
            [locale release];
            return string;
        }
        @end

    I'd love to see some of your tools.
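
    A hypothetical call site for the category above (names exactly as declared there):

        NSString *stamp = [NSString stringWithDate:[NSDate date]
                                        withFormat:@"HH:mm"
                                   withLocaleIdent:@"de-DE"];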


  • Invoking different methods on threads

    - by Kraken
    I have a main process, main. It creates 10 threads (say), and what I want to do is the following:

        while (required) {
            Thread t = new Thread(new ClassImplementingRunnable());
            t.start();
            counter++;
        }

    Now I have the list of these threads, and for each thread I want to perform the same set of operations, so I put that implementation in the run method of ClassImplementingRunnable. After the threads have finished their execution, I want to wait for all of them to stop and then invoke them again, but this time serially, not in parallel. For this I join each thread, to wait for them to finish execution, but after that I am not sure how to invoke them again and run that piece of code serially. Can I do something like:

        for (each thread) {
            t.reinvoke();  // how can I do that?
            t.doThis();    // also, where does doThis() go, given that my ClassImplementingRunnable is an inner class?
        }

    Also, I want to use the same threads, i.e. I want them to continue from where they left off, but in a serial manner. I am not sure how to go about the last piece of pseudocode. Kindly help. Working with Java.
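
    A sketch of one way to get the two phases, with one caveat up front: a java.lang.Thread cannot be restarted once it terminates, but the Runnable can be reused, and it keeps whatever state it built up in phase one -- which covers "continue from where they left off" as long as that state lives in the Runnable itself.

        import java.util.ArrayList;
        import java.util.List;

        class ClassImplementingRunnable implements Runnable {
            public void run() {
                // the shared work; keep any per-task state in fields here
            }
        }

        public class TwoPhase {
            public static void main(String[] args) throws InterruptedException {
                List<Runnable> tasks = new ArrayList<Runnable>();
                List<Thread> threads = new ArrayList<Thread>();

                for (int i = 0; i < 10; i++) {
                    Runnable task = new ClassImplementingRunnable();
                    tasks.add(task);
                    Thread t = new Thread(task);
                    threads.add(t);
                    t.start();                          // phase 1: all in parallel
                }
                for (Thread t : threads) t.join();      // wait for every thread to finish

                for (Runnable task : tasks) task.run(); // phase 2: same tasks, serially
            }
        }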


  • Nested functions not allowed in drawRect problem

    - by Martin
    I have a custom view onto which I draw some graphics from the drawRect function, which works fine. However, I'd like to draw based on the contents of an array I pass to the view just before I call setNeedsDisplay. In the drawRect function I try to access the array, but then I get a nested functions error which I do not understand. Here's my code:

        //
        //  MyView.h
        //  window
        //
        #import <UIKit/UIKit.h>

        @interface MyView : UIView {
            NSArray *nary;
        }
        @property (nonatomic, copy) NSArray *nary;
        @end

        //
        //  MyView.m
        //  window
        //
        #import "MyView.h"

        @implementation MyView
        @synthesize nary;

        CGContextRef c;
        CGFloat black[4] = {0.0f, 0.0f, 0.0f, 1.0f};

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
            }
            return self;
        }

        - (void)drawRect:(CGRect)rect {
            c = UIGraphicsGetCurrentContext();
            CGContextSetStrokeColor(c, black);
            NSLog(@"mview");
            NSArray *ns = [nary objectAtIndex:0];
        }

        - (void)dealloc {
            [super dealloc];
        }
        @end
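
    For what it's worth, the "nested functions" error from GCC usually points at an unbalanced brace or stray semicolon earlier in the file rather than at the array access itself -- the code as posted looks syntactically balanced. A sketch of the same drawRect: with the globals pulled local, which is worth doing regardless:

        - (void)drawRect:(CGRect)rect {
            CGContextRef c = UIGraphicsGetCurrentContext();  // only valid inside drawRect:
            const CGFloat black[4] = {0.0f, 0.0f, 0.0f, 1.0f};
            CGContextSetStrokeColor(c, black);
            if ([nary count] > 0) {
                NSLog(@"first element: %@", [nary objectAtIndex:0]);  // guard the empty case
            }
        }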


  • Request for advice about class design, inheritance/aggregation

    - by Lasse V. Karlsen
    I have started writing my own WebDAV server class in .NET, and the first class I'm starting with is a WebDAVListener class, modelled after how the HttpListener class works. Since I don't want to reimplement the core HTTP protocol handling, I will use HttpListener for all it's worth, and thus I have a question. What would the suggested way be to handle this:

    1. Implement all the methods and properties found in HttpListener, just changing the class types where it matters (i.e. the GetContext + EndGetContext methods would return a different class for WebDAV contexts), storing and using a HttpListener object internally.
    2. Construct WebDAVListener by passing it a HttpListener class to use?
    3. Create a wrapper for HttpListener with an interface, and construct WebDAVListener by passing it an object implementing this interface?

    If going the route of passing a HttpListener (disguised or otherwise) to the WebDAVListener, would you expose the underlying listener object through a property, or would you expect the program that uses the class to keep a reference to the underlying HttpListener? Also, in this case, would you expose some of the methods of HttpListener through the WebDAVListener, like Start and Stop, or would you again expect the program that uses it to keep the HttpListener reference around for all those things?

    My initial reaction tells me that I want a combination. For one thing, I would like my WebDAVListener class to look like a complete implementation, hiding the fact that there is a HttpListener object beneath it. On the other hand, I would like to build unit tests without actually spinning up a networked server, so some kind of mocking ability would be nice to have as well, which suggests the interface-wrapper way. One way I could solve this would be:

        public WebDAVListener()
            : this(new HttpListenerWrapper())
        {
        }

        public WebDAVListener(IHttpListenerWrapper listener)
        {
        }

    And then I would implement all the methods of HttpListener (at least all those that make sense) in my own class, mostly by just chaining the call to the underlying HttpListener object.

    What do you think? Final question: if I go the way of the interface, assuming the interface maps 1-to-1 onto the HttpListener class and is written just to add support for mocking, is such an interface called a wrapper or an adapter?
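
    A sketch of the interface-wrapper route (option 3 above), trimmed to the members WebDAVListener actually needs; all names are illustrative:

        using System.Net;

        public interface IHttpListenerWrapper
        {
            void Start();
            void Stop();
            HttpListenerContext GetContext();
        }

        public sealed class HttpListenerWrapper : IHttpListenerWrapper
        {
            private readonly HttpListener _inner = new HttpListener();

            public void Start() { _inner.Start(); }
            public void Stop() { _inner.Stop(); }
            public HttpListenerContext GetContext() { return _inner.GetContext(); }
        }

    On the naming question: a 1-to-1 pass-through whose only purpose is to make a non-mockable class mockable is commonly called an adapter; "wrapper" tends to be the informal umbrella term.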


  • Streaming audio not working in Android

    - by user320293
    Hi, I'm sure that this question has been asked before, but I've been unable to find a solid answer. I'm trying to play streaming audio from a server. It's an audio/aac stream: http://3363.live.streamtheworld.com:80/CHUMFMAACCMP3. The code that I'm using is:

        private void playAudio(String str) {
            try {
                final String path = str;
                if (path == null || path.length() == 0) {
                    Toast.makeText(RadioPlayer.this, "File URL/path is empty",
                            Toast.LENGTH_LONG).show();
                } else {
                    // If the path has not changed, just start the media player
                    MediaPlayer mp = new MediaPlayer();
                    mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
                    try {
                        mp.setDataSource(getDataSource(path));
                        mp.prepareAsync();
                        mp.start();
                    } catch (IOException e) {
                        Log.i("ONCREATE IOEXCEPTION", e.getMessage());
                    } catch (Exception e) {
                        Log.i("ONCREATE EXCEPTION", e.getMessage());
                    }
                }
            } catch (Exception e) {
                Log.e("RPLAYER EXCEPTION", "error: " + e.getMessage(), e);
            }
        }

        private String getDataSource(String path) throws IOException {
            if (!URLUtil.isNetworkUrl(path)) {
                return path;
            } else {
                URL url = new URL(path);
                URLConnection cn = url.openConnection();
                cn.connect();
                InputStream stream = cn.getInputStream();
                if (stream == null)
                    throw new RuntimeException("stream is null");
                File temp = File.createTempFile("mediaplayertmp", ".dat");
                temp.deleteOnExit();
                String tempPath = temp.getAbsolutePath();
                FileOutputStream out = new FileOutputStream(temp);
                byte buf[] = new byte[128];
                do {
                    int numread = stream.read(buf);
                    if (numread <= 0)
                        break;
                    out.write(buf, 0, numread);
                } while (true);
                try {
                    stream.close();
                } catch (IOException ex) {
                    Log.e("RPLAYER IOEXCEPTION", "error: " + ex.getMessage(), ex);
                }
                return tempPath;
            }
        }

    Is this the correct implementation? I'm not sure where I'm going wrong. Can someone please help me with this?
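
    One thing that stands out, offered as a sketch rather than a verified fix: prepareAsync() returns immediately, so the start() on the next line runs before the stream is prepared. Deferring start() to the prepared callback, and letting MediaPlayer fetch the URL itself (which often works for network streams, though some formats on older Android versions do need a download workaround), would look roughly like:

        import android.media.AudioManager;
        import android.media.MediaPlayer;
        import java.io.IOException;

        public class StreamPlayerHelper {
            private MediaPlayer mp;

            // Defer start() until preparation finishes; prepareAsync() returns
            // before the stream is ready to play.
            public void playStream(String url) throws IOException {
                mp = new MediaPlayer();
                mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
                mp.setDataSource(url);
                mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                    public void onPrepared(MediaPlayer player) {
                        player.start(); // safe: buffering/preparation has finished
                    }
                });
                mp.prepareAsync();
            }
        }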


  • How can I check if a TTTabItem gets selected?

    - by schoash
    I have several TTTabGrids in my view and now I am stuck with the problem that I can't figure out how to detect when a TTTabItem gets touched. If someone selects an item in one of the TTTabGrids, I should update some labels. Can someone tell me how to detect when the selection in a TTTabGrid changes? My code looks like this:

        @interface MyTTTabGrid : TTTabGrid
        @end

        @implementation MyTTTabGrid
        - (id)initWithFrame:(CGRect)frame columns:(NSInteger)columns {
            if (self = [super initWithFrame:frame]) {
                self.style = TTSTYLE(tabGrid);
                _columnCount = columns;
            }
            return self;
        }
        @end

        - (void)viewDidLoad {
            [super viewDidLoad];
            _tabBarHours = [[MyTTTabGrid alloc] initWithFrame:CGRectMake(10, _tabBarZone.bottom + 10, 300, 0)
                                                      columns:7];
            _tabBarHours.backgroundColor = [UIColor clearColor];
            NSMutableArray *tmpHours = [[NSMutableArray alloc] init];
            int i = 6;
            while (i < 20) {
                [tmpHours addObject:[[[TTTabItem alloc] initWithTitle:[NSString stringWithFormat:@"%d", i]] autorelease]];
                i++;
            }
            _tabBarHours.tabItems = tmpHours;
            [_tabBarHours sizeToFit];
            [self.view addSubview:_tabBarHours];
            [tmpHours release];
        }


  • Behavior difference between UIView.subviews and [NSView subviews]

    - by zpasternack
    I have a piece of code in an iPhone app which removes all subviews from a UIView subclass. It looks like this:

        NSArray *subViews = self.subviews;
        for (UIView *aView in subViews) {
            [aView removeFromSuperview];
        }

    This works fine. In fact, I never really gave it much thought until I tried nearly the same thing in a Mac OS X app (from an NSView subclass):

        NSArray *subViews = [self subviews];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }

    That totally doesn't work. Specifically, at runtime, I get this:

        *** Collection <NSCFArray: 0x1005208a0> was mutated while being enumerated.

    I ended up doing it like so:

        NSArray *subViews = [[self subviews] copy];
        for (NSView *aView in subViews) {
            [aView removeFromSuperview];
        }
        [subViews release];

    That's fine. What's bugging me, though, is: why does it work on the iPhone? subviews is a copy property:

        @property(nonatomic, readonly, copy) NSArray *subviews;

    My first thought was, maybe @synthesize'd getters return a copy when the copy attribute is specified. The doc is clear on the semantics of copy for setters, but doesn't appear to say either way for getters (or at least, it's not apparent to me). And actually, doing a few tests of my own, this clearly does not seem to be the case. Which is good; I think returning a copy would be problematic, for a few reasons. So the question is: how does the above code work on the iPhone? NSView is clearly returning a pointer to the actual array of subviews, and perhaps UIView isn't. Perhaps it's simply an implementation detail of UIView, and I shouldn't get worked up about it. Can anyone offer any insight?


  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say:

        Although NSDraggingDestination is declared as an informal protocol, the NSWindow and NSView subclasses you create to adopt the protocol need only implement those methods that are pertinent. (The NSWindow and NSView classes provide private implementations for all of the methods.) Either a window object or its delegate may implement these methods; however, the delegate's implementation takes precedence if there are implementations in both places.

    Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, and I did not want to allow drops anywhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or make some giant spaghetti-conditional drop handler on the window's delegate that hit-checks the draggingLocation against each view in order to select the different drop-area behaviors for each field.

    The centralized NSWindow-delegate-based drop handler approach seems like it would be onerous in any case where you had more than a small handful of drop destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off of trying to drive bindings by setting the UI value programmatically. So now you're stuck working your way around that too.

    So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.
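
    A sketch of the subclassing route that keeps the drop handling out of the view and away from the bindings machinery: the field accepts the drop and hands the pasteboard string to whoever is listening. The notification name here is made up for illustration; a delegate outlet would work just as well.

        @interface DroppableTextField : NSTextField
        @end

        @implementation DroppableTextField
        - (void)awakeFromNib {
            [self registerForDraggedTypes:[NSArray arrayWithObject:NSStringPboardType]];
        }

        - (NSDragOperation)draggingEntered:(id <NSDraggingInfo>)sender {
            return NSDragOperationCopy;
        }

        - (BOOL)performDragOperation:(id <NSDraggingInfo>)sender {
            NSString *dropped = [[sender draggingPasteboard] stringForType:NSStringPboardType];
            if (dropped == nil) return NO;
            // Route the value to the controller/model layer rather than
            // poking the bound UI value directly.
            [[NSNotificationCenter defaultCenter]
                postNotificationName:@"DroppableTextFieldDidReceiveDrop"
                              object:self
                            userInfo:[NSDictionary dictionaryWithObject:dropped forKey:@"value"]];
            return YES;
        }
        @end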


  • Dictionaries with more than one key per value in Python

    - by nickname
    I am attempting to create a nice interface to access a data set where each value has several possible keys. For example, suppose that I have both a number and a name for each value in the data set. I want to be able to access each value using either the number OR the name. I have considered several possible implementations:

    1. Using two separate dictionaries, one for the data values organized by number, and one for the data values organized by name.
    2. Simply assigning two keys to the same value in a dictionary.
    3. Creating dictionaries mapping each name to the corresponding number, and vice versa.
    4. Attempting to create a hash function that maps each name to a number, etc. (related to the above).
    5. Creating an object to encapsulate all three pieces of data, then using one key to map dictionary keys to the objects and simply searching the dictionary to map the other key to the object.

    None of these seem ideal. The first seems ugly and unmaintainable. The second also seems fragile. The third/fourth seem plausible, but seem to require either much manual specification or an overly complex implementation. Finally, the fifth loses constant-time performance for one of the lookups. In C/C++, I believe that I would use pointers to reference the same piece of data from different keys. I know that the problem is rather similar to a database lookup problem by a non-key column; however, I would like (if possible) to maintain the approximate O(1) performance of Python dictionaries. What is the most Pythonic way to achieve this data structure?
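
    A sketch of option 3 with the bookkeeping hidden behind one class, so both lookups stay O(1) on average and the alias table is maintained in a single place:

        # Values live in one dict keyed by number; names are an alias table
        # mapping name -> number. Either kind of key costs a constant number
        # of dict probes.
        class MultiKeyStore(object):
            def __init__(self):
                self._data = {}      # number -> value
                self._by_name = {}   # name -> number

            def add(self, number, name, value):
                self._data[number] = value
                self._by_name[name] = number

            def __getitem__(self, key):
                if key in self._data:                   # key given as number
                    return self._data[key]
                return self._data[self._by_name[key]]   # key given as name

    One subtlety of this sketch: it assumes a name can never collide with a number in the primary dict; if both key spaces were strings, they would need disambiguating.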


  • std::vector elements initializing

    - by Chameleon
        std::vector<int> v1(1000);
        std::vector<std::vector<int>> v2(1000);
        std::vector<std::vector<int>::const_iterator> v3(1000);

    How are the elements of these 3 vectors initialized?

    1. About int: I tested it and I saw that all elements become 0. Is this standard? I believed that primitives remain undefined. I created a vector with 300000000 elements, gave them non-zero values, deleted it and recreated it, to avoid the OS clearing the memory for data safety. The elements of the recreated vector were 0 too.
    2. What about the iterators? Is there an initial value (0) from a default constructor, or does the initial value remain undefined? When I check this, the iterators point to 0, but this could be the OS.
    3. When I created a special object to track constructors, I saw that for the first object the vector runs the default constructor, and for all the others it runs the copy constructor. Is this standard?
    4. Is there a way to completely avoid initialization of the elements? Or must I create my own vector? (Oh my God, I always say NOT ANOTHER VECTOR IMPLEMENTATION.) I ask because I use ultra-huge sparse matrices with parallel processing, so I cannot use push_back() and of course I don't want useless initialization, when later I will change the values.
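
    For reference, a small sketch of what the standard behavior looks like in practice: the fill constructor value-initializes its elements (that is what zeroes the ints), and C++03 fills by copying one value-initialized temporary, which matches the "one default constructor, then copies" observation. The usual way to dodge per-element construction is to allocate capacity without elements:

        #include <cassert>
        #include <vector>

        int main() {
            std::vector<int> v1(1000);
            assert(v1[999] == 0);   // guaranteed by the standard, not by the OS

            // Raw capacity with no constructed elements; build them only
            // when they are actually needed.
            std::vector<int> sparse;
            sparse.reserve(1000);
            return 0;
        }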


  • C# Interface Method calls from a controller

    - by ArjaaAine
    I was just working on some application architecture and this may sound like a stupid question, but please explain to me how the following works.

    Interface:

        public interface IMatterDAL
        {
            IEnumerable<Matter> GetMattersByCode(string input);
            IEnumerable<Matter> GetMattersBySearch(string input);
        }

    Class:

        public class MatterDAL : IMatterDAL
        {
            private readonly Database _db;

            public MatterDAL(Database db)
            {
                _db = db;
                LoadAll(); // private method
            }

            public virtual IEnumerable<Matter> GetMattersBySearch(string input)
            {
                //CODE
                return result;
            }

            public virtual IEnumerable<Matter> GetMattersByCode(string input)
            {
                //CODE
                return results;
            }
        }

    Controller:

        public class MatterController : ApiController
        {
            private readonly IMatterDAL _publishedData;

            public MatterController(IMatterDAL publishedData)
            {
                _publishedData = publishedData;
            }

            [ValidateInput(false)]
            public JsonResult SearchByCode(string id)
            {
                var searchText = id; // better name for this
                var results = _publishedData.GetMattersBySearch(searchText).Select(
                    matter => new
                    {
                        MatterCode = matter.Code,
                        MatterName = matter.Name,
                        matter.ClientCode,
                        matter.ClientName
                    });
                return Json(results);
            }
        }

    This works: when I call my controller method from jQuery and step into it, the call to the _publishedData method goes into the class MatterDAL. I want to know how my controller knows to go to the MatterDAL implementation of the interface IMatterDAL. What if I have another class called MatterDAL2 which is based on the interface? How will my controller know then to call the right method? I am sorry if this is a stupid question; it is baffling me.
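
    The short version is that the controller never decides: it calls whatever IMatterDAL instance was handed to its constructor, and that choice is made wherever the controller gets constructed -- by hand at the composition root, or by a DI container configured there. A hand-rolled sketch of the wiring (names illustrative):

        // At application startup, far away from MatterController:
        IMatterDAL dal = new MatterDAL(database);    // or: new MatterDAL2(database)
        var controller = new MatterController(dal);  // the binding happens here, once

    Swapping in MatterDAL2 then means changing that single line; MatterController itself never needs to know a second implementation exists.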


  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Lately I wrote a simple generator of permutations in Python (an implementation of the "plain changes" algorithm described by Knuth in "The Art of Computer Programming", Volume 4). I was curious about the differences in execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args    time[ms]
        1       0.003811
        2       0.008268
        3       0.015907
        4       0.042646
        5       0.166755
        6       0.908796
        7       6.117996
        8       48.346996
        9       433.928967
        10      4379.904032
        python3:
        args    time[ms]
        1       0.00246778964996
        2       0.00656183719635
        3       0.01419159912
        4       0.0406293644678
        5       0.165960511097
        6       0.923101452814
        7       6.24257639835
        8       53.0099868774
        9       454.540967941
        10      4585.83498001

    As you can see, for numbers of arguments less than 6, python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in Python programming, I wonder why that is so. Or maybe my script is more optimized for python2? Thank you in advance for a kind answer :)
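
    The testing scripts aren't shown, so as an assumption, a harness along these lines (written to run unchanged on both interpreters, with perms defined in the same file) would at least rule out measurement skew before digging into interpreter differences:

        # Assumed harness, not the asker's actual testing.py.
        import timeit

        print("args    time[ms]")
        for n in range(1, 11):
            t = timeit.Timer("list(perms(range(%d)))" % n,
                             setup="from __main__ import perms")
            # best of 3 single runs, reported in milliseconds
            print("%d\t%f" % (n, min(t.repeat(repeat=3, number=1)) * 1000.0))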


  • How to organize RMI Client-Server eBanking architecture

    - by xenom
    I am developing a secured eBanking service in RMI with a GUI for both the Server and the Client. The Server must be able to log every operation (new User, deleted User, Withdrawal, Lodgement...); the Client will perform these operations.

    As everything is secured, the Client must first create an account with a name and a password in the GUI. After that, the GUI adds the User to the Bank's UserList (an ArrayList) as a new Customer, and the User can perform several operations.

    It seems straightforward at first, but I think my design is not correct. Is it correct to send the whole Bank by RMI? Because at first I thought the Bank would be the server, but I cannot find another way to do that. Currently, the Client GUI asks for a login and a password, and receives the Bank by RMI. A User is characterized by a name and a hash of the password:

        private String name;
        private byte[] passwordDigest;

    In fact the GUI is doing all the security checking and I don't know if that's sensible. When you type login//password, it will search for the login in the Bank and compare the hash of the password. I have the impression that the Client knows too much information, because when you have the Bank you have everything... Does this seem correct or do I need to change my implementation?
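
    One common restructuring, sketched here with made-up names: keep the Bank object on the server and export only a narrow remote interface, so the password check and the user list never travel over RMI; the client holds nothing but this stub.

        import java.rmi.Remote;
        import java.rmi.RemoteException;

        public interface BankService extends Remote {
            boolean login(String name, byte[] passwordDigest) throws RemoteException;
            double balance(String name, byte[] passwordDigest) throws RemoteException;
            void withdraw(String name, byte[] passwordDigest, double amount) throws RemoteException;
            void lodge(String name, byte[] passwordDigest, double amount) throws RemoteException;
        }

    The server-side implementation verifies the digest on every call and writes its log there too, which also solves the "Server must log every operation" requirement in one place.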


  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me in designing code, especially libraries. There seems to be some interest in it, so I thought it might make a good community wiki.

    The fail method in Monad is considered by some to be a wart; a somewhat arbitrary addition to the class that does not come from the original category theory. But of course in the current state of things, many Monad types have logical and useful fail instances. The MonadPlus class is a subclass of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad, or restrict his code to the MonadPlus class just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in this wiki page about proposals to reform the MonadPlus class.

    So I guess I have one specific question: what monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks!

    EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern match failures, as in

        (x:xs) <- return []

    call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax in their inclusion of fail in Monad.
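
    To make the desugaring concrete, a minimal example (nothing assumed beyond the standard Maybe instance, whose fail is Nothing):

        -- A failed pattern match in do-notation calls fail, which for Maybe
        -- yields Nothing instead of a runtime pattern-match crash.
        example :: Maybe Int
        example = do
            (x:_) <- return []   -- no match, so this becomes a call to fail
            return x

        main :: IO ()
        main = print example     -- prints Nothing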


  • How come some C++ functions with unspecified linkage build with C linkage?

    - by christoffer
    This is something that leaves me fairly perplexed. I have a C++ file that implements a set of functions, and a header file that defines prototypes for them. When building with Visual Studio or MinGW gcc, I get linking errors on two of the functions, and adding an 'extern "C"' qualifier resolves the error. How is this possible?

    Header file, "some_header.h":

        // Definition of struct DEMO_GLOBAL_DATA omitted
        DWORD WINAPI ThreadFunction(LPVOID lpData);
        void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen);
        void CheckValid(DEMO_GLOBAL_DATA *pData);
        int HandleStart(DEMO_GLOBAL_DATA *pData, TCHAR *pLogFileName);
        void HandleEnd(DEMO_GLOBAL_DATA *pData);

    C++ file, "some_implementation.cpp":

        #include "some_header.h"

        DWORD WINAPI ThreadFunction(LPVOID lpData) { /* omitted */ }
        void WriteLogString(void *pUserData, const char *pString, unsigned long nStringLen) { /* omitted */ }
        void CheckValid(DEMO_GLOBAL_DATA *pData) { /* omitted */ }
        int HandleStart(DEMO_GLOBAL_DATA *pData, TCHAR *pLogFileName) { /* omitted */ }
        void HandleEnd(DEMO_GLOBAL_DATA *pData) { /* omitted */ }

    The implementations compile without warnings, but when linking with the UI code that calls these, I get:

        error LNK2001: unresolved external symbol "int __cdecl HandleStart(struct _DEMO_GLOBAL_DATA *, wchar_t *)
        error LNK2001: unresolved external symbol "void __cdecl CheckValid(struct _DEMO_MAIN_GLOBAL_DATA *

    What really confuses me now is that only these two functions (HandleStart and CheckValid) seem to be built with C linkage. Explicitly adding "extern 'C'" declarations for only these two resolves the linking error, and the application builds and runs. Adding "extern 'C'" to some other function, such as HandleEnd, introduces a new linking error, so that one is obviously compiled correctly. The implementation file is never modified in any of this, only the prototypes.
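
    A detail worth noticing in those errors, offered as a sketch of a likely cause rather than a diagnosis: one names struct _DEMO_GLOBAL_DATA and the other _DEMO_MAIN_GLOBAL_DATA. C++ mangles parameter types into the symbol name, so if the UI translation unit and the implementation see different types for "the same" prototype, they produce different symbols and the link fails; extern "C" strips the types from the name, which hides the mismatch rather than fixing it.

        // What the implementation file compiles:
        struct DEMO_GLOBAL_DATA;
        int HandleStart(DEMO_GLOBAL_DATA *pData, wchar_t *pLogFileName);

        // What the UI code apparently declares -- a different type, hence a
        // different mangled symbol, hence LNK2001:
        struct DEMO_MAIN_GLOBAL_DATA;
        int HandleStart(DEMO_MAIN_GLOBAL_DATA *pData, wchar_t *pLogFileName);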


  • How to implement a single instance app manager in java (CVM PhoneME)?

    - by Marcus
    Hi, I'm working on an application manager for an embedded platform based on the CVM PhoneME VM. The VM is started by a C++ app which configures the CVM and then starts the VM itself. This C++ app is called from the command line, passing the main class name and the classpath of a Java application.

    There is a main Java app (let's call it Manager) which loads the apps using classloaders. I want this Manager to be a single-instance application so it can track all running apps. In other words: the first time I start an app (app1, for instance), the VM will launch and the Manager will load app1. On further calls to load other apps (app2, app3 and so on), the same instance of the Manager would load those apps. The Manager is working fine, except that it is not a single instance. Is it possible to do what I want?

    I found this: http://www.knowledgesutra.com/forums/topic/59760-how-to-implement-single-instance-application-on-java/ -- it is almost what I want, except for the app-loading part. However, the necessary packages are not available in the CVM implementation. Thanks very much.
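
    A sketch of the usual fallback when desktop-only APIs aren't available: CDC/CVM does ship java.net, so a loopback ServerSocket can serve as both the single-instance lock and the channel for handing new launch requests to the running Manager. The port number and one-line protocol here are illustrative, and no generics are used so it stays CDC-friendly.

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.net.InetAddress;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class SingleInstance {
            private static final int PORT = 49152; // arbitrary fixed choice

            // First instance binds the port and keeps the socket as its lock;
            // later instances get null back and should forward their request.
            public static ServerSocket tryBecomePrimary() {
                try {
                    return new ServerSocket(PORT, 0, InetAddress.getByName("127.0.0.1"));
                } catch (IOException alreadyRunning) {
                    return null;
                }
            }

            // Secondary instance: hand the app's main class name to the
            // Manager that is already running, then exit.
            public static void forwardLaunch(String mainClassName) throws IOException {
                Socket s = new Socket("127.0.0.1", PORT);
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                out.println(mainClassName);
                s.close();
            }
        }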


  • Model class for NSDictionary information with Lazy Loading

    - by samfu_1
    My application utilizes approx. 50+ .plists that are used as NSDictionaries. Several of my view controllers need access to the properties of the dictionaries, so instead of writing duplicate code to retrieve the .plist, convert the values to a dictionary, etc., each time I need the info, I thought a model class to hold the data and supply information would be appropriate. The application isn't very large, but it does handle a good deal of data.

    I'm not as skilled in writing model classes that conform to the MVC paradigm, and I'm looking for strategies for this implementation that also support lazy loading. This model class should serve data to any view controller that needs it and perform operations on the data (such as adding entries to dictionaries) when requested by the controller. Functions currently planned:

    • returning the count of any dictionary
    • adding one or more dictionaries together

    Currently, I have this method supporting the count lookup for any dictionary. Would this be an example of lazy loading?

        - (NSInteger)countForDictionary:(NSString *)nameOfDictionary {
            NSBundle *bundle = [NSBundle mainBundle];
            NSString *plistPath = [bundle pathForResource:nameOfDictionary ofType:@"plist"];
            // load plist into dictionary
            NSMutableDictionary *dictionary = [[NSMutableDictionary alloc] initWithContentsOfFile:plistPath];
            NSInteger count = [dictionary count];
            [dictionary release];
            return count;
        }
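
    As written, the method re-reads the plist from disk on every call, which is on-demand loading rather than lazy loading with a cache. A sketch of the cached version (assumes an NSMutableDictionary ivar named cache on the model class):

        - (NSDictionary *)dictionaryNamed:(NSString *)name {
            if (cache == nil) cache = [[NSMutableDictionary alloc] init];
            NSDictionary *dictionary = [cache objectForKey:name];
            if (dictionary == nil) {
                NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"plist"];
                dictionary = [NSDictionary dictionaryWithContentsOfFile:path];
                if (dictionary != nil) [cache setObject:dictionary forKey:name]; // load once, keep
            }
            return dictionary;
        }

        - (NSInteger)countForDictionary:(NSString *)name {
            return [[self dictionaryNamed:name] count]; // first call loads, later calls hit the cache
        }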


  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi All,

    Let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to implement my application from the ground up with this in mind.

    I have decided to tackle this problem by using the Microsoft Message Queue and performing inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts that I perform may need to be recalled (i.e. retrieved) immediately. (Imagine this is for a POS system, and consider what happens if you need to recall the last transaction -- one that still hasn't been inserted.)

    The way I decided to tackle this problem is by abstracting the MessageQueue and combining it in my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario, i.e. essentially dirty reads and such, and have concluded that for my purposes I can control these issues.)

    However, this is where things get a little nasty... I've worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue? One where I can minimize the duplication between the SQL queries and the MessageQueue queries. I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.

    Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and ALL input would be highly appreciated and seriously considered...

    Thanks again.
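
    One shape this abstraction often takes, sketched with illustrative types (this is a design sketch, not an existing library): give the SQL side and the pending-queue side the same predicate-based interface, and compose them so queued records shadow older rows.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Transaction
        {
            public Guid Id { get; set; }
            // ... payload fields ...
        }

        public interface ITransactionStore
        {
            IEnumerable<Transaction> Find(Func<Transaction, bool> predicate);
        }

        // One implementation reads SQL, another reads the not-yet-flushed
        // MSMQ items; the composite hides which side a record came from.
        public sealed class CompositeStore : ITransactionStore
        {
            private readonly ITransactionStore _sql;
            private readonly ITransactionStore _pending;

            public CompositeStore(ITransactionStore sql, ITransactionStore pending)
            {
                _sql = sql;
                _pending = pending;
            }

            public IEnumerable<Transaction> Find(Func<Transaction, bool> predicate)
            {
                // Pending (queued) records win over SQL rows with the same id.
                return _pending.Find(predicate)
                               .Concat(_sql.Find(predicate))
                               .GroupBy(t => t.Id)
                               .Select(g => g.First());
            }
        }

    The trade-off is that a plain Func<Transaction, bool> cannot be translated into SQL, so the SQL-backed store would either take an Expression<Func<...>> (the LINQ route) or accept that predicates run client-side over a pre-filtered set.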


  • How to pass data to a C++0x lambda function that will run in a different thread?

    - by Dimitri C.
    In our company we've written a library function to call a function asynchronously in a separate thread. It works using a combination of inheritance and template magic. The client code looks as follows:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, &DemoThread::SomeFunction, stringToPassByValue);

    Since the introduction of lambda functions I'd like to use them in combination with this. I'd like to write the following client code:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, [=]() {
            const std::string someCopy = stringToPassByValue;
        });

    Now, with Visual C++ 2010 this code doesn't work. What happens is that stringToPassByValue is not copied. Instead the "capture by value" feature passes the data by reference. The result is that if the function is executed after stringToPassByValue has gone out of scope, the application crashes, as its destructor has already been called. So I wonder: is it possible to pass data to a lambda function as a copy?

    Note: one possible solution would be to modify our framework to pass the data in the lambda parameter declaration list, as follows:

        DemoThread thread;
        std::string stringToPassByValue = "The string to pass by value";
        AsyncCall(thread, [=](const std::string stringPassedByValue) {
            const std::string someCopy = stringPassedByValue;
        }, stringToPassByValue);

    However, this solution is so verbose that our original function pointer solution is both shorter and easier to read.

    Update: the full implementation of AsyncCall is too big to post here. In short, what happens is that the AsyncCall template function instantiates a template class holding the lambda function. This class is derived from a base class that contains a virtual Execute() function, and upon an AsyncCall() call, the function call class is put on a call queue. A different thread then executes the queued calls by calling the virtual Execute() function, which is polymorphically dispatched to the template class, which then executes the lambda function.
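
    For what it's worth, [=] is specified to copy captured variables into the closure object at the point the lambda is created, as this standalone sketch demonstrates; so if the copy seems to vanish, the usual suspect is the AsyncCall machinery storing a reference (or pointer) to a closure object that lives on the caller's stack, rather than storing the closure by value:

        #include <functional>
        #include <iostream>
        #include <string>

        int main() {
            std::function<void()> deferred;
            {
                std::string s = "pass me by value";
                deferred = [=]() { std::cout << s << std::endl; }; // s is copied into the closure here
            }   // the original s is destroyed
            deferred(); // still fine: std::function stored the closure (and its copy) by value
            return 0;
        }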


  • iPhone app. Creating a custom UIView that contains UITextField and UIButton.

    - by Dmitry Burchik
    Hi all. I am new to iPhone programming, and I have an issue. I need to create a custom user control that I will add to my UIScrollView dynamically. The control has a UITextField and a UIButton. See the code below:

        #import <UIKit/UIKit.h>

        @interface FieldWithValueControl : UIView {
            UITextField *txtTagName;
            UIButton *addButton;
        }
        @property (nonatomic, readonly) UITextField *txtTagName;
        @property (nonatomic, readonly) UIButton *addButton;
        @end

        #import "FieldWithValueControl.h"

        #define ITEM_SPACING 10
        #define ITEM_HEIGHT 20
        #define SWITCHBOX_WIDTH 100
        #define SCREEN_WIDTH 320
        #define ITEM_FONT_SIZE 14
        #define TEXTBOX_WIDTH 150

        @implementation FieldWithValueControl
        @synthesize txtTagName;
        @synthesize addButton;

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                // Initialization code
                txtTagName = [[UITextField alloc] initWithFrame:CGRectMake(0, 0, TEXTBOX_WIDTH, ITEM_HEIGHT)];
                txtTagName.borderStyle = UITextBorderStyleRoundedRect;
                addButton = [UIButton buttonWithType:UIButtonTypeContactAdd];
                [addButton setFrame:CGRectMake(ITEM_SPACING + TEXTBOX_WIDTH, 0, ITEM_HEIGHT, ITEM_HEIGHT)];
                [addButton addTarget:self action:@selector(addButtonTouched:) forControlEvents:UIControlEventTouchUpInside];
                [self addSubview:txtTagName];
                [self addSubview:addButton];
            }
            return self;
        }

        - (void)addButtonTouched:sender {
            UIButton *button = (UIButton *)sender;
            NSString *title = [button titleLabel].text;
        }

        - (void)drawRect:(CGRect)rect {
            // Drawing code
        }

        - (void)dealloc {
            [txtTagName release];
            [addButton release];
            [super dealloc];
        }
        @end

    In my code I create an object of that class and add it to the scrollView on the form:

        FieldWithValueControl *newTagControl = [[FieldWithValueControl alloc]
            initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING, 0, 0)];
        [scrollView addSubview:newTagControl];

    The control looks fine, but if I tap the text field or the button, nothing happens. The keyboard doesn't appear, the button is not clickable, etc.
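
    A likely culprit, judging only from the code shown: the control is created with a zero-size frame. Subviews outside their parent's bounds still draw (clipsToBounds defaults to NO) but never receive touches, because hit-testing is limited to the parent's bounds. Giving the container a real size should restore the touches:

        FieldWithValueControl *newTagControl = [[FieldWithValueControl alloc]
            initWithFrame:CGRectMake(ITEM_SPACING, currentOffset + ITEM_SPACING,
                                     TEXTBOX_WIDTH + ITEM_SPACING + ITEM_HEIGHT, // wide enough for field + button
                                     ITEM_HEIGHT)];
        [scrollView addSubview:newTagControl];
        [newTagControl release]; // the scroll view retains it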


  • Optimizing this "Boundarize" method for Numerics in Ruby

    - by mstksg
    I'm extending Numerics with a method I call "boundarize", for lack of a better name; I'm sure there are actually real names for this. Its basic purpose is to reset a given point to be within a boundary -- that is, "wrapping" a point around the boundary. If the area is between 0 and 100 and the point goes to -1, then -1.boundarize(0,100) == 99 (going one too far to the negative "wraps" the point around to one from the max), and 102.boundarize(0,100) == 2.

    It's a very simple function to implement: when the number is below the minimum, simply add (max-min) until it's in the boundary; when the number is above the maximum, simply subtract (max-min) until it's in the boundary. One thing I also need to account for is that there are cases where I don't want to include the minimum in the range, and cases where I don't want to include the maximum. This is specified as an argument.

    However, I fear that my current implementation is horribly, terribly, grossly inefficient. And because it has to re-run every time something moves on the screen, this is one of the bottlenecks of my application. Anyone have any ideas?

        module Boundarizer
          def boundarize min=0, max=1, allow_min=true, allow_max=false
            raise "Improper boundaries #{min}/#{max}" if min >= max
            new_num = self

            if allow_min
              while new_num < min
                new_num += (max-min)
              end
            else
              while new_num <= min
                new_num += (max-min)
              end
            end

            if allow_max
              while new_num > max
                new_num -= (max-min)
              end
            else
              while new_num >= max
                new_num -= (max-min)
              end
            end

            return new_num
          end
        end

        class Numeric
          include Boundarizer
        end
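
    A sketch of a constant-time version using modulo (Ruby's % takes the sign of the divisor, so negatives wrap correctly, and it works for Floats too). Two caveats versus the original: when both endpoints are allowed, the seam value comes back as min rather than max, and the both-endpoints-excluded case remains degenerate (the original loops forever there):

        module Boundarizer
          def boundarize min = 0, max = 1, allow_min = true, allow_max = false
            raise "Improper boundaries #{min}/#{max}" if min >= max
            span = max - min
            new_num = min + (self - min) % span             # lands in [min, max) in O(1)
            new_num += span if new_num == min && !allow_min # shift the seam to max instead
            new_num
          end
        end

        class Numeric
          include Boundarizer
        end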


  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this:

        // Baseclass.m
        @implementation Baseclass
        ...
        - (void)dealloc {
            [self _removeAllData];
            [aVariableThatBelongsToMe release];
            [anotherVariableThatBelongsToMe release];
            [super dealloc];
        }
        ...
        @end

    This works great. My problem is, when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc], I had zombies running through the code that were activated when I called the [self _removeAllData] method.

        // Subclass.m
        @implementation Subclass
        ...
        - (void)dealloc {
            [super dealloc];
            [someObjectUsedInTheRemoveAllDataMethod release];
        }
        ...
        @end

    This works great, and it didn't require me to refactor any code. My question is this: is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone, if that matters any.
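
    For the record, touching self or its ivars after [super dealloc] is undefined behavior even when it appears to work: NSObject's dealloc frees the instance's memory. A refactoring sketch that keeps the release order safe (the hook method name is made up for illustration):

        // Baseclass.m -- tear down subclass state first, then our own.
        - (void)releaseOwnedObjects { } // subclasses override

        - (void)dealloc {
            [self releaseOwnedObjects];
            [self _removeAllData];
            [aVariableThatBelongsToMe release];
            [anotherVariableThatBelongsToMe release];
            [super dealloc];
        }

        // Subclass.m -- release and nil out, so _removeAllData sees nil
        // (messages to nil are no-ops) instead of a zombie.
        - (void)releaseOwnedObjects {
            [someObjectUsedInTheRemoveAllDataMethod release];
            someObjectUsedInTheRemoveAllDataMethod = nil;
            [super releaseOwnedObjects];
        }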


  • Why does this tooltip appear *below* a translucent form?

    - by Daniel Stutzbach
    I have a form with an Opacity less than 1.0. I have a tooltip associated with a label on the form. When I hover the mouse over the label, the tooltip shows up under the form instead of over it. If I leave the Opacity at its default value of 1.0, the tooltip correctly appears over the form. However, my form is obviously no longer translucent. ;-)

    I'm testing on an XP system with .NET 3.5. If you don't see this problem on your system, let me know what operating system and version of .NET you have. I have tried manually adjusting the position of the ToolTip with SetWindowPos() and creating a ToolTip "by hand" using CreateWindowEx(), but the problem remains. This makes me suspect it's a Win32 API problem, not a problem with the Windows Forms implementation that runs on top of Win32.

    Why does the tooltip appear under the form, and, more importantly, how can I get it to appear over the form where it should? Here is a minimal program to demonstrate the problem:

        using System;
        using System.Windows.Forms;

        public class Form1 : Form
        {
            private ToolTip toolTip1;
            private Label label1;

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form1());
            }

            public Form1()
            {
                toolTip1 = new ToolTip();
                label1 = new Label();
                label1.Location = new System.Drawing.Point(105, 127);
                label1.Text = "Hover over me";
                label1.AutoSize = true;
                toolTip1.SetToolTip(label1, "This is a moderately long string, " +
                    "designed to be very long so that it will also be quite long.");
                ClientSize = new System.Drawing.Size(292, 268);
                Controls.Add(label1);
                Opacity = 0.8;
            }
        }

