Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • bug with varargs and overloading?

    - by pstanton
    There seems to be a bug in the Java varargs implementation. Java can't distinguish the appropriate type when a method is overloaded with different types of vararg parameters. It gives me the error "The method ... is ambiguous for the type ...". Consider the following code: public class Test { public static void main(String[] args) throws Throwable { doit(new int[]{1, 2}); // <- no problem doit(new double[]{1.2, 2.2}); // <- no problem doit(1.2f, 2.2f); // <- no problem doit(1.2d, 2.2d); // <- no problem doit(1, 2); // <- The method doit(double[]) is ambiguous for the type Test } public static void doit(double... ds) { System.out.println("doubles"); } public static void doit(int... is) { System.out.println("ints"); } } The docs say: "Generally speaking, you should not overload a varargs method, or it will be difficult for programmers to figure out which overloading gets called." However, they don't mention this error, and it's not the programmers who are finding it difficult, it's the compiler. Thoughts?
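
    A minimal sketch (not from the post) of one common workaround: the compiler considers fixed-arity methods before any varargs candidates during overload resolution, so adding a plain two-argument overload removes the ambiguity for literal calls. The class below just mirrors the example above.

        public class Test {
            public static void main(String[] args) {
                doit(1, 2);                   // resolves to the fixed-arity overload -> "ints"
                doit(new double[]{1.2, 2.2}); // still reaches the varargs overload -> "doubles"
            }
            // Fixed-arity overload: chosen ahead of any varargs candidate, so the call is no longer ambiguous.
            public static void doit(int a, int b) { doit(new int[]{a, b}); }
            public static void doit(double... ds) { System.out.println("doubles"); }
            public static void doit(int... is) { System.out.println("ints"); }
        }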

    Read the article

  • array retain question

    - by Cosizzle
    Hello, I'm fairly new to Objective-C; most of it is clear, but when it comes to memory management I fall a little short. Currently, when the NSURLConnection method -(void)connectionDidFinishLoading:(NSURLConnection *)connection is called, I enter a method to parse some data, put it into an array, and return that array. However, I'm not sure if this is the best way to do so, since I don't release the array from memory within the custom method (method1, see the attached code). Below is a small script to better show what I'm doing. .h file #import <UIKit/UIKit.h> @interface memoryRetainTestViewController : UIViewController { NSArray *mainArray; } @property (nonatomic, retain) NSArray *mainArray; @end .m file #import "memoryRetainTestViewController.h" @implementation memoryRetainTestViewController @synthesize mainArray; // this would be the parsing method -(NSArray*)method1 { // ???: by not release this, is that bad. Or does it get released with mainArray NSArray *newArray = [[NSArray alloc] init]; newArray = [NSArray arrayWithObjects:@"apple",@"orange", @"grapes", "peach", nil]; return newArray; } // this method is actually // -(void)connectionDidFinishLoading:(NSURLConnection *)connection -(void)method2 { mainArray = [self method1]; } // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { mainArray = nil; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [mainArray release]; [super dealloc]; } @end

    Read the article

  • Struggling with currency in Cocoa.

    - by Meltemi
    I'm trying to do something I'd think would be fairly simple: Let a user input a dollar amount, store that amount in an NSNumber (NSDecimalNumber?), then display that amount formatted as currency again at some later time. My trouble is not so much with the setNumberStyle:NSNumberFormatterCurrencyStyle and displaying floats as currency. The trouble is more with how said numberFormatter works with this UITextField. I can find few examples. This thread from November and this one give me some ideas but leaves me with more questions. I am using the UIKeyboardTypeNumberPad keyboard and understand that I should probably show $0.00 (or whatever local currency format is) in the field upon display then as a user enters numerals to shift the decimal place along: Begin with display $0.00 Tap 2 key: display $0.02 Tap 5 key: display $0.25 Tap 4 key: display $2.54 Tap 3 key: display $25.43 Then [numberFormatter numberFromString:textField.text] should give me a value I can store in my NSNumber variable. Sadly I'm still struggling: Is this really the best/easiest way? If so then maybe someone can help me with the implementation? I feel UITextField may need a delegate responding to every keypress but not sure what, where and how to implement it?! Any sample code? I'd greatly appreciate it! I've searched high and low... Edit1: So I'm looking into NSFormatter's stringForObjectValue: and the closest thing I can find to what benzado recommends: UITextViewTextDidChangeNotification. Having really tough time finding sample code on either of them...so let me know if you know where to look?

    Read the article

  • Cannot find interface declaration for 'UIResponder'

    - by lfe-eo
    Hi all, I am running into this issue when trying to compile code for an iPhone app that I inherited from a previous developer. I've poked around on a couple forums and it seems like the culprit may be a circular #import somewhere. First - Is there any easy way to find if this is the case/find what files the loop is in? Second - Its definitely possible this isn't the problem. This is the full error (truncated file paths so its easier to view here): In file included from [...]/Frameworks/UIKit.framework/Headers/UIView.h:9, from [...]/Frameworks/UIKit.framework/Headers/UIActivityIndicatorView.h:8, from [...]/Frameworks/UIKit.framework/Headers/UIKit.h:11, from /Users/wbs/Documents/EINetIPhone/EINetIPhone_Prefix.pch:13: [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:15: error: expected ')' before 'UIResponder' [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:17: error: expected '{' before '-' token [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:42: warning: '@end' must appear in an @implementation context [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:51: error: expected ':' before ';' token [...]/Frameworks/UIKit.framework/Headers/UIResponder.h:58: error: cannot find interface declaration for 'UIResponder' As you can see, there are other errors alongside this one. These seem to be simple syntax errors, however, they appear in one of Apple's UIKit files (not my own) so I am seriously doubting that Apple's code is truly producing these errors. I'm stumped as to how to solve this issue. If anyone has any ideas of things I could try or ways/places I could get more info on the problem, I'd really appreciate it. I'm very new to Obj-C and iPhone coding.

    Read the article

  • WCF and Unity - Dependency Injection

    - by Michael
    I'm trying to hook up WCF with dependency injection. All the examples that I have found are based on the assumption that you either use a .svc (ServiceHostFactory) service or use app.config to configure the container. Other examples also assume that the container is passed around to the classes. I would like a solution where the container is not passed around (not tightly coupled to Unity), where I don't use a config file to configure the container, and where I use self-hosted services. The problem is - as I see it - that the ServiceHost takes the type of the service implementation as a parameter, so what difference does it make to use the InstanceProvider? The solution I have come up with at the moment is to register the ServiceHost (or a specialization) and register a Type with a name ( e.g. container.RegisterInstance<Type>("ServiceName", typeof(Service);). And then container.RegisterType<UnityServiceHost>(new InjectionConstructor(new ResolvedParameter<Type>("ServiceName"))); to register the ServiceHost. Any better solutions out there? Am I perhaps way off in my assumptions? Best regards, Michael

    Read the article

  • iPhone: CATiledLayer/UIScrollView won't scroll after zooming and only zooms to anchor point

    - by Brodie4598
    Here is the problem... I am using CA Tiled Layer to display a large jpg. The view loads okay, and when I go to scroll around, it works fine. However, as soon as I zoom in or out once, it scrolls to the top left (to the anchor point) and will not scroll at all. The zooming works fine, but I just cannot scroll. Here is my code: #import <QuartzCore/QuartzCore.h> #import "PracticeViewController.h" @implementation practiceViewController //@synthesize image; - (void)viewDidLoad { NSString *path = [[NSBundle mainBundle] pathForResource:@"H-5" ofType:@"jpg"]; NSData *data = [NSData dataWithContentsOfFile:path]; image = [UIImage imageWithData:data]; CGRect pageRect = CGRectMake(0, 0, image.size.width, image.size.height); CATiledLayer *tiledLayer = [CATiledLayer layer]; tiledLayer.anchorPoint = CGPointMake(0.0f, 1.0f); tiledLayer.delegate = self; tiledLayer.tileSize = CGSizeMake(1000, 1000); tiledLayer.levelsOfDetail = 6; tiledLayer.levelsOfDetailBias = 0; tiledLayer.bounds = pageRect; tiledLayer.transform = CATransform3DMakeScale(1.0f, -1.0f, 0.3f); myContentView = [[UIView alloc] initWithFrame:self.view.bounds]; [myContentView.layer addSublayer:tiledLayer]; UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds]; scrollView.delegate = self; scrollView.contentSize = pageRect.size; scrollView.minimumZoomScale = .2; scrollView.maximumZoomScale = 1; [scrollView addSubview:myContentView]; [self.view addSubview:scrollView]; } - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView { return myContentView; } - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx { NSString *path = [[NSBundle mainBundle] pathForResource:@"H-5" ofType:@"jpg"]; NSData *data = [NSData dataWithContentsOfFile:path]; image = [UIImage imageWithData:data]; CGRect imageRect = CGRectMake (0.0, 0.0, image.size.width, image.size.height); CGContextDrawImage (ctx, imageRect, [image CGImage]); } @end

    Read the article

  • drawing different quartz 2d shapes

    - by Cosizzle
    Hello, I'm trying to make a small application or demo thats a tab bar application. Each bar item loads in a new view. I've created a quartzView class that is called on by the controller: - (void)viewDidLoad { quartzView *drawingView = [[quartzView alloc] initWithFrame:CGRectMake(0,0,320,480)]; [self.view addSubview:drawingView]; [super viewDidLoad]; } From my understanding, the drawRect method must be triggered in order to render the object and to draw it. This is my quartzView class: #import "quartzView.h" @implementation quartzView #pragma mark shapes // Blue Circle - (void)drawRect:(CGRect)rect { NSLog(@"Trying to draw..."); CGContextRef ctx = UIGraphicsGetCurrentContext(); //get the current context CGContextClearRect(ctx, rect); //clear off the screen //draw a red square CGContextSetRGBFillColor(ctx, 255, 0, 0, 1); CGContextFillRect(ctx, CGRectMake(10, 10, 50, 50)); } // ==================================================================================================== #pragma mark - - (id)initWithFrame:(CGRect)frame { if (self = [super initWithFrame:frame]) { // Initialization code } return self; } - (void)dealloc { [super dealloc]; } @end So how would I go about to say, if the view is view one, draw a square if it's view two draw a circle. And so on.

    Read the article

  • Constant NSDictionary/NSArray for class methods.

    - by Jeff B
    I am trying to code a global lookup table of sorts. I have game data that is stored in character/string format in a plist, but which needs to be in integer/id format when it is loaded. For instance, in the level data file, a "p" means player. In the game code a player is represented as the integer 1. This let's me do some bitwise operations, etc. I am simplifying greatly here, but trying to get the point across. Also, there is a conversion to coordinates for the sprite on a sprite sheet. Right now this string-integer, integer-string, integer-coordinate, etc. conversion is taking place in several places in code using a case statement. This stinks, of course, and I would rather do it with a dictionary lookup. I created a class called levelInfo, and want to define the dictionary for this conversion, and then class methods to call when I need to do a conversion, or otherwise deal with level data. NSString *levelObjects = @"empty,player,object,thing,doohickey"; int levelIDs[] = [0,1,2,4,8]; // etc etc @implementation LevelInfo +(int) crateIDfromChar: (char) crateChar { int idx = [[crateTypes componentsSeparatedByString:@","] indexOfObject: crateChar]; return levelIDs[idx]; } +(NSString *) crateStringFromID: (int) crateID { return [[crateTypes componentsSeparatedByString:@","] objectAtIndex: crateID]; } @end Is there a better way to do this? It feels wrong to basically build these temporary arrays, or dictionaries, or whatever for each call to do this translation. And I don't know of a way to declare a constant NSArray or NSDictionary. Please, tell me a better way....

    Read the article

  • Threading best practice when using SFTP in C#

    - by Christian
    Ok, this is more of a "conceptual question", but I hope to get some pointers in the right direction. First, the desired scenario: I want to query an SFTP server for directory and file lists, and I want to upload or download files simultaneously. Both things are pretty easy using an SFTP class provided by Tamir.SharpSsh, but if I only use one thread, it is kind of slow. Especially the recursion into subdirs gets very "UI blocking", because we are talking about 10,000s of directories. My basic approach is simple: create some kind of "pool" where I keep 10 open SFTP connections. Then query the first worker for a list of dirs. Once this list is obtained, send the next free workers (e.g. 1-10, the first one is also free again) to get the subdirectory details. As soon as a worker is free, send it for the subsubdirs. And so on... I know the ThreadPool and simple Threads, and did some tests. What confuses me a little bit is the following: I basically need a list of threads I create (say 10); connect all threads to the server; if a connection drops, create a new thread / SFTP client; if there is work to do, take the first free thread and handle the work. I am currently not sure about the implementation details, especially the "work to do" and the "maintain list of threads" parts. Is it a good idea to: enclose the work in an object containing a job description (path) and a callback; send the threads into an infinite loop with a 100ms wait between checks for work; if SFTP is dead, either revive it or kill the whole thread and create a new one? How do I encapsulate this: do I write my own "10ThreadsManager" or are there some out there? Ok, so far... Btw, I could also use PRISM events and commands, but I think the problem is unrelated. Perhaps the EventModel could signal that processing of a "work package" is done... Thanks for any ideas and critique. Chris
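
    A minimal sketch of the pool idea, in Java rather than C# since the pattern itself is language-agnostic: a fixed pool of workers lists directories and feeds newly discovered subdirectories back in as new jobs. listSubdirectories is a placeholder for the real SharpSsh/SFTP call, and the pending counter is just one way to detect that the whole crawl is finished.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;

        public class DirectoryCrawler {
            private final ExecutorService pool = Executors.newFixedThreadPool(10); // roughly "10 open SFTP connections"
            private final AtomicInteger pending = new AtomicInteger();             // jobs submitted but not yet finished

            // Placeholder for the real SFTP "list directory" call.
            private List<String> listSubdirectories(String path) {
                throw new UnsupportedOperationException("wire the SFTP client in here");
            }

            private void submit(String path) {
                pending.incrementAndGet();
                pool.execute(() -> {
                    try {
                        for (String child : listSubdirectories(path)) {
                            submit(child);                    // discovered subdirectories become new jobs
                        }
                    } finally {
                        if (pending.decrementAndGet() == 0) { // last outstanding job: wake the caller
                            synchronized (pending) { pending.notifyAll(); }
                        }
                    }
                });
            }

            public void crawl(String root) throws InterruptedException {
                submit(root);
                synchronized (pending) {
                    while (pending.get() > 0) pending.wait(); // block until every directory has been listed
                }
                pool.shutdown();
            }
        }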

    Read the article

  • problem with addressfilter in WCF

    - by Zé Carlos
    I've built my own WCF channel with all the necessary stuff (encoders, bindings, etc.) to use it with ServiceHost. I just want to build the "channel stack", making no customizations at the "Service Model" level. To accomplish this, my encoder returns perfect ServiceModel.Messages with an XML infoset just like any other channel does. Let's assume the following service implementation: [ServiceContract(Namespace = "http://MyNS")] public interface IService1 { [OperationContract(IsOneWay = true)] void dummy(); } public class Service1 : IService1 { public void dummy() { Console.WriteLine("In Service1:dummy()"); } } I used this service through other bindings and traced the following ServiceModel.Message contents (SOAP format): <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing"> <s:Header> <a:Action s:mustUnderstand="1">http://MyNS/IService1/dummy</a:Action> <a:To s:mustUnderstand="1">amqp://localhost</a:To> </s:Header> <s:Body> <dummy xmlns="http://MyNS"></dummy> </s:Body> </s:Envelope> Then (just to debug) I changed my encoder to always return this message. When I use my custom channel, the WCF runtime replies with a fault message saying: "The message with To '' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree." I read that the default EndpointDispatcher.AddressFilter simply looks at the "To" header and delivers the message to the corresponding service. This works with other bindings, so why doesn't it work with my custom channel too? Is there any way for me to check what the default AddressFilter is doing? Thanks

    Read the article

  • Property being immediately reset by ApplicationSetting Property Binding

    - by Slider345
    I have a .net 2.0 windows application written in c#, which currently uses several project settings to store user configurations. The forms in the application are made up of lots of user controls, each of which have properties that need to be set to these project settings. Right now these settings are manually assigned to the user control properties. I was hoping to simplify the code by replacing the manual implementation with ApplicationSettings Property Bindings. However, my first property is not behaving properly at all. The setting is an integer, used to record a port number typed into a text box. The setting is bound to an integer property on a user control, and that property sets the Text property on a TextBox control. When I type a new value into the textbox at runtime, as soon as the textbox loses focus, it is immediately replaced by the original value. A breakpoint on the property shows that it is immediately setting the property to the setting from the properties collection after I set it. Can anyone see what I'm doing wrong? Here's some code: The setting: [global::System.Configuration.UserScopedSettingAttribute()] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Configuration.DefaultSettingValueAttribute("1000")] public int Port { get{ return ((int)(this["Port"])); } set{ this["Port"] = value; } } The binding: this.ctrlNetworkConfig.DataBindings.Add(new System.Windows.Forms.Binding("PortNumber", global::TestProject.Properties.Settings.Default, "Port", true, System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged)); this.ctrlNetworkConfig.PortNumber = global::TestProject.Properties.Settings.Default.Port; And lastly, the property on the user control: public int PortNumber { get{ int port; if(int.TryParse(this.txtPortNumber.Text, out port)) return port; else return 0; } set{ txtPortNumber.Text = value.ToString(); } } Any thoughts? Thanks in advance for your help. EDIT: Sorry about the formatting, trying to correct.

    Read the article

  • Bouncycastle encryption algorithms not provided

    - by David Read
    I'm trying to use Bouncy Castle with Android to implement ECDH and ElGamal. I've added the Bouncy Castle jar file (bcprov-jdk16-144.jar) and written some code that works with my computer's JVM; however, when I try to port it to my Android application it throws: java.security.NoSuchAlgorithmException: KeyPairGenerator ECDH implementation not found A sample of the code is: Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider()); java.security.KeyPairGenerator keyGen = org.bouncycastle.jce.provider.asymmetric.ec.KeyPairGenerator.getInstance("ECDH", "BC"); ECGenParameterSpec ecSpec = new ECGenParameterSpec("prime192v1"); keyGen.initialize(ecSpec, SecureRandom.getInstance("SHA1PRNG")); KeyPair pair = keyGen.generateKeyPair(); PublicKey pubk = pair.getPublic(); PrivateKey prik = pair.getPrivate(); I then wrote a simple program to see which encryption algorithms are available and ran it on my Android emulator and on my computer's JVM. The code was: Set<Provider.Service> rar = new org.bouncycastle.jce.provider.BouncyCastleProvider().getServices(); Iterator<Provider.Service> ir = rar.iterator(); while(ir.hasNext()) System.out.println(ir.next().getAlgorithm()); On Android I do not get any of the EC algorithms, while run normally on my computer it's fine. I'm also getting the following two errors when compiling for a lot of the Bouncy Castle classes: 01-07 17:17:42.548: INFO/dalvikvm(1054): DexOpt: not resolving ambiguous class 'Lorg/bouncycastle/asn1/ASN1Encodable;' 01-07 17:17:42.548: DEBUG/dalvikvm(1054): DexOpt: not verifying 'Lorg/bouncycastle/asn1/ess/OtherSigningCertificate;': multiple definitions What am I doing wrong?
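
    A small, generic diagnostic (plain JCA calls, nothing Bouncy Castle specific) that lists every registered provider and its KeyPairGenerator algorithms, which shows whether any EC generator is actually registered on the device. One common cause on older Android versions is that the platform already ships a trimmed-down provider registered under the same "BC" name, which can shadow the algorithms in the full jar.

        import java.security.Provider;
        import java.security.Security;

        public class ListKeyPairGenerators {
            public static void main(String[] args) {
                for (Provider p : Security.getProviders()) {
                    System.out.println(p.getName() + " " + p.getVersion());
                    for (Provider.Service s : p.getServices()) {
                        if ("KeyPairGenerator".equals(s.getType())) {
                            System.out.println("  KeyPairGenerator: " + s.getAlgorithm());
                        }
                    }
                }
            }
        }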

    Read the article

  • maintaining a large list in python

    - by Oren
    I need to maintain a large list of Python pickleable objects that will quickly execute the following algorithm: the list will have 4 pointers and can: read/write each of the 4 pointed nodes; insert new nodes before the pointed nodes; increase a pointer by 1; pop nodes from the beginning of the list (without overlapping the pointers); add nodes at the end of the list. The list can be very large (300-400 MB), so it shouldn't be held in RAM. The nodes are small (100-200 bytes) but can contain various types of data. The most efficient way it can be done, in my opinion, is to implement some paging mechanism. On the hard drive there would be a database of a linked list of pages, where at any moment up to 5 of them are loaded in memory. On insertion, if some max page size is reached, the page will be split into 2 smaller pages, and on pointer increment, if a pointer is increased beyond its loaded page, the following page will be loaded instead. This is a good solution, but I don't have the time to write it, especially if I want to make it generic and implement all the Python list features. Using SQL tables is not good either, since I'd need to implement a complex index-key mechanism. I tried ZODB and zc.blist, which implement a BTree-based list that can be stored in a ZODB database file, but I don't know how to configure it so the above features will run in reasonable time. I don't need all the multi-threading/transaction features. No one else will touch the database file except for my single-threaded program. Can anyone explain to me how to configure ZODB/zc.blist so the above features will run fast, or show me a different large-list implementation?

    Read the article

  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to construct the question, but I'm interested to know what you guys think of the following situations and which one you would prefer. We're working on a client-server application with WinForms, and we have a control that has some fields automatically calculated upon filling another field. So we have a currency field which, when filled by the user, determines an automatic filling of another field, maybe more fields. When the user fills the currency field, a Currency object is retrieved from a cache based on the string entered by the user. If the entered currency is not found in the cache, a null reference is returned by the cache object. Further down, when asking the application layer to compute the other fields based on the currency, a null currency yields a null for the specific field. This way the default, implicit behavior is to clear all fields, which is the expected behavior. What I would call the explicit implementation would be to verify whether the Currency object is null, in which case the dependent fields are cleared explicitly. I think the latter version is clearer, less error prone and more testable, but it implies a form of redundancy. The former version is not as clear, and it implies a certain behavior from the application layer which is not expressed in the tests. Maybe it is in the lower-layer tests, but when the need arises to modify the lower layers so that, given a null currency, something else should be returned, I don't think a test that says just that without a motivation is going to be an impediment to introducing a bug in the upper layers. What do you guys think?

    Read the article

  • Is it possible to create a custom, animated MKAnnotationView?

    - by smountcastle
    I'm trying to simulate the user location animation in MapKit (where-by the user's position is represented by a pulsating blue dot). I've created a custom subclass of MKAnnotationView and in the drawRect method I'm attempting to cycle through a set of colors. Here's a simpler implementation of what I'm doing: - (void)drawRect:(CGRect)rect { float magSquared = event.magnitude * event.magnitude; CGContextRef context = UIGraphicsGetCurrentContext(); if (idx == -1) { r[0] = 1.0; r[1] = 0.5; r[2] = 0; b[0] = 0; b[1] = 1.0; b[2] = 0.5; g[0] = 0.5; g[1] = 0; g[2] = 1.0; idx = 0; } // CGContextSetRGBFillColor(context, 1.0, 1.0 - magSquared * 0.015, 0.211, .6); CGContextSetRGBFillColor(context, r[idx], g[idx], b[idx], 0.75); CGContextFillEllipseInRect(context, rect); idx++; if (idx > 3) idx = 0; } Unfortunately this just causes the annotations to be one of the 3 different colors and doesn't cycle through them. Is there a way to force the MKAnnotations to continually redraw so that it appears to be animated?

    Read the article

  • Comparing objects with IDispatch to get main frame only (BHO)

    - by shaimagz
    I don't know if anyone here is familiar with BHOs (Browser Helper Objects), but an expert in C++ can help me too. In my BHO I want to run the OnDocumentComplete() function only on the main frame - the first container, and not all the iframes inside the current page (an alternative is to run some code only when this is the main frame). I can't figure out how to track when it is the main frame that is being populated. After searching on Google I found out that each frame has an "IDispatch* pDisp", and I have to compare it with a pointer to the first one. This is the main function: STDMETHODIMP Browsarity::SetSite(IUnknown* pUnkSite) { if (pUnkSite != NULL) { // Cache the pointer to IWebBrowser2. HRESULT hr = pUnkSite->QueryInterface(IID_IWebBrowser2, (void **)&m_spWebBrowser); if (SUCCEEDED(hr)) { // Register to sink events from DWebBrowserEvents2. hr = DispEventAdvise(m_spWebBrowser); if (SUCCEEDED(hr)) { m_fAdvised = TRUE; } } } else { // Unregister event sink. if (m_fAdvised) { DispEventUnadvise(m_spWebBrowser); m_fAdvised = FALSE; } // Release cached pointers and other resources here. m_spWebBrowser.Release(); } // Call base class implementation. return IObjectWithSiteImpl<Browsarity>::SetSite(pUnkSite); } This is where I want to know whether it's the main window (frame) or not: void STDMETHODCALLTYPE Browsarity::OnDocumentComplete(IDispatch *pDisp, VARIANT *pvarURL) { // as you can see, this function gets the IDispatch *pDisp, which is unique to every frame //some code } I asked this question on a Microsoft forum and I got an answer without an explanation of how to actually implement it: http://social.msdn.microsoft.com/Forums/en-US/ieextensiondevelopment/thread/7c433bfa-30d7-42db-980a-70e62640184c

    Read the article

  • How do I get google protocol buffer messages over a socket connection without disconnecting the client?

    - by Dan
    Hi there, I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However so far I'm running into an issue when it comes to the server receiving the data; it only seems to process it after the client connection has been terminated. This points to me that the data is getting sent, but the server is keeping its inputstream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows: iPhone: Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build]; RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build]; NSData *data = [request data]; AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self]; if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){ [self updateLabel:@"Problem connecting to socket!"]; } else { [self updateLabel:@"Sending data to server..."]; [socket writeData:data withTimeout:-1 tag:0]; [self updateLabel:@"Data sent, disconnecting"]; //[socket disconnect]; } Java: try { RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream()); Person person = wrapper.getPerson(); if (person != null) { System.out.println("Persons name is " + person.getName()); socket.close(); } On running this, it seems to hang on the line where the RequestWrapper is processing the inputStream. I did try replacing the socket writedata method with [request writeToOutputStream:[socket getCFWriteStream]]; Which I thought might work, however I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain that it doesn't contain an invalid tag as the message works when sending it via the writedata method. Any help on the matter would be greatly appreciated! Cheers! Dan (EDIT: I should mention, I am using the metasyntactic gpb code; and the cocoaasyncsocket implementation)
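
    One likely explanation is that parseFrom(InputStream) reads until end-of-stream, which is why the server only wakes up when the client disconnects. Below is a sketch of the server side using length-delimited messages instead, assuming the generated Java class exposes parseDelimitedFrom as in the standard protobuf Java runtime; the client would then have to prefix each message with a varint length (what writeDelimitedTo does in Java), and whether the Objective-C port offers an equivalent or needs it done by hand is left open here.

        import java.io.InputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ProtoServer {
            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(6666);
                     Socket socket = server.accept();
                     InputStream in = socket.getInputStream()) {
                    RequestWrapper wrapper;
                    // parseDelimitedFrom returns after each length-prefixed message
                    // and returns null once the stream is closed.
                    while ((wrapper = RequestWrapper.parseDelimitedFrom(in)) != null) {
                        System.out.println("Persons name is " + wrapper.getPerson().getName());
                    }
                }
            }
        }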

    Read the article

  • Javascript Post Request like a Form Submit

    - by Joseph Holsten
    I'm trying to direct a browser to a different page. If I wanted a GET request, I might say document.location.href = 'http://example.com/q=a'; But the resource I'm trying to access won't respond properly unless I use a POST request. If this were not dynamically generated, I might use the HTML <form action="http://example.com/" method="POST"> <input type="hidden" name="q" value="a"> </form> Then I would just submit the form from the DOM. But really I would like JavaScript that allows me to say post_to_url('http://example.com/', {'q':'a'}); What's the best cross browser implementation? Edit I'm sorry I was not clear. I need a solution that changes the location of the browser, just like submitting a form. If this is possible with XMLHTTPRequest, it is not obvious. And this should not be asynchronous, nor use XML, so AJAX is not the answer.

    Read the article

  • Ninject 2 + ASP.NET MVC 2 Binding Types from External Assemblies

    - by Malkier
    Hi, I'm just trying to get started with Ninject 2 and ASP.NET MVC 2. I have followed this tutorial http://www.craftyfella.com/2010/02/creating-aspnet-mvc-2-controller.html to create a controller factory with Ninject and to bind a first abstraction to a concrete implementation. Now I want to load a repository type from another assembly (where my concrete SQL repositories are located) and I just can't get it to work. Here's my code: Global.asax.cs protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory()); } Controller Factory: public class Kernelhelper { public static IKernel GetTheKernel() { IKernel kernel = new StandardKernel(); kernel.Load(System.Reflection.Assembly.Load("MyAssembly")); return kernel; } } public class MyControllerFactory : DefaultControllerFactory { private IKernel kernel = Kernelhelper.GetTheKernel(); protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType) { return controllerType == null ? null : (IController)kernel.Get(controllerType); } } In "MyAssembly" there is a Module: public class ExampleConfigModule : NinjectModule { public override void Load() { Bind<Domain.CommunityUserRepository>().To<SQLCommunityUserRepository>(); } } Now, when I just slap a MockRepository object into my entry point it works just fine: the controller, which needs the repository, works fine. The kernel.Load(System.Reflection.Assembly.Load("MyAssembly")); also does its job and registers the module, but as soon as I call the controller which needs the repository I get an ActivationException from Ninject: No matching bindings are available, and the type is not self-bindable. Activation path: 2) Injection of dependency CommunityUserRepository into parameter _rep of constructor of type AccountController 1) Request for AccountController Can anyone give me a best-practice example for binding types from external assemblies (which really is an important aspect of dependency injection)? Thank you!

    Read the article

  • help in Donald B. Johnson's algorithm, I cannot understand the pseudo code (PART II)

    - by Pitelk
    Hi all, I cannot understand a certain part of the paper published by Donald Johnson about finding cycles (circuits) in a graph. More specifically, I cannot understand what the matrix Ak is which is mentioned in the following line of the pseudo code: Ak := adjacency structure of strong component K with least vertex in subgraph of G induced by {s, s+1, ..., n}; To make things worse, some lines later it mentions "for i in Vk do" without declaring what Vk is... As far as I understand, we have the following: 1) In general, a strong component is a sub-graph of a graph in which for every node of this sub-graph there is a path to any other node of the sub-graph (in other words, you can reach any node of the sub-graph from any other node of the sub-graph). 2) A sub-graph induced by a list of nodes is a graph containing all these nodes plus all the edges connecting these nodes. In the paper the mathematical definition is "F is a subgraph of G induced by W if W is a subset of V and F = (W, {(u,y) | u,y in W and (u,y) in E})", where u,y are nodes, E is the set of all the edges in the graph, and W is a set of nodes. 3) In the code implementation the nodes are named by integer numbers 1 ... n. 4) I suspect that Vk is the set of nodes of the strong component K. Now to the question. Let's say we have a graph G = (V,E) with V = {1,2,3,4,5,6,7,8,9} which can be divided into 3 strong components: SC1 = {1,4,7,8}, SC2 = {2,3,9}, SC3 = {5,6} (and their edges). Can anybody give me an example, for s = 1, s = 2, s = 5, of what Vk and Ak are going to be according to the code? The pseudo code is in my previous question at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code and the paper can be found at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code Thank you in advance.
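
    A small sketch (not from the paper) of point 2 above, just to make "induced by {s, s+1, ..., n}" concrete: keep only the vertices numbered s or higher and only the edges whose endpoints both survive. The strong components of this filtered graph are what Vk and Ak are then taken from.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class InducedSubgraph {
            // g maps each vertex (1..n) to its adjacency list.
            // Returns the subgraph induced by {s, s+1, ..., n}: vertices below s
            // and any edge touching them are dropped.
            static Map<Integer, List<Integer>> induce(Map<Integer, List<Integer>> g, int s) {
                Map<Integer, List<Integer>> sub = new HashMap<>();
                for (Map.Entry<Integer, List<Integer>> e : g.entrySet()) {
                    if (e.getKey() < s) continue;
                    List<Integer> kept = new ArrayList<>();
                    for (int v : e.getValue()) {
                        if (v >= s) kept.add(v);
                    }
                    sub.put(e.getKey(), kept);
                }
                return sub;
            }
        }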

    Read the article

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler and I'm looking for guidelines for opcode alignment. 1) I've read comments that briefly "recommend" alignment by adding nops after calls 2) I've also read about using nop for optimizing sequences for parallelism. 3) I've read that alignment of ops is good for "cache" performance Usually these comments don't give any supporting references. Its one thing to read a blog or a comment that says, "its a good idea to do such and such", but its another to actually write a compiler that implements specific op sequences and realize most material online, especially blogs, are not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc. to see what real world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd byte instruction immediately after whatever previous instruction sequence there was. So the compiler is not taking any special care in most cases. I see "nop" here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.

    Read the article

  • Need Advice on building the database. all in one table or split?

    - by Ibrahim Azhar Armar
    Hello, I am developing an application for a real-estate company. The problem I am facing is about implementing the database; I am just confused about which way to adopt, and I would appreciate it if you could help me reason about the database implementation. Here is my situation. a) I have to store the property details in the database. b) The properties belong to approximately 4-5 categories, for example: residential, commercial, industrial, etc. c) The categories have sub-categories. For example, the residential category will have sub-categories such as Apartment / Independent House / Villa / Farm House / Studio Apartment, etc., and in the same way commercial, industrial or agricultural will also have sub-categories. d) Each sub-category will have to store different values. For example, a residence will have features like bedrooms / kitchens / hall / bathrooms, etc.; the features depend on the sub-category. For an example of how I would want to implement my application you can have a look at this site: http://www.magicbricks.com/bricks/postProperty.html I could possibly think of a solution like this: a) create four to five tables depending upon the categories which exist (the problem is that categories might increase in the future); b) create different tables for all the features, location, price, description, and merge the common property data into one table; for example, all properties will have common attributes such as location, total area, etc. What would you advise given the current situation? Thank you.

    Read the article

  • iOS5: Confusion with loadview and init and instance variables

    - by user743550
    I'm new to iOS 5 and storyboarding. I noticed that if I declare an instance variable inside my view controller's .h file and set its value inside init in the view controller's .m file, when the view of the view controller is displayed, my instance variables show null inside viewDidLoad. In order to get my variables, I'd need to call [self init] inside viewDidLoad. My questions are: @interface tableViewController : UITableViewController { NSMutableArray *myvariable; } @end @implementation tableViewController -(id)init { myvariable = [[NSMutableArray alloc]initWithObjects:@"Hi2",@"Yo2",@"whatsup2", nil]; } - (void)viewDidLoad { NSLog(@"%@",myvariable); // DISPLAYS NULL [super viewDidLoad]; } Why aren't my variables available in viewDidLoad when I declared and implemented them in my .h and .m files? If that's the case, are viewDidLoad or viewWillAppear the common places to load the data for the view controller? It looks like even if you instantiate a view controller, the init function gets called but viewDidLoad doesn't necessarily have the variables to be displayed. Where's the right place/method to load the model (data) for my view controller? Thanks in advance

    Read the article

  • Using Unit Tests While Developing Static Libraries in Obj-C

    - by macinjosh
    I'm developing a static library in Obj-C for a CocoaTouch project. I've added unit testing to my Xcode project by using the built in OCUnit framework. I can run tests successfully upon building the project and everything looks good. However I'm a little confused about something. Part of what the static library does is to connect to a URL and download the resource there. I constructed a test case that invokes the method that creates a connection and ensures the connection is successful. However when my tests run a connection is never made to my testing web server (where the connection is set to go). It seems my code is not actually being ran when the tests happen? Also, I am doing some NSLog calls in the unit tests and the code they run, but I never see those. I'm new to unit testing so I'm obviously not fully grasping what is going on here. Can anyone help me out here? P.S. By the way these are "Logical Tests" as Apple calls them so they are not linked against the library, instead the implementation files are included in the testing target.

    Read the article

  • C++ thread safety - exchange data between worker and controller

    - by peterchen
    I still feel a bit unsafe about the topic and hope you folks can help me - for passing data (configuration or results) between a worker thread polling something and a controlling thread interested in the most recent data, I've ended up using more or less the following pattern repeatedly: Mutex m; tData * stage; // temporary, accessed concurrently // send data, gives up ownership, receives old stage if any tData * Send(tData * newData) { ScopedLock lock(m); swap(newData, stage); return newData; } // receiving thread fetches latest data here tData * Fetch(tData * prev) { ScopedLock lock(m); if (stage != 0) { // ... release prev prev = stage; stage = 0; } return prev; // now current } Note: this is not supposed to be a full producer-consumer queue, only the most recent data is relevant. Also, I've skimped on resource management somewhat here. When necessary I'm using two such stages: one to send config changes to the worker, and one for sending back results. Now, my questions, assuming that ScopedLock implements a full memory barrier: Do stage and/or workerData need to be volatile? Is volatile necessary for tData members? Can I use smart pointers instead of the raw pointers - say boost::shared_ptr? Anything else that can go wrong? I am basically trying to avoid "volatile infection" spreading into tData, and to minimize lock contention (a lock-free implementation seems possible, too). However, I'm not sure if this is the easiest solution. ScopedLock acts as a full memory barrier. Since all this is more or less platform dependent, let's say Visual C++ x86 or x64, though differences/notes for other platforms are welcome, too. (A preliminary "thanks but" for recommending libraries such as Intel TBB - I am trying to understand the platform issues here.)
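
    A sketch of the same "latest value mailbox" idea, written in Java for illustration rather than C++: an atomic reference swap gives the lock-free variant hinted at above, with the same drop-intermediate-values semantics. The C++ equivalent would be an atomic pointer exchange (e.g. std::atomic<tData*>::exchange, or InterlockedExchangePointer on Windows).

        import java.util.concurrent.atomic.AtomicReference;

        // Worker publishes its newest result; the controller takes whatever is newest.
        // Intermediate values may be overwritten, matching the "only the most recent
        // data is relevant" requirement above.
        public class Mailbox<T> {
            private final AtomicReference<T> stage = new AtomicReference<>();

            // Worker side: publish newData, get back the previous unclaimed value
            // (or null) so the caller can release or reuse it.
            public T send(T newData) {
                return stage.getAndSet(newData);
            }

            // Controller side: take the latest value, or null if nothing new arrived.
            public T fetch() {
                return stage.getAndSet(null);
            }
        }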

    Read the article
