Search Results

Search found 4685 results on 188 pages for 'proper'.

  • SharePoint OLE DB - field not updateable

    - by Pandincus
    I need to write a simple C# .NET application to retrieve, update, and insert some data in a SharePoint list. I am NOT a SharePoint developer, and I don't have control over our SharePoint server. I would prefer not to develop this in a proper SharePoint development environment, simply because I don't want to have to deploy my application on the SharePoint server -- I'd rather just access the data externally. Anyway, I found out that you can access SharePoint data using OLE DB, and I tried it successfully using some ADO.NET:

        var db = DatabaseFactory.CreateDatabase();
        DataSet ds = new DataSet();
        using (var command = db.GetSqlStringCommand("SELECT * FROM List"))
        {
            db.LoadDataSet(command, ds, "List");
        }

    The above works. However, when I try to insert:

        using (var command = db.GetSqlStringCommand(
            "INSERT INTO List ([HeaderName], [Description], [Number]) VALUES ('Blah', 'Blah', 100)"))
        {
            db.ExecuteNonQuery(command);
        }

    I get this error:

        Cannot update 'HeaderName'; field not updateable.

    I did some Googling, and apparently you cannot insert data through OLE DB! Does anyone know of any possible workarounds? I could try using SharePoint Web Services, but I tried that initially and had a heck of a time authenticating. Is that my only option?
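
    The Web Services route mentioned above is the usual workaround. As a hedged sketch -- assuming a Visual Studio web reference named ListsService pointing at the server's Lists.asmx, and that the list really is named "List" -- an insert via UpdateListItems looks roughly like this:

        using System.Net;
        using System.Xml;

        public static class SharePointListInsert
        {
            // "ListsService" is an assumed web reference to
            // http://yourserver/_vti_bin/Lists.asmx (hypothetical URL).
            public static void InsertRow()
            {
                var lists = new ListsService.Lists();
                // Integrated Windows auth often resolves the authentication trouble
                // mentioned above; NetworkCredential works for explicit logins.
                lists.Credentials = CredentialCache.DefaultCredentials;

                XmlDocument doc = new XmlDocument();
                XmlElement batch = doc.CreateElement("Batch");
                batch.SetAttribute("OnError", "Continue");
                // Cmd='New' inserts a row; Field names must be the list's internal names.
                batch.InnerXml =
                    "<Method ID='1' Cmd='New'>" +
                    "<Field Name='HeaderName'>Blah</Field>" +
                    "<Field Name='Description'>Blah</Field>" +
                    "<Field Name='Number'>100</Field>" +
                    "</Method>";
                XmlNode result = lists.UpdateListItems("List", batch);
            }
        }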

  • How to approach ninject container/kernel in inheritance situation

    - by Bas
    I have the following situation:

        class RuleEngine {}
        abstract class RuleImplementation {}
        class RootRule : RuleImplementation {}
        class Rule1 : RuleImplementation {}
        class Rule2 : RuleImplementation {}

    The RuleEngine is injected by Ninject and has a kernel at its disposal. The role of the RuleEngine is to fire off the root rule, which in turn loads all the other rules, also using Ninject, but using a different module and creating a new kernel. Now my question is: some of the rules require dependencies that I want to inject using Ninject. What would be the best way to create the kernel for these rules while still being able to unit test properly? (The kernel shouldn't become a real pain in my tests.) I've been thinking of the following possibilities:

    1. The kernel that I use in the RuleEngine class could be tossed around to RuleImplementation and thus be available to every rule. But tossing kernels around isn't really something I want to do.
    2. When creating the rules, I could pass the kernel (which creates the rules) as a constructor argument to each rule.
    3. I could create a method inside RuleImplementation that creates a kernel, and let the rules retrieve it through a getter in the abstract class.

    What's the convention for passing around/creating kernels? Just create new kernels, or reuse them?
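
    For illustration, the usual alternative to handing the kernel to each rule is to let a module bind the rules' dependencies so they arrive by constructor injection, keeping the kernel at the composition root. A minimal sketch under that assumption -- IRuleDependency, ConcreteRuleDependency, and RuleModule are invented names:

        using Ninject;
        using Ninject.Modules;

        public class RuleModule : NinjectModule
        {
            public override void Load()
            {
                // Bind the rules and whatever the rules themselves depend on.
                Bind<RuleImplementation>().To<Rule1>();
                Bind<RuleImplementation>().To<Rule2>();
                Bind<IRuleDependency>().To<ConcreteRuleDependency>();  // hypothetical
            }
        }

        public static class RuleEngineExample
        {
            public static void Run()
            {
                var kernel = new StandardKernel(new RuleModule());
                foreach (var rule in kernel.GetAll<RuleImplementation>())
                {
                    // Each rule arrives with its dependencies constructor-injected;
                    // a unit test can swap RuleModule for a module full of fakes.
                }
            }
        }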

  • What is the correct way to synchronize a shared, static object in Java?

    - by johnrock
    This is a question about the proper way to synchronize a shared object in Java. One caveat is that the object I want to share must be accessed from static methods. My question is: if I synchronize on a static field, does that lock the class the field belongs to, similar to the way a synchronized static method would? Or will it only lock the field itself? In my specific example I am asking: will calling PayloadService.getPayload() or PayloadService.setPayload() lock PayloadService.payload, or will it lock the entire PayloadService class?

        public class PayloadService extends Service {

            private static PayloadDTO payload = new PayloadDTO();

            public static void setPayload(PayloadDTO payload) {
                synchronized (PayloadService.payload) {
                    PayloadService.payload = payload;
                }
            }

            public static PayloadDTO getPayload() {
                synchronized (PayloadService.payload) {
                    return PayloadService.payload;
                }
            }
            ...
        }

    Is this a correct/acceptable approach? In my example the PayloadService is a separate thread, updating the payload object at regular intervals; other threads need to call PayloadService.getPayload() at random intervals to get the latest data, and I need to make sure that they don't lock the PayloadService out of carrying out its timer task.

  • Cocoa threading question.

    - by Steve918
    I would like to implement an observer pattern in Objective-C, where the observer implements an interface similar to SKPaymentTransactionObserver and the observable class just extends my base Observable. My observable class looks something like what is below. Notice I'm making copies of the observers before enumeration to avoid throwing an exception. I've tried adding an NSLock around addObserver and notifyObservers, but I run into a deadlock. What would be the proper way to handle concurrency when observers are being added while notifications are being sent?

        @implementation Observable

        -(void)notifyObservers:(SEL)selector {
            @synchronized(self) {
                NSSet* observer_copy = [observers copy];
                for (id observer in observer_copy) {
                    if ([observer respondsToSelector:selector]) {
                        [observer performSelector:selector];
                    }
                }
                [observer_copy release];
            }
        }

        -(void)notifyObservers:(SEL)selector withObject:(id)arg1 withObject:(id)arg2 {
            @synchronized(self) {
                NSSet* observer_copy = [observers copy];
                for (id observer in observer_copy) {
                    if ([observer respondsToSelector:selector]) {
                        [observer performSelector:selector withObject:arg1 withObject:arg2];
                    }
                }
                [observer_copy release];
            }
        }

        -(void)addObserver:(id)observer {
            @synchronized(self) {
                [observers addObject:observer];
            }
        }

        -(void)removeObserver:(id)observer {
            @synchronized(self) {
                [observers removeObject:observer];
            }
        }

  • Static factory pattern with EJB3/JBoss

    - by purecharger
    I'm fairly new to EJBs and full-blown application servers like JBoss, having written and worked with special-purpose standalone Java applications for most of my career, with limited use of JEE. I'm wondering about the best way to adapt a commonly used design pattern to EJB3 and JBoss: the static factory pattern. In fact, this is Item #1 in Joshua Bloch's Effective Java (2nd edition). I'm currently working with the following factory:

        public class CredentialsProcessorFactory {

            private static final Log log = LogFactory.getLog(CredentialsProcessorFactory.class);

            private static Map<CredentialsType, CredentialsProcessor> PROCESSORS =
                new HashMap<CredentialsType, CredentialsProcessor>();

            static {
                PROCESSORS.put(CredentialsType.CSV, new CSVCredentialsProcessor());
            }

            private CredentialsProcessorFactory() {}

            public static CredentialsProcessor getProcessor(CredentialsType type) {
                CredentialsProcessor p = PROCESSORS.get(type);
                if (p == null)
                    throw new IllegalArgumentException(
                        "No CredentialsProcessor registered for type " + type.toString());
                return p;
            }
        }

    However, the implementation classes of CredentialsProcessor require injected resources such as a PersistenceContext, so I have made the CredentialsProcessor interface a @Local interface and marked each of the implementations with @Stateless. Now I can look them up in JNDI and use the injected resources. But now I have a disconnect, because I am no longer using the factory. My first thought was to change the getProcessor(CredentialsType) method to do a JNDI lookup and return the required SLSB instance, but then I need to configure and pass the proper qualified JNDI name. Before I go down that path, I wanted to do more research on accepted practices. How is this design pattern treated in EJB3 / JEE?

  • Virtual Earth (Bing) Pin "moves" when zoom level changes

    - by Ali
    Hi guys, I created a Virtual Earth (Bing) map to show a simple pin at a particular point. Everything works right now -- the pin shows up, and the title and description pop up on hover. The map is initially fully zoomed in on the pin, but the STRANGE problem is that when I zoom out, the pin moves slightly lower on the map. So if I started with the pin pointing somewhere in Toronto, if I zoom out enough the pin ends up in the middle of Lake Ontario! If I pan the map, the pin correctly stays in its proper location. When I zoom back in, it moves slightly up until it's back in its original, correct position! I've looked around for a solution for a while, but I can't understand it at all. Please help!! Thanks a lot!

    Script import: http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2

        $(window).ready(function(){
            GetMap();
        });

        function GetMap() {
            map = new VEMap('birdEye');
            map.SetCredentials("hash key from Bing website");
            map.LoadMap(new VELatLong(43.640144, -79.392593), 1, VEMapStyle.BirdseyeHybrid,
                        false, VEMapMode.Mode2D, true, null);

            var pin = new VEShape(VEShapeType.Pushpin, new VELatLong(43.640144, -79.392593));
            pin.SetTitle("Goes to Title of the Pushpin");
            pin.SetDescription("Goes as Description.");
            map.AddShape(pin);
        }

  • Winforms MVP with Castle Windsor - DI for subforms?

    - by Paul Kirby
    I'm building a WinForms app utilizing passive-view MVP and Castle Windsor as an IoC container. I'm still a little new to dependency injection and MVP, so I'm looking for some clarity... I have a main form which contains a number of user controls, and which also brings up other dialogs (e.g. login, options, etc.) as needed. My first question is: should I use constructor injection to get the presenters for these other views into the main view, or should I go back to a Service Locator-type pattern (which I've been told is a big no-no!)? Or something else? Second question: the user controls need to communicate back to the main form when they are "completed" (the definition of that state varies based on the control). Is there a standard way of hooking these up? I was thinking perhaps just wiring up events between the main presenter and the child presenters, but I'm not sure if this is proper thinking. I'd appreciate any help; it seems that the combination of MVP and IoC in WinForms isn't exactly well-documented.
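
    On the second question, a hedged sketch of the event wiring being considered -- ChildPresenter, Completed, and the handler names are illustrative, not from Castle Windsor or any MVP framework:

        using System;

        // A child control's presenter raises an event when its view is "completed".
        public class ChildPresenter
        {
            public event EventHandler Completed;  // hypothetical event

            protected void OnCompleted()
            {
                var handler = Completed;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        // The main presenter subscribes when the container hands it the children,
        // so the views stay passive and know nothing about each other.
        public class MainPresenter
        {
            public MainPresenter(ChildPresenter[] children)
            {
                foreach (var child in children)
                {
                    child.Completed += (sender, e) => HandleChildCompleted(sender);
                }
            }

            private void HandleChildCompleted(object child)
            {
                // React to the control reporting completion.
            }
        }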

  • Error in WCF service - Silverlight client communication.

    - by David
    I created a WCF service that I planned to consume in a Silverlight application, so I created the WCF service in the website host project. The service is a simple WCF service that only returns a number -- something like a Hello World WCF-SL. After adding a service reference in the Silverlight client project to the service URI and calling the service method asynchronously (through the generated proxy), I get the following exception in the callback method:

        An error occurred while trying to make a request to URI 'http://localhost:4566/SLService.svc'.
        This could be due to attempting to access a service in a cross-domain way without a proper
        cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need
        to contact the owner of the service to publish a cross-domain policy file and to ensure it
        allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal
        types in the web service proxy without using the InternalsVisibleToAttribute attribute.
        Please see the inner exception for more details.

    I only created a Hello World WCF service with nothing but a simple method that returns a dumb number, and it's hosted locally. Must I have a clientaccesspolicy.xml or crossdomain.xml? I access my service locally. Every time I create a new simple WCF-SL solution, I get this error. I use VS2010 and Silverlight 4, and I cannot get a simple WCF-SL solution working locally. Is there something wrong with the configuration? On another machine in the same network it works properly, so I assume something is misconfigured. Any thoughts?
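
    For reference: if the Silverlight app and the service are served from different origins (a different port counts as cross-domain), Silverlight fetches clientaccesspolicy.xml from the service's root before allowing the call. A permissive, development-only policy file -- tighten the domain list before deploying -- is commonly written like this:

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="SOAPAction">
                <domain uri="*"/>
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true"/>
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>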

  • Base64 en/decoding between xstream and iPhone SDK

    - by Matt McMinn
    I am passing a byte array from a Java server to an iPad client in XML. The server is using XStream to convert the byte array to XML with the EncodedByteArrayConverter, which should convert the array to Base64. Using XStream, I can decode the XML back to the proper byte array in a Java client, but in the iPad client I'm getting an invalid-length error. To do my decoding, I'm using the code at the bottom of this page. The length of the string is indeed not a multiple of 4, so there must be something strange with my string -- although since XStream can decode it just fine, I'm guessing there's just something I need to do on the iPad side to get it to decode. I've tried cutting off padding at the end of the string to get it down to the right size, and that does allow the decoder to work, but I end up with JPGs that have invalid headers and are not displayable. On the server side, I'm using the following code:

        Object rtrn = getByteArray();
        XStream xstream = new XStream();
        String xml = xstream.toXML(rtrn);

    On the client side, I'm calling the above decoder from the XML parsing callback like this:

        -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            NSLog(@"Converting data; string length: %d", [string length]);
            //NSLog(@"%@", string);
            NSData *data = [Base64 decode:string];
            NSLog(@"converted data length: %d", [data length]);
        }

    Any ideas what could be going wrong?

  • Adding a navigation bar to a web view in a tab bar app

    - by Henrik Erlandsson
    I've built the tab bar application in IB, with three tabs. The third tab happily displays a UIWebView where you can browse. The only thing missing is a back button, as not all web pages supply such a link, so I need a navigation bar hooked up properly to the correct classes. I'm still a bit unsure about exactly how the hierarchy should look in Interface Builder and how to hook it up properly. Currently, the third tab is hooked up to a referencing outlet called 'webnews' in the class 'thirdviewcontroller', and the UIWebView (under a normal UIView in the hierarchy, which in turn is under the third tab bar controller) is connected to the webnews outlet. How do I make the navbar control the webview? Do I add code to thirdviewcontroller.m that lets the navbar on the view drive the webview's 'back' function? What do I hook up as the delegate for it? Currently I have an app delegate, but that's hooked up to the tab bar. I'm not really after specific code as much as a general 'how it works' clue :) (Unless I can just add the navbar dynamically to the functioning app... but I don't think addSubView on viewWillAppear {} in thirdviewcontroller.m will create the proper functionality?) If I were to guess at the simplest solution, I'd guess: create a navbarcontroller.h/.m, slap a navbar on the view in IB, connect the third tab to navbarcontroller, connect the navbar to the webview (?), move the webnews outlet to navbarcontroller.h, and connect the webview to it. But I don't quite have the nerve to try; better to ask advice first.

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on Stack Overflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a BIGNUM. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a BIGNUM. I have already checked that there is no garbage before or after the RSA data, and I have looked online, but I'm stuck.

    Header segment:

        typedef struct KEYS {
            RSA  *serv;
            char *serv_pub;
            int   pub_size;
            RSA  *clnt;
        } KEYS;

        KEYS keys;

    Initializing function:

        // Generates and validates the server's key
        /* code for generating the server RSA left out; it's working */

        // Set client exponent
        keys.clnt = 0;
        keys.clnt = RSA_new();
        BN_dec2bn(&keys.clnt->e, RSA_E_S);  // RSA_E_S contains the public exponent

    Problem code (in Network::server_handshake):

        /* An encrypted message is received from the network and decrypted
           into 'buffer' (1024 bytes long) */
        cout << "Assigning clients RSA" << endl;
        // I have verified that 'buffer' contains the proper key
        if (BN_hex2bn(&keys.clnt->n, buffer) < 0) {
            Error("ERROR reading server RSA");
        }
        cout << "clients RSA has been assigned" << endl;

    The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the following error (valgrind output):

        Invalid read of size 8
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)
           by 0x40F23E: Network::server_handshake() (Network.cpp:177)
           by 0x40EF42: Network::startNet() (Network.cpp:126)
           by 0x403C38: main (server.cpp:51)
         Address 0x20 is not stack'd, malloc'd or (recently) free'd

        Process terminating with default action of signal 11 (SIGSEGV)
         Access not within mapped region at address 0x20
           at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8)

    And I don't know why. I'm using the exact same code in the client program, and there it works just fine. Any input is greatly appreciated!

  • CKEditor plugin to close editor

    - by David Lawson
    The problem with destroying the editor from within a plugin is that certain code tries to use the editor after the destructive plugin code runs, when in fact the editor is no longer there, causing errors and instability. I have come up with the following code for a plugin which closes the editor using both async:true and setTimeout:

        var cancelAddCmd = {
            modes : { wysiwyg:1, source:1 },
            async : true,
            exec : function( editor ) {
                if (confirm('Are you sure you want to cancel editing and discard all content?'))
                    setTimeout(function() { editor.destroy(); }, 100);
            }
        };

    The problem I see is that it relies on a dodgy setTimeout call that would likely have mixed results depending on the computer's speed of execution -- 100ms might not have passed by the time it is OK to destroy the editor. Is there a proper way to destroy the editor from within a plugin? Even with async:true and no setTimeout, errors are created. Maybe a possible solution would be to stop any further editor-related code from running afterwards, if that's possible? I have tried using events, like on('afterCommandExec', function(){ editor.destroy(); }) and some others, but that hasn't helped much... there doesn't seem to be an event for when the editor has returned from the call stack that handles the button. And is there really no way to dispose of the CKEditor instance more properly?

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, a small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example, a typical test would be (pseudo-Python), with postDataReady as a signal:

        def testConnection(self):
            "Test connection and posts retrieved"
            def length_test():
                self.assertEqual(len(self.client.post_data), 5)
            self.client.postDataReady.connect(length_test)
            self.client.get_post_list(limit=5)

    Now, unittest will report this test as "ok" when running, regardless of the result (as another slot is being called), even if the asserts fail (I get an unhandled AssertionError). Example when deliberately making the test fail:

        Test connection and posts retrieved ... ok
        [... more tests...]
        OK
        Traceback (most recent call last):
        [...]
        AssertionError: 4 != 5

    The slot inside the test is merely an experiment: I get the same results if it's outside (an instance method). I should also add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the requests; in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?

  • Incoming connections from 2 different Internet connections

    - by RJClinton
    I am hosting a game server on my 32-bit Windows Server 2003 machine. Due to some limitations of my ISP, I have to take two Internet connections in order to satisfy my bandwidth requirement, and I am facing serious problems here. The Internet is supplied via WiMAX: I am given a PoE for each connection, the RJ45 cables from both PoEs (2 connections) are plugged into a Netgear 100 Mbps switch, and another RJ45 cable connects the server to the switch. I have tried to configure the IP addresses and gateways of both connections on the single NIC in the server. The IP ranges and gateways of the two Internet connections are very different. I know that this is not the proper approach, because alternate gateways are meant only as a backup in case of main gateway failure, and the choice of gateway depends on the metric (correct me if I am wrong). I am planning to get a second PCI NIC so that I can connect each Internet connection to its own NIC. If I use this approach, will I be able to accept connections on both NICs? Also, please suggest any other alternatives that I might use.

  • SSL certificates: No Client Key Exchange sent

    - by user334246
    I am trying to access a WCF web service that is using two-way SSL encryption. When I try to call the service, I get:

        System.ServiceModel.Security.SecurityNegotiationException: Could not establish secure
        channel for SSL/TLS with authority 'XXX.xx'. ---> System.Net.WebException: The request
        was aborted: Could not create SSL/TLS secure channel.

    I have tried running Wireshark to see what is sent to and from the server: I see a Client Hello and a Server Hello, but there is no client response to the Server Hello. I was expecting a "Certificate, Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message" packet, but none is sent. I'm thinking it is a problem with the certificate sent by the server -- that somehow my client does not trust it. Here is what I have already tried:

    - I created the certificate through the proper authority, though I could have made a mistake in the certificate request without knowing it.
    - I added the two root certificates to: Trusted Root Certification Authorities, Trusted Publishers, and Trusted People.
    - I also added the client certificate to Trusted People.
    - My colleague has succeeded in establishing the connection on a Windows Server 2008 machine (I'm using 2003, because it is necessary for some odd reason -- don't ask). I can't see any differences in our approaches, so I'm a bit lost.

    Any help would be greatly appreciated.
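
    A common first diagnostic on the client side -- hedged, and for debugging only, never production -- is to hook the certificate validation callback so the exact trust failure gets logged instead of the handshake silently aborting:

        using System;
        using System.Net;
        using System.Net.Security;

        public static class SslDiagnostics
        {
            public static void HookValidationLogging()
            {
                // Logs the server certificate's subject and the specific policy
                // errors; still rejects anything that fails validation.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, certificate, chain, sslPolicyErrors) =>
                    {
                        Console.WriteLine("Subject: " + certificate.Subject);
                        Console.WriteLine("Policy errors: " + sslPolicyErrors);
                        return sslPolicyErrors == SslPolicyErrors.None;
                    };
            }
        }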

  • Wordpress dynamic navigation function for highlighting single post tabs

    - by user269959
    I am trying to write a function that I can reuse in my WordPress themes that will allow me to build robust dynamic navigation menus. Here is what I have so far:

        function tab_maker($page_name, $href, $tabname) {
            // opens the <li> tag so an active class can be inserted if the tab is on the proper page
            echo "<li";

            // checks whether we are on the current page and highlights the tab as active if so
            if (is_page($page_name)) {
                echo " class='current_page_item'>";
            }
            // closes the <li> opening tag if not active
            else {
                echo ">";
            }

            // inserts the link as $href and the tab's visible name as $tabname, then closes the <li>
            echo "<a href=$href>$tabname</a>";
            echo "</li>";
        }

    This code works as expected, except that I can't get it to highlight the tab for a single blog post, since those page names are dynamic. I know about the WordPress function is_single(), which I've used to implement this feature in previous nav menus, but I can't find a way to integrate it into this function. I appreciate any help!

  • Applying business logic to form elements in ASP.NET MVC

    - by Brettski
    I am looking for best practices in applying business logic to form elements in an ASP.NET MVC application. I assume the concepts would apply to most MVC patterns. The goal is to have all the business logic stem from the same place. I have a basic form with four elements:

    - Textbox: for entering data
    - Checkbox: for staff approval
    - Checkbox: for client approval
    - Button: for submitting the form

    The textbox and the two check boxes are fields in a database accessed using LINQ to SQL. What I want to do is put logic around the check boxes governing who can check them and when. Truth table (a little silly, but it's an example):

        when checked    ||  may check Staff  ||  may check Client
        Staff | Client  ||  Staff | Client   ||  Staff | Client
          0   |   0     ||    1   |   0      ||    0   |   1
          0   |   1     ||    0   |   0      ||    0   |   1
          1   |   0     ||    1   |   0      ||    0   |   1
          1   |   1     ||    0   |   0      ||    0   |   1

    There are two security roles, staff and client; a person's role determines who they are, and the roles are maintained in the database along with the current state of the check boxes. So I could simply store the user's role in the view class and enable and disable check boxes based on it, but that doesn't seem proper: it puts logic in the UI to control which actions can be taken. How do I get most of this control down into the model? I mean, I need to control which check boxes are enabled and then check the results in the model when the form is posted, so the model seems the best place for this to originate. I am looking for a good approach to constructing this -- something to follow as I build the application. If you know of some great references which explain these best practices, that is really appreciated too.
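
    As a hedged sketch of pushing that truth table into the model so the view merely asks it questions -- ApprovalForm, Role, and MayCheck are invented names, and the rules below simply transcribe the table:

        public enum Role { Staff, Client }

        public class ApprovalForm
        {
            public bool StaffApproved  { get; set; }
            public bool ClientApproved { get; set; }

            // From the table: the staff box is staff-only and locked once the
            // client has approved; the client box is client-only and always open.
            public bool MayCheck(Role who, bool staffBox)
            {
                if (staffBox)
                    return who == Role.Staff && !ClientApproved;
                return who == Role.Client;
            }
        }

    The view would then enable each checkbox from MayCheck(...), and the same method validates the posted values, so the rule lives in exactly one place.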

  • Why should we call SuppressFinalize when we don't have a destructor

    - by somaraj
    I have a few questions for which I have not been able to get proper answers:

    1. Why should we call SuppressFinalize in the Dispose function when we don't have a destructor?
    2. Dispose and Finalize are used for freeing resources before the object is garbage collected. Whether it is a managed or unmanaged resource, we need to free it; so why do we need a condition inside the Dispose function, passing 'true' when we call this overridden function from IDisposable.Dispose and 'false' when called from the finalizer?

    See the code below, which I copied from the net:

        class Test : IDisposable
        {
            private bool isDisposed = false;

            ~Test()
            {
                Dispose(false);
            }

            protected void Dispose(bool disposing)
            {
                if (disposing)
                {
                    // Code to dispose the managed resources of the class
                }
                // Code to dispose the unmanaged resources of the class
                isDisposed = true;
            }

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }
        }

    What if I remove the boolean protected Dispose function and implement it as below?

        class Test : IDisposable
        {
            private bool isDisposed = false;

            ~Test()
            {
                Dispose();
            }

            public void Dispose()
            {
                // Code to dispose the managed resources of the class
                // Code to dispose the unmanaged resources of the class
                isDisposed = true;

                // Call this since we have a destructor. What if we don't have one?
                GC.SuppressFinalize(this);
            }
        }

  • JMS equivalent in .Net

    - by rauts
    Hi all, I am trying to make a common abstract interface over the messaging infrastructure in our company. The design goals are twofold: (1) hide the complexity from the developers (I know it's not very complex, but still, simplify it further), and (2) make the developers independent of the vendor-specific messaging infrastructure (i.e. it can be MQSeries, EMS, or MSMQ). The obvious option is a WCF layer over the messaging infrastructure -- use the MQSeries custom channel for WCF or the EMS custom channel for WCF -- but both are ruled out due to the lack of proper versions of MQSeries and EMS. Can someone please suggest possible solutions to this problem? One approach I can think of is a custom wrapper along the lines of JMS. Has anyone tried something similar before? Any help would be fantastic. By the way, I am trying to create this wrapper in C# 3.5. Regards
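
    As a hedged sketch of the JMS-style wrapper being floated -- every name here (IMessagingConnectionFactory, IMessageProducer, etc.) is invented for illustration, with concrete MQSeries/EMS/MSMQ implementations living behind the factory:

        using System;

        // Vendor-neutral contracts; one implementation assembly per provider.
        public interface IMessagingConnectionFactory
        {
            IMessagingConnection CreateConnection();
        }

        public interface IMessagingConnection : IDisposable
        {
            IMessageProducer CreateProducer(string destination);
            IMessageConsumer CreateConsumer(string destination);
        }

        public interface IMessageProducer
        {
            void Send(string body);
        }

        public interface IMessageConsumer
        {
            // Push-style delivery keeps application code free of provider polling loops.
            event EventHandler<MessageEventArgs> MessageReceived;
        }

        public class MessageEventArgs : EventArgs
        {
            public string Body { get; set; }
        }

    Application code would only ever see these interfaces; the concrete factory could be chosen from configuration, which is roughly what JMS achieves via JNDI.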

  • Right-Align and Vertical Align label with checkbox/radio button CSS

    - by Jon
    Hi everyone, I'm very close and have this working in Safari, Firefox, and IE8; however, in IE7 the labels and checkboxes do not align vertically. My HTML is:

        <div id="master-container">
            <fieldset id="test">
                <legend>This is a test of my CSS</legend>
                <ul class="inputlist">
                    <li>
                        <label for="test1">Test 1</label>
                        <input name="test1" id="test1" type="checkbox" disabled="disabled"/>
                    </li>
                    <li>
                        <label for="test2">Test 2</label>
                        <input name="test2" id="test2" type="checkbox" disabled="disabled"/>
                    </li>
                </ul>
            </fieldset>
        </div>

    My CSS is:

        html { font-family: Arial, Helvetica, sans-serif; }
        #master-container { width: 615px; font-size: 12px; }
        ul.inputlist { list-style-type: none; }
        ul.inputlist li { width: 100%; margin-bottom: 5px; }
        ul.inputlist li label { width: 30px; text-align: right; margin-right: 7px; float: left; }

    Any suggestions? Thanks!

    EDIT: Based on the suggestion to check the rest of my HTML and CSS, I updated the code above, and it now accurately demonstrates the problem. If I take font-size out of #master-container, everything lines up, but then the text is not the proper size. I tried adding a font-size to ul.inputlist li input, but that didn't help. Any suggestions? Thanks for your help, everyone!

  • Bluetooth on 2.0+

    - by awiden
    I'm doing Bluetooth development for connecting with a PC. I've basically used the BTChatExample and changed the UUID to the standard PC SPP profile. A few questions:

    1. Trying to close a Bluetooth application during a blocking read, by closing the BluetoothSocket, leaves the Bluetooth stack in an unusable state. This can only be fixed by disabling and re-enabling Bluetooth and restarting the application. Checking logcat, you can see that some of the internal methods are failing, leaving an open port. Any information on this?
    2. There seem to be differences in how Bluetooth is implemented on the N1 versus the HTC Legend/Desire, even though both run 2.1 -- do you know anything about this?
    3. Connecting isn't 100% reliable. Sometimes I get a warning saying ~PortSystemContext init: FAILED. This leaves Bluetooth unusable, and a restart is needed.
    4. Am I right in assuming that SPP is the only profile supported for use with the APIs? That's what the docs on BluetoothAdapter say.

    I would love to discuss these issues with a developer and iron out these bugs so that Android can have the good, proper BT support it deserves.

  • Passing a NSArray between classes

    - by Althane
    So, my second question based off of this application I'm teaching myself Objective-C with. I have a data source class, which for now looks mostly like:

        - (id) init {
            if (self = [super init]) {
                listNames = [[NSArray alloc] initWithObjects:
                    @"Grocery", @"Wedding", @"History class", @"CS Class",
                    @"Robotics", @"Nuclear Sciences", @"Video", @"Library",
                    @"Funeral", nil];
                NSLog(@"%@", listNames);
            }
            return self;
        }

    The .h looks as follows:

        @interface MainViewDataSource : NSObject {
            NSArray *listNames;
        }
        @property (nonatomic, retain) NSArray *listNames;
        -(NSArray *)getListNames;
        @end

    So that's where I make it. Now the problem is that when I try to get the array listNames, it returns nothing. The following line:

        NSArray* data = [listData listNames];

    is supposed to put the information in listNames into data, but... it doesn't. Since I'm rather used to Java, I'm betting this is an Objective-C quirk that I don't know how to handle, which is why I'm here asking for help. What's the proper way to pass around NSArrays like this?

  • Windows-mobile app won't run after being closed by Task Manager

    - by pithyless
    I've inherited some Windows Mobile code that I've been bringing up to date. I've come across a weird bug, and I was hoping that even though the description is a bit vague, maybe it will spark someone's memory. Running the app (which is basically a glorified Forms app with P/Invoke GPS code), I switch to the Task Manager and close the app via End Task. It seems to exit fine (no errors, and it disappears from the Task Manager). Unfortunately, the app refuses to start a second time until I reboot the phone or reinstall the CAB. What's worse: this bug is reproducible on an HTC Diamond, but everything works fine (i.e. the app can run again after End Task) on an HTC HD2. The only thing I can think of is some kind of timing race between a Dispose() and the Task Manager. Any ideas? I'm also thinking of a workaround: I do have a working "Exit Application" routine that correctly cleans up the app; can I catch the End Task event in the C# code in order to complete a proper cleanup? Maybe I'm just missing the pain point... all ideas welcome :)

  • Visual Studio project using the Quality Center API remains in memory

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center, using their (insert expletive) COM API to connect to the server. An interop wrapper gets created automatically by Visual Studio. My solution has two projects: the DLL and a tester application -- essentially a form with buttons that call functions in the DLL. Everything works well: I can create defects, update them, and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box) and then close the main form, Visual Studio still shows the solution as running and I have to stop it manually. I've managed to pinpoint a single function in my code that, when called, leaves the solution "hung"; if I don't call it, everything closes well. It's a call to a property on the TDC object, get_VisibleProjects, that returns a List (not the .NET one, but a type in the COM library). I just iterate over it and return a proper list (that I later use to fill the combo box):

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
            {
                projects.Add(project);
            }
            return projects;
        }

    My assumption is that something gets retained in memory. If I run the EXE outside of Visual Studio, it closes -- but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried:

    - Getting the list into a variable and setting it to null at the end of the function
    - Adding a destructor to the class and nulling the tdc object
    - Stepping through the tester application all the way out, to when the form closes and the Main function ends -- it closes, but Visual Studio still shows it running

    Thanks for your assistance!
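
    One hedged experiment for COM interop situations like this (not verified against the Quality Center API specifically): release the returned COM collection explicitly so its runtime-callable wrapper doesn't outlive the form. A sketch of the same method with that change:

        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            var comList = this.tdc.get_VisibleProjects(qcDomain);
            try
            {
                foreach (string project in comList)
                {
                    projects.Add(project);
                }
            }
            finally
            {
                // Drop the RCW deterministically instead of waiting for the GC;
                // GC.Collect() + GC.WaitForPendingFinalizers() is the blunter variant.
                Marshal.ReleaseComObject(comList);
            }
            return projects;
        }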

  • Using Hibernate's ScrollableResults to slowly read 90 million records

    - by at
    I simply need to read each row in a table in my MySQL database using Hibernate and write a file based on it. But there are 90 million rows and they are pretty big. So it seemed like the following would be appropriate:

        ScrollableResults results = session
            .createQuery("SELECT person FROM Person person")
            .setReadOnly(true)
            .setCacheable(false)
            .scroll(ScrollMode.FORWARD_ONLY);

        while (results.next())
            storeInFile(results.get()[0]);

    The problem is that the above will try to load all 90 million rows into RAM before moving on to the while loop... and that will kill my memory with OutOfMemoryError: Java heap space exceptions :(. So I guess ScrollableResults isn't what I was looking for? What is the proper way to handle this? I don't mind if the while loop takes days (well, I'd love it not to). I guess the only other way to handle this is to use setFirstResult and setMaxResults to iterate through the results and just use regular Hibernate results instead of ScrollableResults. That feels like it will be inefficient, though, and will start taking a ridiculously long time when I'm calling setFirstResult on the 89 millionth row...

    UPDATE: setFirstResult/setMaxResults doesn't work; it turns out to take an unusably long time to get to the offsets, as I feared. There must be a solution here! Isn't this a pretty standard procedure?? I'm willing to forgo Hibernate and use JDBC or whatever it takes.

    UPDATE 2: The solution I've come up with, which works OK but not great, is basically of the form:

        select * from person where id > <offset> and <other_conditions> limit 1

    Since I have other conditions, even with everything in an index, it's still not as fast as I'd like it to be... so I'm still open to other suggestions.
