Search Results

Search found 6682 results on 268 pages for 'choose'.


  • how to show integer values in JComboBox?

    - by Edan
    Hello, I would like to know how to set up a JComboBox that contains integer values that I can save. Here are the definitions of the values: public class Item { private String itemDesc; private int itemType; public static int ENTREE=0; public static int MAIN_MEAL=1; public static int DESSERT=2; public static int DRINK=3; private float price; int[] itemTypeArray = { ENTREE, MAIN_MEAL, DESSERT, DRINK }; Object[][] data = {{itemDesc, new Integer(itemType), new Float(price)}}; . . . } Now I want to add a JComboBox from which the user will choose one of the items (ENTREE, MAIN_MEAL...), so that I can then store the selection as an Integer. I know the JComboBox needs to be created something like this: JComboBox combo = new JComboBox(itemTypeArray.values()); JOptionPane.showMessageDialog(null, combo, "Please Enter Item Type", JOptionPane.QUESTION_MESSAGE); What am I doing wrong?
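
    A side note on the snippet above: itemTypeArray is an int[], which has no values() method, so that call will not compile. One minimal way to get an integer code back from a combo box is to fill it with display labels and treat the selected index as the code; the labels and class name below are illustrative, not taken from the original post.

        import javax.swing.JComboBox;
        import javax.swing.JOptionPane;

        public class ItemTypeDialog {
            // Display labels; each label's position doubles as the integer code
            // (0 = ENTREE, 1 = MAIN_MEAL, 2 = DESSERT, 3 = DRINK).
            private static final String[] ITEM_TYPE_LABELS = {
                "Entree", "Main meal", "Dessert", "Drink"
            };

            public static void main(String[] args) {
                JComboBox combo = new JComboBox(ITEM_TYPE_LABELS);
                int option = JOptionPane.showConfirmDialog(
                        null, combo, "Please Enter Item Type",
                        JOptionPane.OK_CANCEL_OPTION, JOptionPane.QUESTION_MESSAGE);
                if (option == JOptionPane.OK_OPTION) {
                    int itemType = combo.getSelectedIndex(); // the value to save
                    System.out.println("Selected item type code: " + itemType);
                }
            }
        }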


  • Modeling multiple polymorphic relationships using Hibernate

    - by f-potter
    Ruby on Rails has polymorphic relations, which are really useful for implementing functionality such as commenting, tagging and rating, to name a few. We can have a comment, tag or rating class which has a many-to-one polymorphic relationship with a commentable, taggable and rateable object. Also, a given domain object can choose to implement any combination of such relations. So it can, for example, be commentable, taggable and rateable at the same time. I couldn't think of a straightforward way to duplicate this functionality in Hibernate. Ideally, there would be a Comment class which has a many-to-one relationship with a Commentable class, and a Commentable class would conversely have a one-to-many relationship with Comments. It would be ideal if the concrete domain classes could inherit from a number of such classes, say Commentable and Taggable. Things seem a little complicated, as a Java class can only extend one other class, and some code might end up being duplicated across a number of classes. I wanted to know: what are the best practices for modeling such relationships neatly and concisely using Hibernate?
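
    One common workaround is to make Commentable (and Taggable, Rateable) an interface rather than a base class, and store the target entity's type and id on the Comment itself, Rails-style; Hibernate's @Any mapping is another route to the same idea. The sketch below uses invented class and column names and is meant as a starting point, not a statement of best practice.

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;

        // Marker interface instead of a base class, so a domain class can be
        // Commentable and Taggable at the same time despite single inheritance.
        interface Commentable {
            Long getId();
        }

        @Entity
        public class Comment {
            @Id
            @GeneratedValue
            private Long id;

            private String body;

            // Rails-style polymorphic target: the owning entity's type and id.
            @Column(name = "commentable_type")
            private String commentableType;

            @Column(name = "commentable_id")
            private Long commentableId;

            public void attachTo(Commentable target) {
                this.commentableType = target.getClass().getSimpleName();
                this.commentableId = target.getId();
            }

            // getters and setters omitted
        }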


  • What IDE setup and workflow is used for OSGi development?

    - by Falx
    I made quite a few simple OSGi test projects in Eclipse RCP. My typical workflow would always be: Make 3 different projects: APIproject, Clientproject and Serverproject. Edit the MANIFEST.MF of APIproject to export the API package. Edit the MANIFEST.MF files of Clientproject and Serverproject to add the required API package. Choose "Run as..." "Plugin Framework". The OSGi console starts in Eclipse and everything seems to work. I also tried wiring things up using Declarative Services, which worked well like this too. Now recently I wanted to try out iPOJO. The problem is that I get the feeling I've been doing my OSGi development the wrong way. Could it be that I should instead make one project and make it work as if no OSGi were involved, and then afterwards just export each package to its own bundle by means of (for instance) the BNDL tool? Should development be done in a normal Eclipse (Java, not RCP), or any other Java IDE for that matter? So that's why I have these questions: What IDE setup is normally used to develop OSGi with iPOJO? And what is the normal workflow to be used when developing OSGi projects (maybe with iPOJO)?
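
    On the workflow question, the approach shown in iPOJO's own tutorials (though by no means the only one) is to write plain annotated Java classes in whatever Java IDE you like and let the build - for example the maven-ipojo-plugin - generate the bundle metadata and run the iPOJO manipulation step. A sketch with invented service and class names:

        import org.apache.felix.ipojo.annotations.Component;
        import org.apache.felix.ipojo.annotations.Instantiate;
        import org.apache.felix.ipojo.annotations.Provides;
        import org.apache.felix.ipojo.annotations.Requires;
        import org.apache.felix.ipojo.annotations.Validate;

        // Invented service interface for illustration only.
        interface GreetingService {
            String greet(String name);
        }

        // Provider: published as an OSGi service once the bundle has been
        // processed by the iPOJO manipulator at build time.
        @Component
        @Provides
        @Instantiate
        class ConsoleGreeter implements GreetingService {
            public String greet(String name) {
                return "Hello, " + name;
            }
        }

        // Consumer: iPOJO injects a matching GreetingService instance.
        @Component
        @Instantiate
        class GreetingClient {
            @Requires
            private GreetingService greeting;

            @Validate
            public void start() {
                System.out.println(greeting.greet("OSGi"));
            }
        }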


  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good, or even decent, evaluation function? The main challenges are that the same pieces are always on the board, so the usual material function won't change based on position, and the game has been played less than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS. I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas? Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.
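
    A generic starting point, not specific to this game, is a weighted sum of hand-picked positional features - pieces already on the goal squares, remaining distance to them, mobility - with the weights tuned later by hill-climbing or self-play. Everything in this sketch (the interfaces, the features, the weights) is a placeholder:

        import java.util.List;

        // Placeholder game-model interfaces; the real project supplies these.
        interface Player {}
        interface Move {}
        interface Board {
            int piecesOnSpecialSquares(Player p);        // pieces already on goal squares
            int totalDistanceToSpecialSquares(Player p); // sum of remaining distances
            List<Move> legalMoves(Player p);
        }

        public class Evaluator {
            // Hand-picked weights; pure guesses to be tuned, e.g. by self-play.
            private static final double W_GOAL = 10.0;
            private static final double W_DISTANCE = 1.0;
            private static final double W_MOBILITY = 0.1;

            public double evaluate(Board board, Player me, Player opponent) {
                return side(board, me) - side(board, opponent);
            }

            private double side(Board board, Player p) {
                double score = 0.0;
                score += W_GOAL * board.piecesOnSpecialSquares(p);
                score -= W_DISTANCE * board.totalDistanceToSpecialSquares(p);
                score += W_MOBILITY * board.legalMoves(p).size();
                return score;
            }
        }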


  • Is the Windows dev environment worth the cost?

    - by MCS
    I recently made the move from Linux development to Windows development. And as much of a Linux enthusiast as I am, I have to say - C# is a beautiful language, Visual Studio is terrific, and now that I've bought myself a trackball my wrist has stopped hurting from using the mouse so much. But there's one thing I can't get past: the cost. Windows 7, Visual Studio, SQL Server, Expression Blend, ViEmu, Telerik, MSDN - we're talking thousands for each developer on the project! You're definitely getting something for your money - my question is, is it worth it? [Not every developer needs all the aforementioned tools - but have you ever heard of anyone writing C# code without Visual Studio? I've worked on pretty large software projects in Linux without having to pay for any development tool whatsoever.] Now obviously, if you're already a Windows shop, it doesn't pay to retrain all your developers. And if you're looking to develop a Windows desktop app, you just can't do that in Linux. But if you were starting a new web application project and could hire developers who are experts in whatever languages you want, would you still choose Windows as your development platform despite the high cost? And if so, why?


  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several copies of the same ASP.NET application (one per customer), based on our custom framework (libraries). Each application uses its own database (the Initial Catalog in connection-string terms). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture: (1) Can the workflow database be shared by all the applications? (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host? (3) How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How would the workflow engine then identify the applications? Both cases probably require some application identifier as a parameter in the API calls? (4) We would also like to store some information in the application databases based on the workflow states. Is that possible? Thanks for suggestions!


  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {All have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, I shall need various types of SQL queries. This is my initial design: Product {ID, Code, Name, IsActive} ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent} SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive} SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent} Depot {ID, Code, Name, IsActive} Outlet {ID, Code, Name, AreaID, IsActive} AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive} DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...} Now, apart from indexing, how can I normalize my DailySales table to get a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice - just advice regarding the DailySales table. Is there any way to break up this particular table to achieve granularity?


  • FindControl in DataList Edit Mode

    - by Doug
    As a new .NET/C# web beginner, I always get tripped up when I try to use FindControl. Blam - flat on my face. Here is my current FindControl problem: I have an .aspx page and form, then an ajax UpdatePanel; inside it there is my DataList (DataList1), which has an EditItemTemplate containing the following: <EditItemTemplate> <asp:Label ID="thumbnailUploadLabel" runat="server" text="Upload a new thumbnail image:"/><br /> <asp:FileUpload ID="thumbnailImageUpload" runat="server" /> <asp:Button ID="thunbnailImageUploadButton" runat="server" Text="Upload Now" OnClick="thumbnailUpload"/><br /> </EditItemTemplate> In my C# code-behind I have the OnClick code for the FileUpload object: protected void thumbnailUpload(object s, EventArgs e) { if (thumbnailImageUpload.HasFile) { //get name of the file & upload string imageName = thumbnailImageUpload.FileName; thumbnailImageUpload.SaveAs(MapPath("../../images/merch_sm/" + imageName)); //let'em know that it worked (or didn't) thumbnailUploadLabel.Text = "Image " + imageName + " has been uploaded."; } else { thumbnailUploadLabel.Text = "Please choose a thumbnail image to upload."; } } So of course I'm getting "Object reference not set to an instance of an object" for the FileUpload and the Label. What is the correct syntax to find these controls before dealing with them in the OnClick event? The only way I've used FindControl is something like: Label thumbnailUploadLabel = DataList1.FindControl("thumbnailUploadLabel") as Label; But of course this is throwing the "Object reference not set to an instance of an object" error. Any help is very much appreciated. (I've also seen the 'recursive' code out there that is supposed to make using FindControl easier. Ha! I'm so green at C# that I don't even know how to incorporate those into my project.) Thanks to all for taking a look at this.


  • How to handle refunds or rebates via a payment processor?

    - by Tai Squared
    I need to handle online payments and am trying to choose a payment processor. One requirement is to handle refunds and rebates to the customer. These won't always be at the time of sale, and not for the entire amount of the purchase. Is this something all payment processors handle? I don't want to have to do this manually, as there may be many rebates and they may be for relatively small amounts. I see PayPal has a refund API, but other parts of their site talk about sending a refund within 60 days. Is this also required by the API? Amazon FPS also has a refund API that seems a bit more flexible. The Google Checkout refund has an amount field, but it's unclear to me whether you can do a partial refund, as the description reads "The refund-order command instructs Google Checkout to refund the buyer for a particular order." What are some things to look out for when looking for a payment processor that can handle rebates and refunds? Is there always a time limit on issuing these refunds? Is using a merchant account better for this type of process? I was hoping to avoid that due to the increased cost and complexity, but would consider it if it meets all of my requirements. Update: It appears the refund process is fairly simple and handled by all processors. Is there any additional information on rebates? I would like to avoid a process of sending live checks to customers, but I will have to send rebates in some small amounts that may be a few months after the initial purchase.


  • Cocoa/AppleScript move file

    - by bogdan
    I have a list of file paths and a destination path. I need something (AppleScript, Cocoa) that will move the files from one location to another. I first tried using the following AppleScript, just to see what happens: set the_folder to (choose folder) tell application "Finder" move selection to the_folder end tell The problem is that it just blindly tries to move a file, nothing like the way Finder actually moves files (i.e. if a file with that name already exists, the AppleScript just throws an error, while Finder would ask you if you want to replace the file). The solution I came up with involved NSFileManager. I won't post the code because it's quite long, but basically I just check whether the file already exists before trying to move it, and if it exists an NSAlert with Replace/Cancel buttons appears. I have 2 remaining problems: Authorization - if you try to do something to files where you don't have access, the Finder would ask you to authorize. My code just fails... Moving to external drives - when you try to move a file to a different drive, NSFileManager copies the file and then deletes the original. The problem is that NSFileManager doesn't provide anything I could use to display a progress indicator of what's happening during the copy. Is there anything I could use that is able to move files without these problems? The way I see it, I'm pretty much stuck with checking whether the files are writable by the current user and authorizing NSFileManager if not (from my understanding of the Authorization Services, this will be quite hard to implement). Oh, and I would also need to check whether the destination is on the same drive and, if not, implement something with FSCopyObjectAsync so that it shows a progress indicator... Thanks!


  • Sys.WebForms.PageRequestManagerServerErrorException: .... The status code returned from the server was: 404

    - by webnoob
    Hi All, I have seen a few posts regarding this issue, but none specific to my problem, and I have no idea what I need to do to debug this. I have some combo boxes on an aspx page; when I select a value from the first one, it fills the second with values, and so on with the third and fourth. This works with no problems until I wrap an ASP.NET UpdatePanel around the combo boxes and try to "ajaxify" the whole process so the page isn't dancing around. The exact error I get is: Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 404. Some things to note: I am using URL rewriting - this is what I think is causing the problem. The error occurs whenever I make a selection for a SECOND time. This means that I could select a value from the first combo box and get the same error (so it is happening on the second postback - no matter which combo box it's from). I have tried setting EnablePartialRendering="false" on the ScriptManager, but as I said, it works when not using ajax, so I don't know how to debug the issue. My server is Windows 2008 running IIS7 with ASP.NET 2.0. I would really appreciate your help. Thanks in advance.


  • Choosing between .NET Service Bus Queues vs Azure Queue Service

    - by ChrisV
    Just a quick question regarding an Azure application. If I have a number of Web and Worker roles that need to communicate, the documentation says to use the Azure Queue Service. However, I've just read that the new .NET Service Bus now also offers queues. These look to be more powerful, as they appear to offer a much more detailed API. Whilst the .NET Service Bus looks more interesting, it has a couple of issues that make me wary of using it in a distributed application (for example, queue expiration... if I cannot guarantee that a queue will be renewed on time, I may lose it all!). Has anyone had any experience using either of these two technologies and could give any advice on when to choose one over the other? I suspect that whilst the Service Bus looks more powerful, as my use case is really just enabling Web/Worker roles to communicate with each other, the Azure Queue Service is what I'm after. But I'm really just looking for confirmation of that before programming myself into a corner :-) Thanks in advance. UPDATE: Have read up about the two systems over the break. It definitely looks like the .NET Service Bus is designed more specifically for integrating systems, rather than providing a general-purpose reliable messaging system. Azure Queues are distributed, and so reliable and scalable in a way that .NET Service Bus queues are not, and thus more suitable for code hosted within Azure itself. Thanks for the responses.


  • ImpersonateLoggedOnUser and starting a new process that uses ocx fails.

    - by markus
    I am writing a C++ Windows application (A) that uses LogonUser, LoadUserProfile and ImpersonateLoggedOnUser to gain the rights of another user (Y). Meaning A starts under the user that is logged on at the workstation (X). If the user wants to elevate his rights, he can just press a button and log on as another user without having to log himself out of Windows and back in. The situation now is (according to the return values of the functions): LogonUser works, LoadUserProfile works and ImpersonateLoggedOnUser works as well. After the impersonation I start another process. This process is an application (B) that needs an OCX control. This fails, and the application tells me that the .ocx file is not properly installed. The thing is, if I start B directly as the user that is logged on to the machine (X), it works. If I start B directly as the user (Y) to which I want to elevate my rights using A, it works. If I am logged in as (X) and choose "run as" (Y) in Explorer, it works! Do you know which steps I need to take to do the same as the "run as" dialog from Windows?


  • DataGrid: dynamic DataTemplate for dynamic DataGridTemplateColumn

    - by Lukas Cenovsky
    I want to show data in a DataGrid where the data is a collection of public class Thing { public string Foo { get; set; } public string Bar { get; set; } public List<Candidate> Candidates { get; set; } } public class Candidate { public string FirstName { get; set; } public string LastName { get; set; } ... } where the number of candidates in the Candidates list varies at runtime. The desired grid layout looks like this: Foo | Bar | Candidate 1 | Candidate 2 | ... | Candidate N. I'd like to have a DataTemplate for each Candidate, as I plan on changing it at runtime - the user can choose what info about a candidate is displayed in different columns (candidate is just an example; I have a different object). That means I also want to change the column templates at runtime, although this can be achieved by one big template and collapsing its parts. I know of two ways to achieve my goals (both quite similar): Use the AutoGeneratingColumn event and create the Candidates columns. Add Columns manually. In both cases I need to load the DataTemplate from a string with XamlReader. Before that I have to edit the string to change the binding to the wanted Candidate. Is there a better way to create a DataGrid with an unknown number of DataGridTemplateColumns? Note: This question is based on "dynamic datatemplate with valueconverter".


  • .NET proxy detection

    - by Ziplin
    I am having an issue with .NET detecting the proxy settings configured through Internet Explorer. I'm writing a client application that supports proxies, and to test I set up an array of 9 Squid servers to support various authentication methods for HTTP and HTTPS. I have a script that updates IE to whichever configuration I choose (which proxy, detection via "Auto", PAC, or hardcode). I have tried the 3 methods below to detect the IE configuration through .NET. On occasion I notice that .NET picks up the wrong set of proxy servers. IE has the correct settings, and if I browse the web with IE, I can see via Wireshark that I am hitting the correct servers. WebRequest.GetSystemWebProxy().GetProxy(destination); GlobalProxySelection.Select.GetProxy(destination); WebRequest.DefaultWebProxy Here are the observations I have: My script sets a PAC file on a webserver and updates the configuration in IE, then clears IE's cache. .NET seems to get "stuck" on a certain proxy configuration, and I have to set another configuration for .NET to realize there was a change. Occasionally it seems to pick some random set of servers (I'm sure they're not random, just a set of servers I used once that are in some cached PAC file or something). As in, I will check the proxy for the destination "https://www.secure.com" and, given how I have IE configured, expect to get "http://squidserver:18", but instead it will return "http://squidserver:28" (port 18 runs NTLM, 28 runs without authentication). All the Squid servers work. This does not appear to be an issue on XP, only Vista, 2003, and Windows 7. Hardcoding the proxy servers in IE ALWAYS works. Time always solves the issue - if I leave the computer for about 20 or 30 minutes and come back, .NET picks up the correct proxy settings, as if a cached PAC script expired.


  • Need some help understanding this problem about maximizing graph connectivity

    - by Legend
    I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain visually. The problem I am trying to solve: 1. Constructing the dependency graph. Given the connectivity of the graph and a metric that determines how well a node depends on another, order the dependencies. For instance, I could put in a few rules saying that node 3 depends on node 4, node 2 depends on node 3, node 3 depends on node 5. But because the final rule is not "valuable" (again, based on the same metric), I will not add that rule to my system. 2. Executing the requests in order. Once I have built the dependency graph, execute the list in an order that maximizes the final connectivity. I am not sure if this is really a problem, but I somehow have a feeling that there might exist more than one order, in which case it is required to choose the best order. First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem, but I am a little confused at the moment. Any suggestions? PS: I am a little confused about the problem myself, so please don't flame me for that. If any clarifications are needed, I will try to update the question.


  • Lucene (.NET) Document structure and performance suggestions.

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not convinced it's the right choice here. My problem is that query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199))). Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that...
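
    One thing worth trying is building the nested MUST/SHOULD query programmatically rather than through the query parser; it will not fix an inherently expensive query, but it makes the clause structure explicit and easy to vary while profiling. The sketch below uses the Java Lucene API of that era (the Lucene.NET classes carry the same names) and simply echoes the first two groups of the query quoted above:

        import org.apache.lucene.index.Term;
        import org.apache.lucene.search.BooleanClause.Occur;
        import org.apache.lucene.search.BooleanQuery;
        import org.apache.lucene.search.Query;
        import org.apache.lucene.search.TermQuery;

        // Builds DataField:((+1 +(2 3)) (+75 +(3 5 52))) by hand.
        public class QueryBuilderSketch {
            private static Query term(String value) {
                return new TermQuery(new Term("DataField", value));
            }

            // One "(+must +(should1 should2 ...))" group.
            private static Query group(String must, String... shouldValues) {
                BooleanQuery inner = new BooleanQuery();
                for (String v : shouldValues) {
                    inner.add(term(v), Occur.SHOULD);
                }
                BooleanQuery clause = new BooleanQuery();
                clause.add(term(must), Occur.MUST);
                clause.add(inner, Occur.MUST);
                return clause;
            }

            public static Query build() {
                BooleanQuery top = new BooleanQuery();
                top.add(group("1", "2", "3"), Occur.SHOULD);
                top.add(group("75", "3", "5", "52"), Occur.SHOULD);
                return top;
            }
        }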


  • Visual Studio opening .xml files in Notepad

    - by Portman
    So I'm happily working on a project making heavy use of custom .xml configuration files this morning. All of a sudden, whenever I double-click an .xml file in Solution Explorer, it opens in Notepad instead of within Visual Studio. Thinking that it was the Windows file associations, I right-clicked on a file in Explorer, selected Open With > Choose Defaults, and selected Visual Studio 2008. But the problem remains -- now when I open a file from Explorer, Visual Studio opens, then it opens Notepad. Needless to say, this is very frustrating, and Google is not much help. Has anyone else ever had this problem, and what did you do about it? Notes: This only happens for .xml files. Other text files (.config, .txt) open within Visual Studio just fine. This has nothing to do with Windows file associations, as Windows opens up VS2008 just as it should. This is some crazy problem internal to Visual Studio. I've also tried Tools > Options > General > Restore File Associations. No luck. Nothing is present in Tools > Options > Text Editor > File Extension. This is what my "Open With" menu looks like for .xml files. As you can see, "XML Editor" is set to the default.


  • What is the best anti-crack scheme for your trial or subscription software?

    - by gmatt
    Writing code takes time and effort, and just like any other human being we need to make a living from it (save for the few who are actually self-sustaining). Here are 3 general schemes for making a living: Independent developers can offer a trial-then-purchase scheme. An alternative is an open-source base application with paid extensions. A last (probably least popular with customers) scheme is to enforce some kind of subscription; then the price of the software pales in comparison to the long-term subscription fees. So, my question is a hypothetical one. Suppose that you invest thousands of hours into developing an application. Now suppose you can choose any one of the three options to make a living off this application - or any other option you want - and suppose you have a very real fear of losing 80% of your revenue to a cracked version if one can be made. To be clear, this application does not require the internet to perform all its useful functions; that is, your application is a prime candidate to be a cracked release on some website. Which option would you feel most comfortable with for defending yourself against this possible situation, and briefly, why would that option be the best?


  • Adding a Taxonomy Filter to a Custom Post Type

    - by ken
    There is an amazing conversation from about two years ago on the WordPress Answers site where a number of people came up with good solutions for adding a taxonomy filter to the admin screen for your custom post types (see URL for the screen I'm referring to): http://[yoursite.com]/wp-admin/edit.php?s&post_status=all&post_type=[post-type] Anyway, I loved Michael's awesome contribution but in the end used Somatic's implementation with the hierarchy option from Manny. I wrapped it in a class - cuz that's how I like to do things -- and it ALMOST works. The dropdown appears, but the values in the dropdown all look in the $_GET array for the taxonomy slug-name that you are filtering by. For some reason I don't get anything. I looked at the HTML of the dropdown and it appears OK to me. Here's a quick screenshot for some context: You can tell from this that my post type is called "exercise" and that the taxonomy I'm trying to use as a filter is "actions". Here then is the HTML surrounding the dropdown list: <select name="actions" id="actions" class="postform"> <option value="">Show all Actions</option> <option value="ate-dinner">Ate dinner(1)</option> <option value="went-running">Went running(1)</option> </select> I have also confirmed that all of the form elements are within the form part of the DOM. And yet if I choose "Went running" and click on the filter button, the URL query string comes back without ANY reference to what I've picked. More explicitly, the page first loads with the following URL: /wp-admin/edit.php?post_type=exercise and after pressing the filter button while having picked "Went Running" as an option from the actions filter: /wp-admin/edit.php?s&post_status=all&post_type=exercise&action=-1&m=0&actions&paged=1&mode=list&action2=-1 Actually, you can see a reference to an "actions" variable, but it's set to nothing, and as I now look in detail it appears that the moment I hit "filter" on the page it resets the filter dropdown to the default "Show all Actions". Can anyone help me with this?


  • Eclipse - Import existing multi-repo CVS project folder

    - by iQ
    Hey guys, wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working on Ubuntu Linux, the project folder is located on a mounted shared network drive, and I have installed the "Eclipse CVS Client" plug-in for my version of Eclipse (Helios). I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have tried the following: Created a new project, selected existing source, located my project folder and clicked OK to finish creating. In the end the CVS files weren't automatically read. Did the same as above, and after project creation I went to the option "Project menu > Team > Share Project"; it asks me to choose a repository and doesn't automatically find the CVS information in the subfolders. If you're wondering, I have set up both repositories in my Eclipse and can browse them through the CVS browser. My project directory layout is like this: +-Project Folder (no CVS folder at this level) +---Repo A folder +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo A + +---Repo B folder +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo B + +-(couple of random files, not in CVS) Thanks for the help


  • Using ember-resource with couchdb - how can I save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the _rev attribute in every document for optimistic locking, this attribute has to be updated in my application after saving the data to CouchDB. My idea for implementing this is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this: // Since we use CouchDB, we have to make sure that we invalidate and re-fetch // every document right after saving it. CouchDB uses an optimistic locking // scheme based on the attribute "_rev" in the documents, so we reload it in // order to have the correct _rev value. didSave: function() { this._super.apply(this, arguments); this.forceReload(); }, // reload resource after save is done, expire to make reload really do something forceReload: function() { this.expire(); // Everything OK up to this location Ember.run.next(this, function() { this.fetch() // Sub-document is reset here, and *not* refetched! .fail(function(error) { App.displayError(error); }) .done(function() { App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev')); }); }); } This works in most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the in-memory model after the fetch, although the _rev attribute is correct (as well as all attributes of the main object). Here is part of my object definition: App.TaskDefinition = App.Resource.define({ url: App.dbPrefix + 'courseware', schema: { id: String, _rev: String, type: String, name: String, comment: String, task: { type: 'App.Task', nested: true } } }); App.Task = App.Resource.define({ schema: { id: String, title: String, description: String, startImmediate: Boolean, holdOnComment: Boolean, ..... // other attributes and sub-objects } }); Any ideas where the problem might be? Thanks a lot for any suggestions! Kind regards, Thomas


  • Putting a contact number into a field

    - by dfilkovi
    I have this code with one button that lets me choose an entry from the contacts and passes that chosen contact to the onActivityResult function. My question is: how do I select the data of that single contact, when all that is passed is an Intent in the data variable? That data variable, if converted to a string, shows something like "dat: content://contacts/people/4", so I see that the selected contact is somehow passed, but what now? How do I get its data? Also, all I found by googling was examples using the deprecated People class, so I don't know how to use the new classes. Please help. Thank you. public class HelloAndroid extends Activity { private static final int CONTACT_ACTIVITY = 100; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); final Button contactButton = (Button) findViewById(R.id.pick_contact_button); contactButton.setOnClickListener(new OnClickListener() { public void onClick(View v) { Uri uri = Uri.parse("content://contacts/people"); Intent contacts_intent = new Intent(Intent.ACTION_PICK, uri); startActivityForResult(contacts_intent, CONTACT_ACTIVITY); } }); } public void onActivityResult(int requestCode, int resultCode, Intent data){ super.onActivityResult(requestCode, resultCode, data); switch(requestCode){ case(CONTACT_ACTIVITY): { if(resultCode == Activity.RESULT_OK) { alertText(data.toString()); } break; } } } }
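
    A sketch of one way to read the picked contact back in onActivityResult, assuming the pick intent is switched to ContactsContract.Contacts.CONTENT_URI (the newer, non-deprecated contacts provider, API level 5+) and reusing the poster's alertText helper; the extra imports needed are android.database.Cursor, android.net.Uri and android.provider.ContactsContract:

        @Override
        public void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == CONTACT_ACTIVITY && resultCode == Activity.RESULT_OK) {
                Uri contactUri = data.getData(); // e.g. content://com.android.contacts/contacts/4
                Cursor contact = getContentResolver().query(contactUri, null, null, null, null);
                if (contact != null && contact.moveToFirst()) {
                    String id = contact.getString(
                            contact.getColumnIndexOrThrow(ContactsContract.Contacts._ID));
                    String name = contact.getString(
                            contact.getColumnIndexOrThrow(ContactsContract.Contacts.DISPLAY_NAME));

                    // Phone numbers live in a separate table keyed by the contact id.
                    Cursor phones = getContentResolver().query(
                            ContactsContract.CommonDataKinds.Phone.CONTENT_URI, null,
                            ContactsContract.CommonDataKinds.Phone.CONTACT_ID + " = ?",
                            new String[] { id }, null);
                    if (phones != null && phones.moveToFirst()) {
                        String number = phones.getString(phones.getColumnIndexOrThrow(
                                ContactsContract.CommonDataKinds.Phone.NUMBER));
                        alertText(name + ": " + number);
                    }
                    if (phones != null) phones.close();
                }
                if (contact != null) contact.close();
            }
        }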


  • Easiest RPC client method in PHP

    - by T.K.
    I've been asked to help a friend's company bring up a web application. I have very limited time and I reluctantly accepted the request, on one condition. As most of the logic goes on in the back-end, I suggested that I would finish the complete back-end only, allowing a front-end developer to simply interface with my back-end. I plan to do the back-end in Java EE or Python (with Pylons); it does not really matter at this point. I plan to have my back-end completely ready and unit-tested, so that my input will hardly be needed after my work is done. I know they have a PHP programmer, but as far as I can tell he is a real rookie. I want him to interface with my back-end's services in the easiest possible way, with no way for him to "stuff" it up. It's basically a CRUD-only application. I could implement the back-end as accessible through a web service such as XML-RPC or SOAP. Even a RESTful API would be possible. However, my main objective is to make something that a complete "noob" PHP programmer can easily interface with without getting confused. Preferably I do not even want to talk to him, because I generally have an extremely busy schedule, and doing "support calls" is not something I am willing to do. Which approach should I choose? I would welcome any suggestions and input!
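
    If the Java EE route is taken, a plain REST/JSON interface is arguably the hardest thing for a PHP beginner to get wrong, since he can drive it with nothing more than cURL or file_get_contents. A minimal JAX-RS sketch of such a CRUD resource; the "notes" resource and its in-memory storage are invented purely for illustration:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.DELETE;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Hypothetical CRUD resource: POST /notes, GET /notes/{id}, DELETE /notes/{id}.
        @Path("/notes")
        @Produces(MediaType.APPLICATION_JSON)
        @Consumes(MediaType.APPLICATION_JSON)
        public class NoteResource {
            private static final Map<Long, String> NOTES = new ConcurrentHashMap<Long, String>();
            private static final AtomicLong IDS = new AtomicLong();

            @GET
            @Path("/{id}")
            public String get(@PathParam("id") long id) {
                return NOTES.get(id);
            }

            @POST
            public String create(String body) {
                long id = IDS.incrementAndGet();
                NOTES.put(id, body);
                return "{\"id\": " + id + "}";
            }

            @DELETE
            @Path("/{id}")
            public void delete(@PathParam("id") long id) {
                NOTES.remove(id);
            }
        }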


  • why won't Eclipse use the compiler I specify for my project?

    - by codeman73
    I'm using Eclipse 3.3. In my project, I've set the compiler compliance level to 5.0. In the build path for the project, I've added the Java 1.5 JDK in the Installed JREs section and am referencing that system library in my project build path. However, I'm getting compile errors for a class that implements PreparedStatement, for not implementing abstract methods that exist only in the Java 1.6 PreparedStatement - specifically, the methods setAsciiStream(int, InputStream, long) and setAsciiStream(int, InputStream). Strangely enough, it worked when we were compiling against Java 1.4, which the code was originally written for. We added the JREs for Java 1.4, referenced that system library in the project, set the project's compiler level to 1.4, and it works fine. But when I make the same changes to try to point to Java 5.0, it instead uses Java 6. Any ideas why? I wrote a similar question earlier, here: http://stackoverflow.com/questions/2540548/how-do-i-get-eclipse-to-use-a-different-compiler-version-for-java I know how you're supposed to choose a different compiler, but it seems Eclipse isn't taking it. It seems to be defaulting to Java 6, even though I have deleted all the Java 6 JDKs and JREs I could find. I've also updated the -vm option in my eclipse.ini to point to the Java 5 JDK.

