Search Results

Search found 18729 results on 750 pages for 'edit'.

  • Telerik ASP.NET MVC2 Grid Delete Function with compound key.

    - by Dani
    I have a grid with a compound key: OrderID, ItemID. When I update the grid, the action

        public ActionResult UpdateItemGridAjax(int OrderID, string ItemID)

    gets both values from the grid. When I delete a row, I get only the first one:

        public ActionResult DeleteItemGridAjax(int OrderID, string ItemID)

    Why does this happen, and how can I get the ItemID value of the deleted row? Grid definition:

        <%= Html.Telerik().Grid<ItemsInOrderPOCO>()
            .Name("ItemsInOrderGrid")
            .DataKeys(dataKeys =>
            {
                dataKeys.Add(e => e.OrderID);
                dataKeys.Add(e => e.ItemID);
            })
            .ToolBar(commands => commands.Insert())
            .DataBinding(dataBinding =>
            {
                dataBinding.Ajax() // Ajax binding
                    .Select("SelectItemGridAjax", "Orders", new { OrderID = Model.myOrder.OrderID })
                    .Insert("InsertItemGridAjax", "Orders", new { OrderID = Model.myOrder.OrderID })
                    .Update("UpdateItemGridAjax", "Orders")
                    .Delete("DeleteItemGridAjax", "Orders");
            })
            .Columns(c =>
            {
                c.Bound(o => o.ItemID);
                c.Bound(o => o.OrderID).Column.Visible = false;
                c.Bound(o => o.ItemDescription);
                c.Bound(o => o.NumOfItems);
                c.Bound(o => o.CostOfItem);
                c.Bound(o => o.TotalCost);
                c.Bound(o => o.SupplyDate);
                c.Command(commands =>
                {
                    commands.Edit();
                    commands.Delete();
                }).Width(180).Title("Update");
            })
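
    A hedged first step (plain ASP.NET MVC, not Telerik-specific): temporarily let the delete action accept the raw form post, so you can see exactly which keys the grid sends before binding them to typed parameters. The names below mirror the question; the EmptyResult return is only a placeholder for the grid's normal ajax-binding result.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult DeleteItemGridAjax(FormCollection form)
        {
            // Dump every posted key/value pair so it is obvious whether ItemID
            // is missing from the request or merely failing to model-bind.
            foreach (string key in form.AllKeys)
            {
                System.Diagnostics.Debug.WriteLine(key + " = " + form[key]);
            }

            int orderId = int.Parse(form["OrderID"]);
            string itemId = form["ItemID"]; // if this is null, the grid never posted it

            // ... delete the row, then return the grid's usual ajax-binding result ...
            return new EmptyResult();
        }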

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL 5.0.75 server on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server stops responding because it is waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that.

    My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either of those, or upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow queries log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
            (SELECT external_id FROM contact_relations
             WHERE external_table = 'contacts'
               AND contact_id IN
                 (SELECT contact_id FROM contacts
                  WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%'
                         OR city like '%%butan%%%' OR email1 like '%%butan%%%')
                    AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question: if I have a contact table containing

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".
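
    One hedged direction (table and column names are taken from the slow-query example above, not verified against the real schema): MySQL 5.0 executes nested IN (SELECT ...) subqueries poorly, often as correlated lookups, so flattening the query into explicit joins can cut the rows examined dramatically.

        SELECT DISTINCT c.*
        FROM contacts AS hit
        JOIN contact_relations AS r
          ON r.external_table = 'contacts'
         AND r.contact_id = hit.contact_id
        JOIN contacts AS c
          ON c.contact_id = r.external_id
        WHERE (hit.company_name LIKE '%butan%'
            OR hit.country LIKE '%butan%'
            OR hit.city LIKE '%butan%'
            OR hit.email1 LIKE '%butan%')
          AND hit.company_name IS NOT NULL
          AND hit.company_name != '';

    As for the "factory" question: a leading-wildcard LIKE '%factory%' cannot use an ordinary index, and MySQL's FULLTEXT matching is word-based, so mid-word matches generally require either scanning a smaller candidate set with LIKE or handing the search to an external full-text engine.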

  • Behavior of Struts2 and convention-plugin when there is Index(extends ActionSupport)

    - by hanishi
    We have an Action class named 'Index' immediately under com.example.common.action. It is annotated @ParentPackage('default'); the 'default' package is declared in a package directive in struts.xml, has "/" for its namespace, and extends "struts-default". The class also declares @Result so that it responds with the JSP files corresponding to the string values returned by its execute() method. In our struts.xml, the following setting is configured along with the other configuration needed for the convention plugin:

        <constant name="struts.action.extension" value=","/>

    When accessing /my_context/none_existing_path, the request apparently hits this Index class and the contents of the JSP declared in Index's @Result section are returned. However, if we request /my_context/, we receive the following error:

        HTTP Status 404 - There is no Action mapped for namespace [/] and action name [] associated with context path [/my_context].

    We want to know why accessing /my_context/none_existing_path, where none_existing_path has no matching action, can fall back to the Index class, but an error is returned when the URL requested is just /my_context/. Currently, our convention-plugin settings are declared as follows:

        <constant name="struts.convention.package.locators.basePackage" value="com.example"/>
        <constant name="struts.convention.package.locators" value="action"/>

    Strangely, if we change the value of struts.convention.package.locators.basePackage to com.example.common, in which the aforementioned Index class can be found immediately by narrowing the search scope, requesting /my_context/ displays the content of the JSPs declared in the @Result section of the Index class. However, as our action classes are distributed throughout the com.example.[a-z].action packages, where [a-z] represents the large number of directories in our package structure, we cannot use this trick as a workaround. We have also tried placing index.jsp at the top level of the classpath and having it redirect to /my_context/index, which worked but is not what we want. Could this be a bug? We appreciate your responses. Thank you in advance.

    EDIT: JIRA issue registered; problem solved (from Struts 2.3.12 up).
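
    One hedged workaround (a sketch, not verified against this exact setup): declare an explicit default action for the root package in struts.xml, so that a bare /my_context/ request, which arrives with an empty action name, is routed to the Index action instead of failing to match anything.

        <package name="root" namespace="/" extends="struts-default">
            <default-action-ref name="index"/>
            <action name="index" class="com.example.common.action.Index">
                <!-- results would need to be declared here too, mirroring the @Result annotations -->
            </action>
        </package>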

  • JPA returning null for deleted items from a set

    - by Jon
    This may be related to my question from a few days ago, but I'm not even sure how to explain this part. (It's an entirely different parent-child relationship.) In my interface, I have a set of attributes (Attribute) and valid values (ValidValue) for each one, in a one-to-many relationship. In the Spring MVC frontend, I have a page for an administrator to edit these values. Once it's submitted, if any of these fields (as <input> tags) are blank, I remove the ValidValue object like so:

        Set<ValidValue> existingValues = new HashSet<ValidValue>(attribute.getValidValues());
        Set<ValidValue> finalValues = new HashSet<ValidValue>();
        for (ValidValue validValue : attribute.getValidValues()) {
            if (!validValue.getValue().isEmpty()) {
                finalValues.add(validValue);
            }
        }
        existingValues.removeAll(finalValues);
        for (ValidValue removedValue : existingValues) {
            getApplicationDataService().removeValidValue(removedValue);
        }
        attribute.setValidValues(finalValues);
        getApplicationDataService().modifyAttribute(attribute);

    The problem is that while the database is updated appropriately, the next time I query for the Attribute objects, they're returned with an extra entry in their ValidValue set -- a null -- and thus, the next time I iterate through the values to display them, an extra blank value shows up in the middle. I've confirmed that this happens at the point of a merge or find, at "Execute query ReadObjectQuery(entity.Attribute)". Here's the code I'm using to modify the database (in the ApplicationDataService):

        public void modifyAttribute(Attribute attribute) {
            getJpaTemplate().merge(attribute);
        }

        public void removeValidValue(ValidValue removedValue) {
            ValidValue merged = getJpaTemplate().merge(removedValue);
            getJpaTemplate().remove(merged);
        }

    Here are the relevant parts of the entity classes:

        @Entity
        @Table(name = "attribute")
        public class Attribute {
            @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "attribute")
            private Set<ValidValue> validValues = new HashSet<ValidValue>(0);
        }

        @Entity
        @Table(name = "valid_value")
        public class ValidValue {
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "attr_id", nullable = false)
            private Attribute attribute;
        }
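
    A hedged sketch of one common fix: keep both sides of the bidirectional relationship in sync before deleting, so the parent's cached collection never keeps a slot for the removed child. The setAttribute(null) setter on ValidValue is assumed here, mirroring the existing getters and setters.

        Set<ValidValue> finalValues = new HashSet<ValidValue>();
        for (ValidValue validValue : attribute.getValidValues()) {
            if (!validValue.getValue().isEmpty()) {
                finalValues.add(validValue);
            }
        }

        Set<ValidValue> removedValues = new HashSet<ValidValue>(attribute.getValidValues());
        removedValues.removeAll(finalValues);

        for (ValidValue removedValue : removedValues) {
            removedValue.setAttribute(null);                     // break the child -> parent link
            getApplicationDataService().removeValidValue(removedValue);
        }

        attribute.setValidValues(finalValues);                   // parent now references only the kept values
        getApplicationDataService().modifyAttribute(attribute);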

  • How to map oracle timestamp to appropriate java type in hibernate?

    - by jschoen
    I am new to Hibernate and I am stumped. In my database I have tables with columns of type TIMESTAMP(6). I am using NetBeans 6.5.1, and when I generate the hibernate.reveng.xml, hbm.xml, and POJO files, it sets those columns to type Serializable. This is not what I expected, nor what I want them to be. I found a post on the Hibernate forums saying to place a type mapping in the hibernate.reveng.xml file. In NetBeans you are not able to generate the mappings from this file (it creates a new one every time), and it does not seem to have the ability to re-generate them from the file either (at least, according to what I found, that ability is slated to be available in version 7). So I am trying to figure out what to do. I am more inclined to believe I am doing something wrong, since I am new to this and it seems like it would be a common problem for others. So what am I doing wrong? If I am not doing anything wrong, how do I work around this? I am using NetBeans 6.5, Oracle 10g, and I believe Hibernate 3 (it came with my NetBeans).

    Edit: Meant to say I found a related Stack Overflow question, but it is really a different problem.
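
    For reference, a reverse-engineering type mapping of the kind such posts usually describe looks roughly like this (a hedged sketch; the exact sql-type name Oracle reports for TIMESTAMP(6) may differ):

        <hibernate-reverse-engineering>
            <type-mapping>
                <sql-type name="TIMESTAMP" hibernate-type="timestamp"/>
            </type-mapping>
        </hibernate-reverse-engineering>

    Alternatively, editing the generated .hbm.xml so the property uses type="timestamp" maps the column to java.util.Date, at the cost of being overwritten the next time the files are regenerated.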

  • Best way to use Google's hosted jQuery, but fall back to my hosted library on Google fail

    - by Nosredna
    What would be a good way to attempt to load the hosted jQuery at Google (or other Google-hosted libs), but load my copy of jQuery if the Google attempt fails? I'm not saying Google is flaky. There are cases where the Google copy is blocked (apparently in Iran, for instance). Would I set up a timer and check for the jQuery object? What would be the danger of both copies coming through?

    I'm not really looking for answers like "just use the Google one" or "just use your own." I understand those arguments. I also understand that the user is likely to have the Google version cached. I'm thinking about fallbacks for the cloud in general.

    Edit: Since Google suggests using google.load to load the Ajax libraries, and it performs a callback when done, I'm wondering if that's the key to serializing this problem. I know it sounds a bit crazy. I'm just trying to figure out if it can be done in a reliable way or not.

    Update: jQuery is now hosted on Microsoft's CDN: http://www.asp.net/ajax/cdn/
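
    A commonly used pattern (a sketch; the version number and local path are placeholders): reference the Google-hosted file first and, if the jQuery global never appears, document.write a script tag pointing at your own copy so it loads before anything on the page needs it.

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
            // If the CDN copy failed (blocked, offline, DNS trouble), window.jQuery
            // is still undefined at this point, so fall back to the locally hosted file.
            window.jQuery || document.write('<script src="/js/jquery-1.4.2.min.js"><\/script>');
        </script>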

  • Managing StringBuilder Resources

    - by Jim Fell
    My C# (.NET 2.0) application has a StringBuilder variable with a capacity of 2.5MB. Obviously, I do not want to copy such a large buffer to a larger buffer space every time it fills. By that point, there is so much data in the buffer anyways, removing the older data is a viable option. Can anyone see any obvious problems with how I'm doing this (i.e. am I introducing more performance problems than I'm solving), or does it look okay?

        tText_c = new StringBuilder(2500000, 2500000);

        private void AppendToText(string text)
        {
            if (tText_c.Length * 100 / tText_c.Capacity > 95)
            {
                tText_c.Remove(0, tText_c.Length / 2);
            }
            tText_c.Append(text);
        }

    EDIT: Additional information: In this application new data is received very rapidly (on the order of milliseconds) through a serial connection. I don't want to populate the multiline textbox with this new information so frequently because that kills the performance of the application, so I'm saving it to a StringBuilder. Every so often, the application copies the contents of the StringBuilder to the textbox and wipes out the StringBuilder contents.

  • jquery form validation, and submit-on-change

    - by Bee
    I want all the settings forms across my site to confirm that changes are saved, kind of like Facebook does if you make changes in a form and then try to navigate away without saving. So I'm disabling the submit button on the forms and only enabling it if the values change. I then prompt the user to hit save before they leave the page in the case that they do have changes pending.

        var form = $('form.edit');
        if (form.length > 0) {
            var orig_str = form.serialize();
            $(':submit', form).attr('disabled', 'disabled');
            form.on('change keyup', function() {
                if (form.serialize() == orig_str) {
                    setConfirmUnload(false);
                    $(':submit', form).attr('disabled', 'disabled');
                } else {
                    setConfirmUnload(true);
                    $(':submit', form).removeAttr('disabled');
                }
            });
            $('input[type=submit]').click(function() {
                setConfirmUnload(false);
            });
        }

        function setConfirmUnload(on) {
            window.onbeforeunload = (on) ? unloadMessage : null;
        }

        function unloadMessage() {
            return 'If you navigate away from this page without saving your changes, they will be lost.';
        }

    One of these forms needs some additional validation, which I do using the jQuery.validate library, e.g. to ensure the user can't accidentally double-submit the form by double-clicking on submit or some such (the actual validation in question is for a credit-card form and not this simple):

        $('form').validate({
            submitHandler: function(form) {
                $(':submit', form).attr('disabled', 'disabled');
                form.submit();
            }
        });

    Unfortunately both bits are trying to bind to the submit button and they interfere with each other, such that the submit button remains disabled no matter what I do and it is impossible to submit the form at all. Is there some way to chain the validations together or something? Or some other way to avoid re-writing the validation code to repeat the "did you change anything in the form" business?

  • using asp.net membership provider in a dll

    - by Keith Barrows
    I've used membership providers in web apps over the last several years. I now have a new "request" for an internal project at work. They would like a service (not a web service) to do a quick authentication against -- basically, exposing the ValidateUser(userName, password) method. I am building this in a DLL that will sit with our internal web site. What is the best approach to make this work? The DLL will not reference the web app, and the web app will reference the DLL. How do I make the DLL aware of the membership provider? TIA

    PS: If this has been answered elsewhere, please direct me to that...

    EDIT: I found an article on using ASP.NET Membership with WinForms and/or WPF applications. Unfortunately, those approaches depend on an app.config file, and a DLL appears to not use its own app.config once published. If I am wrong, please set me straight! The article is here: http://aspalliance.com/1595_Client_Application_Services__Part_1.all
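
    A hedged sketch of the usual arrangement: the class library carries no configuration of its own and simply calls the static Membership API; the provider settings are read from the config file of whichever application hosts the DLL (the internal site's web.config in this case). The class and method names below are illustrative.

        using System.Web.Security;

        public static class AuthenticationService
        {
            // Uses whatever membership provider the *hosting* application has
            // configured in its web.config / app.config <membership> section.
            public static bool Authenticate(string userName, string password)
            {
                return Membership.ValidateUser(userName, password);
            }
        }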

  • SQL Server 2008 R2

    - by kevchadders
    Hi all, I heard on the grapevine that Microsoft will be releasing SQL Server 2008 R2 within a year. Though I initially thought this was a patch for the just-released 2008 version, I realised that it's actually a completely different version that you would have to pay for. (Am I correct -- if you had SQL Server 2008, would you have to pay again to upgrade to 2008 R2?)

    If you're already running SQL Server 2008, would you say it's still worth the upgrade? Or does it depend on the size of your company and current setup? From what I've initially read, I get the impression that this version is more useful for very high-end hardware setups where you want very good scalability. With regard to programming, are there any extra enhancements or support that you're aware of that will significantly help .NET product/web development? I initially found a couple of links on it, but I was wondering if anyone had any more info to share on the subject, as I couldn't find anything on SO about it. Thanks.

        New SQL Server R2
        Microsoft SQL 2008 R2 (Microsoft link on it)

    EDIT: More information based on the Express edition. One very interesting thing about SQL Server 2008 R2 concerns the Express edition. Previous versions of SQL Server Express had a database size limit of 4GB. With SQL Server Express 2008 R2, this has been increased to 10GB! This now makes the free Express edition a much more viable option for small and medium-sized applications that are relatively light on database requirements. Bear in mind that this limit is per database, so if you coded your application cleverly enough to use a separate database for historical/archived data, you could squeeze even more out of it! For more information, see here: http://blogs.msdn.com/sqlexpress/archive/2010/04/21/database-size-limit-increased-to-10gb-in-sql-server-2008-r2-express.aspx

  • Compressing a database to a single file?

    - by Assimilater
    Hi all. In my contact manager program I have been storing information by reading and writing comma-delimited files -- one for each individual contact, plus a file for each note -- and I'm wondering how I could go about effectively shrinking them all into one file. I have attempted to use the data entry tools in the Visual Studio toolbox and template classes, though I have never quite figured out how to use them. What would be especially convenient is if I could store data as my data type IOwner (a class I created) as opposed to strings. I'd also need to figure out how to tell the program what to do when a file is opened (I've noticed in the properties how to associate a file type with the program, though I'm not sure how to tell it what to do when such a file is opened).

    Edit: How about rephrasing the question. I have a class IContact with various properties, some of them being lists of other class objects. I have a public list of IContact. Can I write Contacts as List(Of IContact) to a file, as opposed to a bunch of strings? Second part of the question: I have associated .cms files with my program. But if a user opens such a file, what code should the program run through in an attempt to deal with it? This file is going to contain data that the program needs to read; how do I tell it to read a file when the program is opened vicariously because the file was opened? Does this make the question clearer?

  • How can I include DBNull as a value in my strongly typed dataset?

    - by Beska
    I've created a strongly typed dataset (MyDataSet) in my .NET app. For the sake of simplicity, we'll say it has one DataTable (MyDataTable), with one column (MyCol). MyCol has its DataType property set to "System.Int32", and its AllowDBNull property set to "true". I'd like to manually create a new row and add it to this dataset. I create the row without a problem, with something like:

        MyDataSet.MyDataTableRow myRow = MySimpleDataSet.MyDataTable.NewItemRow();

    Fine. However, when I try to set the value to DBNull:

        myRow.MyCol = DBNull.Value;

    I'm told that I can't do it -- that it can't be cast to an int. This makes sense, in a way, since I've defined it to be an int... but then how can I get DBNull in there? Am I not supposed to be able to have DBNull in there? Isn't that what the AllowDBNull property is for? I'm obviously missing something fundamental. Can someone help explain what it is?

    EDIT: I also tried entering "int?" as the DataType, but Visual Studio throws an error when I enter it, saying that "Column requires a valid DataType."
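
    A hedged sketch of the two usual ways past the typed int property: the dataset designer generates Set<Column>Null() / Is<Column>Null() helpers for nullable columns, and the untyped column indexer accepts DBNull.Value directly. The generated method names below follow the usual naming pattern and are assumptions for this particular dataset.

        MyDataSet.MyDataTableRow myRow = MySimpleDataSet.MyDataTable.NewMyDataTableRow();

        // Option 1: the generated null helper stores DBNull.Value for you.
        myRow.SetMyColNull();
        bool isNull = myRow.IsMyColNull();

        // Option 2: bypass the typed int property and use the untyped column indexer.
        myRow["MyCol"] = DBNull.Value;

        MySimpleDataSet.MyDataTable.AddMyDataTableRow(myRow);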

  • Using an initializer_list on a map of vectors

    - by Hooked
    I've been trying to initialize a map of <int, vector<int> > using the new C++0x standard, but I cannot seem to get the syntax correct. I'd like to make a map with a single entry whose key:value pair is 1:<3,4>.

        #include <initializer_list>
        #include <map>
        #include <vector>
        using namespace std;

        map<int, vector<int> > A = {1,{3,4}};

    It dies with the following error using gcc 4.4.3:

        error: no matching function for call to std::map<int,std::vector<int,std::allocator<int> >,std::less<int>,std::allocator<std::pair<const int,std::vector<int,std::allocator<int> > > > >::map(<brace-enclosed initializer list>)

    Edit: Following the suggestion by Cogwheel and adding the extra brace, it now compiles with a warning that can be gotten rid of using the -fno-deduce-init-list flag. Is there any danger in doing so?
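
    For reference, the extra brace mentioned in the edit makes each map element its own brace-enclosed key/value pair; a minimal sketch that compiles cleanly in C++11 mode on later compilers:

        #include <map>
        #include <vector>
        using namespace std;

        // Outer braces: the map's initializer list. {1, {3, 4}} is one element,
        // where {3, 4} initializes the vector<int> value stored under key 1.
        map<int, vector<int> > A = { {1, {3, 4}} };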

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script.

    EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu

        /^--$/ {
            i = 0;
            line[++i] = $0;
            getline
            if ($0 ~ /-- Definition/) {
                inserts = 0;
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            }
            else {
                print "--"
            }
        }

        { print }

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you?

    I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.

    I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me?

    EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

  • Fastest way to learn Flex and Java EE?

    - by LostWebNewbie
    OK, so my two friends and I have to make a webapp, and we think it's a good opportunity to learn Java EE and Flex. The thing is, we have very little knowledge of them and we have only three months to do it (it doesn't have to be super complicated). So my question is: what, in your opinion, would be the fastest way to learn them both? I guess we need to know some JSP, Servlets, JPA (??), Flex, maybe JavaScript+CSS? Anything else, like EJB? Should we also learn Spring (or Struts)? Obviously reading books would be a good idea, but I bet we won't make the deadline if we try to read all the books...

    @Edit: I know the basics of JSP/Servlets (I read Head First JSP & Servlets), but I've made only one project so far (a semi-decent hangman game with JSP/Servlets and JPA for persistence); that's about it. Flex -- I'm just starting; I really know only the basics of MXML and AS3. As for why: 1) because we need to do a project for the uni, and also I was thinking about becoming a web dev (yup, JEE+Flex) after graduation, so this is the perfect opportunity to learn them.

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an id as a parameter and uses that id to compute the SVN repo path. Where I work, you have an SVN path for every issue that you resolve, and then all the issues are merged into a single SVN path. What I want to do is run static code analysis on the individual issues. So I'm thinking of having an Ant build.xml that I use for every issue, and parametrizing the job with the issue id. I have tried to achieve that, but the SVN path doesn't replace the parameter. I have tried with #issueId, %issueId%, ${issueId} and ${env.issueId} without success. It fails with errors like:

        Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
        Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
        Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
            at

    I am starting to think that I cannot do what I want. Do you know how I can set up the correct configuration to achieve this? Thanks for any help.

    Edit: The section of the job configuration where I want to use this parameter is this:

        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>

  • char array split ip with strtok

    - by user1480139
    I'm trying to split an IP address like 127.0.0.1, read from a file, using the following C code:

        pch2 = strtok (ip,".");
        printf("\npart 1 ip: %s",pch2);
        pch2 = strtok (NULL,".");
        printf("\npart 2 ip: %s",pch2);

    ip is a char ip[500] that contains an IP address. When printing, it prints 127 as part 1, but for part 2 it prints NULL. Can someone help me?

    EDIT: Whole function:

        FILE *file = fopen ("host.txt", "r");
        char * pch;
        char * pch2;
        char ip[BUFFSIZE];
        IPPart result;
        if (file != NULL)
        {
            char line [BUFFSIZE];
            while(fgets(line,sizeof line,file) != NULL)
            {
                if(line[0] != '#')
                {
                    //fputs(line,stdout);
                    pch = strtok (line," ");
                    printf ("%s\n",pch);
                    strncpy(ip, pch, sizeof(pch)-1);
                    ip[sizeof(pch)-1] = '\0';
                    //pch = strtok (line, " ");
                    pch = strtok (NULL," ");
                    printf("%s",pch);
                    pch2 = strtok (ip,".");
                    printf("\nDeel 1 ip: %s",pch2);
                    pch2 = strtok (NULL,".");
                    printf("\nDeel 2 ip: %s",pch2);
                    //if(strcmp(pch,url) == 0)
                    //{
                    //    result.part1 =
                    //}
                }
            }
            fclose(file);
        }
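
    A hedged diagnosis: sizeof(pch) is the size of a char pointer (typically 4 or 8 bytes), not the length of the token, so the copy into ip truncates "127.0.0.1" after a few characters and the second strtok(NULL, ".") has nothing left to return. Bounding the copy by the destination buffer instead should restore the missing parts:

        /* copy the whole token, bounded by the destination buffer, not by sizeof(pch) */
        strncpy(ip, pch, sizeof(ip) - 1);
        ip[sizeof(ip) - 1] = '\0';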

  • How to Develop Dynamic Plug-In Based Functionality in C#

    - by Matthew
    Hello: I've been looking around at different methods of providing plug-in support for my application. Ideally, I will be creating core functionality and, based on what different customers need, developing different plug-ins/add-ons, such as importing and exporting data, etc. What methods are available for making a C# application extensible via a plug-in architecture?

    Let's make up an example. Say we have a program that consists of a main menu (File, Edit, View, et al.) along with a TreeView that displays different brands of cars grouped by manufacturer (Ford and GM, for now). Right-clicking on a car displays a context menu whose only option is 'delete car'. How could you develop the application so that plug-ins could be deployed such that one customer could see a new brand in the TreeView, let's say Honda, and also extend the car context menu so that they can now 'paint a car'? In Eclipse/RCP development this is easily handled by extension points and plug-ins. How does C# handle it? I've been looking into developing my own plug-in architecture and reading up on MEF.
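
    As a starting point, here is a hedged sketch of a minimal reflection-based loader (MEF or System.AddIn formalize the same idea); the ICarPlugin interface, its members, and the folder name are illustrative, not something the question defines.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public interface ICarPlugin
        {
            string Manufacturer { get; }                     // node to add under the TreeView
            IEnumerable<string> Cars { get; }                // e.g. Honda models
            IEnumerable<string> ContextMenuActions { get; }  // e.g. "Paint a car"
        }

        public static class PluginLoader
        {
            public static IList<ICarPlugin> Load(string pluginFolder)
            {
                var plugins = new List<ICarPlugin>();
                foreach (string file in Directory.GetFiles(pluginFolder, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(file);
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(ICarPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                        {
                            plugins.Add((ICarPlugin)Activator.CreateInstance(type));
                        }
                    }
                }
                return plugins;
            }
        }

    The host application would call PluginLoader.Load at startup and walk the returned list to add tree nodes and context-menu items, which is roughly what Eclipse extension points automate for you.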

  • NotApplicable marker with display pattern

    - by Jeff Barger
    OK, so I'm pretty new to Cocoa, especially Bindings, but here's what I'm trying to do. I've got a Core Data model consisting of two entities: Category and Item. Category has a to-many relationship to Item called children, and Item has a relationship to Category called parent. Item has two attributes that Category does not have: quantity and desiredQuantity. What I'd like to do is display the tree in an NSOutlineView with two columns. One column is bound to the name of either the Category or the Item. I want the second column to display something along the lines of "2 of 5" for the Item rows and nothing at all for the Category rows. When I use a display pattern, the Category rows end up showing just "of". I noticed that if I don't use a display pattern for the second column, and instead just bind its Value to either quantity or desiredQuantity, the Category rows show nothing; it's only if I use the display pattern. How can I make it display nothing for the Category rows and still use the display pattern? Or can I?

    Edit: I guess I didn't explain what the NotApplicable marker has to do with anything -- Category does have properties for quantity and desiredQuantity, but they just return NSNotApplicableMarker.

  • C++ : integer constant is too large for its type

    - by user38586
    I need to brute-force a year for an exercise. The compiler keeps throwing this error:

        bruteforceJS12.cpp:8:28: warning: integer constant is too large for its type [enabled by default]

    My code is:

        #include <iostream>
        using namespace std;

        int main(){
            unsigned long long year(0);
            unsigned long long result(318338237039211050000);
            unsigned long long pass(1337);

            while (pass != result) {
                for (unsigned long long i = 1; i <= year; i++) {
                    pass += year * i * year;
                }
                cout << "pass not cracked with year = " << year << endl;
                ++year;
            }
            cout << "pass cracked with year = " << year << endl;
        }

    Note that I already tried unsigned long long result(318338237039211050000ULL); I'm using gcc version 4.8.1.

    EDIT: Here is the corrected version using the InfInt library (http://code.google.com/p/infint/):

        #include <iostream>
        #include "InfInt.h"
        using namespace std;

        int main(){
            InfInt year = "113";
            InfInt result = "318338237039211050000";
            InfInt pass = "1337";

            while (pass != result) {
                for (InfInt i = 1; i <= year; i++) {
                    pass += year * i * year;
                }
                cout << "year = " << year << " pass = " << pass << endl;
                ++year;
            }
            cout << "pass cracked with year = " << year << endl;
        }

  • Obtaining command line arguments in a QT application

    - by morpheous
    The following snippet is from a little app I wrote using the Qt framework. The idea is that the app can be run in batch mode (i.e. called by a script) or interactively. It is important, therefore, that I am able to parse command line arguments in order to know which mode to run in, etc.

    [Edit] I am debugging using Qt Creator 1.3.1 on Ubuntu Karmic. The arguments are passed in the normal way (i.e. by adding them via the 'Project' settings in the Qt Creator IDE). When I run the app, it appears that the arguments are not being passed to the application. The code below is a snippet of my main() function.

        int main(int argc, char *argv[])
        {
            //Q_INIT_RESOURCE(application);
            try {
                QApplication the_app(argc, argv);

                //trying to get the arguments into a list
                QStringList cmdline_args = QCoreApplication::arguments();

                // Code continues ...
            }
            catch (const MyCustomException &e) {
                return 1;
            }
            return 0;
        }

    [Update] I have identified the problem -- for some reason, although argc is correct, the elements of argv are empty strings. I added this little code snippet to print out the argv items, and was horrified to see that they were all empty:

        for (int i = 0; i < argc; i++){
            std::string s(argv[i]); //required so I can see the damn variable in the debugger
            std::cout << s << std::endl;
        }

    Does anyone know what on earth is going on (or a hammer)?

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed to perform the task. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                  "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                  "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
            getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function, adding a for-loop:

        foo2 <- function(loci) {
            for (i in loci) {
                getBM("description", "tair_locus", loci[i], athaliana)
            }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

  • quartz2d translating the origin

    - by qwertyp96
    My understanding of Quartz 2D is that the call

        CGContextTranslateCTM(context, x, y);

    translates the coordinate system. I have a Quartz 2D view with lots of shapes on it, and the user needs to be able to pan around and zoom it. However, when using

        CGContextScaleCTM(context, scaleX, scaleY);

    everything scales around the origin, not around the center of the viewport the user is looking at. My solution was to use the following code:

        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 512.0+offset.x, 384.0+offset.y); //(512, 384) is the center of the iPad screen
        CGContextScaleCTM(context, scale, scale);

    You can translate around fine, but things still scale into the corner. What's wrong?

    EDIT: Oh. Wow. Duh. If you move the origin, the shapes move too, so you can't move it relative to the shapes. Now I know what's wrong, but how do I do that (move the origin independently of the shapes)?
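
    A hedged sketch of the usual "zoom about a point" trick: translate the origin to the screen point the zoom should be anchored on, apply the scale there, then translate back, folding the pan offset into the second translation. The 512/384 centre values come from the question; whether offset should also be divided by scale depends on which coordinate space the pan gesture is tracked in.

        CGContextRef context = UIGraphicsGetCurrentContext();

        // 1. Move the origin to the anchor point (the visible centre of the screen).
        CGContextTranslateCTM(context, 512.0, 384.0);

        // 2. Scale around that point.
        CGContextScaleCTM(context, scale, scale);

        // 3. Move back, applying the pan, so the content under the centre stays under the centre.
        CGContextTranslateCTM(context, -512.0 + offset.x, -384.0 + offset.y);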

  • Address book Phone number (+45) prefix causing crash!

    - by CCDEV
    Hi guys... I am having trouble getting phone numbers from the iPhone address book. There is no problem when the number does not contain a country code prefix like +45, but if it does, my app crashes... Is this a known issue? I haven't been able to find anything about it... Thanks.

    EDIT: I get the phone number like this:

        -(void)getContact {
            ABPeoplePickerNavigationController *pp = [[ABPeoplePickerNavigationController alloc] init];
            pp.displayedProperties = [NSArray arrayWithObject:[NSNumber numberWithInt:kABPersonPhoneProperty]];
            pp.peoplePickerDelegate = self;
            [self presentModalViewController:pp animated:YES];
            [pp release];
        }

        - (void)peoplePickerNavigationControllerDidCancel:(ABPeoplePickerNavigationController *)peoplePicker {
            // assigning control back to the main controller
            [self dismissModalViewControllerAnimated:YES];
        }

        - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker
              shouldContinueAfterSelectingPerson:(ABRecordRef)person {
            return YES;
        }

        - (BOOL)peoplePickerNavigationController:(ABPeoplePickerNavigationController *)peoplePicker
              shouldContinueAfterSelectingPerson:(ABRecordRef)person
                                        property:(ABPropertyID)property
                                      identifier:(ABMultiValueIdentifier)identifier {
            ABMultiValueRef phoneProperty = ABRecordCopyValue(person, property);
            saveString = (NSString *)ABMultiValueCopyValueAtIndex(phoneProperty, identifier);
            saveString = [saveString stringByReplacingOccurrencesOfString:@" " withString:@""];
            nummerTextField.text = saveString;
        }
