Search Results

Search found 17734 results on 710 pages for 'values'.


  • Converting an integer to a boxed enum type only known at runtime

    - by Marc Gravell
    Imagine we have an enum: enum Foo { A=1,B=2,C=3 } If the type is known at compile-time, a direct cast can be used to change between the enum-type and the underlying type (usually int): static int GetValue() { return 2; } ... Foo foo = (Foo)GetValue(); // becomes Foo.B And boxing this gives a box of type Foo: object o1 = foo; Console.WriteLine(o1.GetType().Name); // writes Foo (and indeed, you can box as Foo and unbox as int, or box as int and unbox as Foo quite happily) However (the problem): if the enum type is only known at runtime things are... trickier. It is obviously trivial to box it as an int - but can I box it as Foo? (Ideally without using generics and MakeGenericMethod, which would be ugly). Convert.ChangeType throws an exception. ToString and Enum.Parse work, but are horribly inefficient. I could look at the defined values (Enum.GetValues or Type.GetFields), but that is very hard for [Flags], and even without it would require getting back to the underlying-type first (which isn't as hard, thankfully). But: is there a more direct way to get from a value of the correct underlying-type to a box of the enum-type, where the type is only known at runtime?
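
    For reference (an editorial note, not part of the original question): Enum.ToObject takes a runtime System.Type and boxes the supplied underlying value as that enum type, which may be the direct route being asked about. A minimal sketch, assuming the value is already of (or convertible to) the enum's underlying type:

        using System;

        enum Foo { A = 1, B = 2, C = 3 }

        class Demo
        {
            static void Main()
            {
                Type enumType = typeof(Foo);               // pretend this arrives only at runtime
                object boxed = Enum.ToObject(enumType, 2); // box the underlying int as Foo
                Console.WriteLine(boxed.GetType().Name);   // Foo
                Console.WriteLine(boxed);                  // B
            }
        }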

    Read the article

  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia? For those of you that don't know what web-safe colors are: Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]

    Read the article

  • Efficient alternative to merge() when building dataframe from json files with R?

    - by Bryan
    I have written the following code which works, but is painfully slow once I start executing it over thousands of records: require("RJSONIO") people_data <- data.frame(person_id=numeric(0)) json_data <- fromJSON(json_file) n_people <- length(json_data) for(lender in 1:n_people) { person_dataframe <- as.data.frame(t(unlist(json_data[[lender]]))) people_data <- merge(people_data, person_dataframe, all=TRUE) } output_file <- paste("people_data",".csv") write.csv(people_data, file=output_file) I am attempting to build a unified data table from a series of json-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example: [[1]] person_id name gender hair_color [[2]] person_id name location gender height [[...]] structure(list(person_id = "Amy123", name = "Amy", gender = "F", hair_color = "brown"), .Names = c("person_id", "name", "gender", "hair_color")) structure(list(person_id = "matt53", name = "Matt", location = structure(c(47231, "IN"), .Names = c("zip_code", "state")), gender = "M", height = 172), .Names = c("person_id", "name", "location", "gender", "height")) The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components: state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
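
    For reference (an editorial sketch, not from the original post): one merge()-free approach is to flatten every person first, compute the union of all column names, pad each row with NA, and bind everything once at the end. This assumes json_data as read above, and that each person unlists to a flat named vector as in the structures shown:

        # Flatten each person to a one-row data frame (same unlist() trick as above).
        # Values stay character here, exactly as in the original unlist() approach.
        rows <- lapply(json_data, function(p) as.data.frame(t(unlist(p)), stringsAsFactors = FALSE))

        # Union of every column name seen across all people.
        all_cols <- unique(unlist(lapply(rows, names)))

        # Pad missing columns with NA and put columns in a common order.
        padded <- lapply(rows, function(df) {
          missing <- setdiff(all_cols, names(df))
          if (length(missing) > 0) df[missing] <- NA
          df[all_cols]
        })

        # One rbind at the end instead of a merge() per person.
        people_data <- do.call(rbind, padded)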

    Read the article

  • Go XML Unmarshal example doesn't compile

    - by marketer
    The Xml example in the go docs is broken. Does anyone know how to make it work? When I compile it, the result is: xmlexample.go:34: cannot use "name" (type string) as type xml.Name in field value xmlexample.go:34: cannot use nil as type string in field value xmlexample.go:34: too few values in struct initializer Here is the relevant code: package main import ( "bytes" "xml" ) type Email struct { Where string "attr"; Addr string; } type Result struct { XMLName xml.Name "result"; Name string; Phone string; Email []Email; } var buf = bytes.NewBufferString ( ` <result> <email where="home"> <addr>[email protected]</addr> </email> <email where='work'> <addr>[email protected]</addr> </email> <name>Grace R. Emlin</name> <address>123 Main Street</address> </result>`) func main() { var result = Result{ "name", "phone", nil } xml.Unmarshal ( buf , &result ) println ( result.Name ) }
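
    For reference (an editorial note, not from the original post): the literal Result{ "name", "phone", nil } supplies three positional values for a four-field struct whose first field is an xml.Name, which is exactly what the three compiler errors describe; using named fields (Result{Name: "name", Phone: "phone"}) sidesteps that. The snippet targets a pre-1.0 release of Go; a roughly equivalent sketch against today's encoding/xml would look like this:

        package main

        import (
            "encoding/xml"
            "fmt"
        )

        type Email struct {
            Where string `xml:"where,attr"`
            Addr  string `xml:"addr"`
        }

        type Result struct {
            XMLName xml.Name `xml:"result"`
            Name    string   `xml:"name"`
            Phone   string   `xml:"phone"`
            Email   []Email  `xml:"email"`
        }

        func main() {
            data := []byte(`
        <result>
            <email where="home"><addr>[email protected]</addr></email>
            <email where="work"><addr>[email protected]</addr></email>
            <name>Grace R. Emlin</name>
            <address>123 Main Street</address>
        </result>`)

            result := Result{Name: "name", Phone: "phone"} // named fields; XMLName keeps its zero value
            if err := xml.Unmarshal(data, &result); err != nil {
                fmt.Println("unmarshal error:", err)
                return
            }
            fmt.Println(result.Name) // Grace R. Emlin
        }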

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project and the technologies used are NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that the entity object doesn't get updated after insertions take place, when calling the getter for the foreign-key collection. Ex: I have 2 tables, Table1 and Table2. I have an sno/id column that is the primary key in Table1 and a foreign key in Table2. Through the find method I get the particular sno object (existing in Table1), set some values, persist them to Table2 and commit the transaction. When I select the same sno object through the find method and get its collection from Table2 through the getTable2Collection() of the bean (as it is already created in the bean by TopLink Essentials), I'm unable to get the latest added record, although all its other records are displayed. After I close the application and reopen it, the new record gets reflected when calling the same sno through the above process. I came to know that this is a kind of lazy fetching and that there should be some way for the fetch policy to be changed to make the entity object get updated with the changes. So please help me in this regard. Regards, Chandu

    Read the article

  • Analyzing data from the same tables in different db instances.

    - by Oscar Reyes
    Short version: How can I map two columns from tables A and B if they both have a common identifier which in turn may have two values in column C? Let's say: A --- 1 , 2 B --- ? , 3 C ----- 45, 2 45, 3 Using table C I know that ids 2 and 3 belong to the same item ( 45 ) and thus "?" in table B should be 1. What query could do something like that? EDIT Long version omitted. It was really boring/confusing. EDIT I'm posting some output here. From this query: select distinct( rolein) , activityin from taskperformance@dm_prod where activityin in ( select activityin from activities@dm_prod where activityid in ( select activityid from activities@dm_prod where activityin in ( select distinct( activityin ) from taskperformance where rolein = 0 ) ) ) I have the following parts: select distinct( activityin ) from taskperformance where rolein = 0 Output: http://question1337216.pastebin.com/f5039557 select activityin from activities@dm_prod where activityid in ( select activityid from activities@dm_prod where activityin in ( select distinct( activityin ) from taskperformance where rolein = 0 ) ) Output: http://question1337216.pastebin.com/f6cef9393 And finally: select distinct( rolein) , activityin from taskperformance@dm_prod where activityin in ( select activityin from activities@dm_prod where activityid in ( select activityid from activities@dm_prod where activityin in ( select distinct( activityin ) from taskperformance where rolein = 0 ) ) ) Output: http://question1337216.pastebin.com/f346057bd Take for instance activityin 335 from the first query (from taskperformance B). It is present in activities from A, but is not in taskperformance in A (though the related activities 92, 208, 335, 595 are present in the result). The corresponding rolein is: 1

    Read the article

  • What are the correct bindings for an NSComboBox for use with Core Data

    - by theMikeSwan
    Imagine if you will a Core Data app with two entities (Employee, and Department). Employees have a to-one relationship with department (department) and the inverse is a to-many relationship (employees). In the UI you can select individual Employee entities and edit the details in a detail area (there are of course other attributes and there is UI for adding and editing Department entities). When using a popup button the bindings are: content = PopUpArrayController.arrangedObjects content values = PopUpArrayController.arrangedObjects.name (name is an NSString) selected object = EmployeeArrayController.selection.department.name This allows for viewing of all departments in the popup menu, correct selection of the current Employee's department, and allows that department to be changed as expected. The goal is to change this for an NSComboBox so that the user can tab to the box and type the department name in without switching to the mouse. I have tried numerous different bindings to accomplish this. I even had it work for one run with these bindings: content = PopUpArrayController.arrangedObjects.name value = EmployeeArrayController.selection.department.name At least once this worked as expected (it even added a new department when the entered text did not match any existing department). Now however it will display the available Departments and auto complete but will not update the model with the correct value when the value is changed in the combo box. If the Department is set or changed with the popup the correct department is shown in the combo box. Does anyone know what I am missing? Thanks.

    Read the article

  • Why so much stack space used for each recursion?

    - by Harvey
    I have a simple recursive function RCompare() that calls a more complex function Compare() which returns before the recursive call. Each recursion level uses 248 bytes of stack space, which seems like way more than it should. Here is the recursive function: void CMList::RCompare(MP n1) // RECURSIVE and Looping compare function { auto MP ne=n1->mf; while(StkAvl() && Compare(n1=ne->mb)) RCompare(n1); // Recursive call ! } StkAvl() is a simple stack space check function that compares the address of an auto variable to the value of an address near the end of the stack stored in a static variable. It seems to me that the only things added to the stack in each recursion are two pointer variables (MP is a pointer to a structure) and the stuff that one function call stores, a few saved registers, base pointer, return address, etc., all 32-bit (4 byte) values. There's no way that adds up to 248 bytes, is there? I don't know how to actually look at the stack in a meaningful way in Visual Studio 2008. Thanks
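
    For reference (an editorial note, not from the original post): a debug build under Visual C++ typically pads each frame for runtime checks (/RTC) and adds a /GS security cookie, which can easily turn a handful of locals into a couple of hundred bytes; a Release build is usually much leaner. One rough way to see the per-frame cost is to subtract the addresses of an auto variable at successive depths (a diagnostic trick only, since pointer subtraction across frames is not strictly portable):

        #include <stdio.h>

        /* Rough probe: print the distance between successive stack frames.
           Compile without optimization so the calls are not inlined.       */
        static void probe(int depth, const char *prev)
        {
            char marker;                 /* any auto variable marks this frame */
            if (prev != NULL)
                printf("frame %d is %ld bytes below the previous one\n",
                       depth, (long)(prev - &marker));
            if (depth < 5)
                probe(depth + 1, &marker);
        }

        int main(void)
        {
            probe(0, NULL);
            return 0;
        }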

    Read the article

  • Troubles Iterating Over A HashMap with JSF, MyFaces & Facelets

    - by Lee Theobald
    Hi all, I'm having some trouble looping over a HashMap to print out its values to the screen. Could someone double-check my code to see what I'm doing wrong? I can't seem to find anything wrong but there must be something. In a servlet, I am adding the following to the request: Map<String, String> facetValues = new HashMap<String, String>(); // Filling the map req.setAttribute(facetField.getName(), facetValues); In one case "facetField.getName()" evaluates to "discipline". So in my page I have the following: <ui:repeat value="${requestScope.discipline}" var="item"> <li>Item: <c:out value="${item}"/>, Key: <c:out value="${item.key}"/>, Value: <c:out value="${item.item}"/></li> </ui:repeat> The loop is run once but all the outputs are blank?!? I would have at least expected something in item if it's gone over the loop once. Checking the debug popup for Facelets, discipline is there and on the loop. Printing it to the screen results in something that looks like a map to me (I've shortened the output): {300=0, 1600=0, 200=0, ... , 2200=0} I've also tried with a c:forEach but I'm getting the same results. So does anyone have any ideas where I'm going wrong? Thanks for any input, Lee
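
    For reference (an editorial note, not from the original post): ui:repeat in this generation of JSF does not iterate a java.util.Map, and ${item.item} is not a property that a Map.Entry exposes, which would explain a single blank pass. With the JSTL core taglib available to Facelets, the entries can be walked roughly like this (each entry is a Map.Entry, so key and value resolve):

        <ul>
          <c:forEach items="${requestScope.discipline}" var="entry">
            <li>Key: <c:out value="${entry.key}"/>, Value: <c:out value="${entry.value}"/></li>
          </c:forEach>
        </ul>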

    Read the article

  • How can I write classes that don't rely on "global" variables?

    - by Joel
    When I took my first programming course in university, we were taught that global variables were evil & should be avoided at all cost (since you can quickly develop confusing and unmaintainable code). The following year, we were taught object oriented programming, and how to create modular code using classes. I find that whenever I work with OOP, I use my classes' private variables as global variables, i.e., they can be (and are) read and modified by any function within the class. This isn't really sitting right with me, as it seems to introduce the same problems global variables had in languages like C. So I guess my question is, how do I stop writing classes with "global" variables? Would it make more sense to pretend I'm writing in a functional language? By this I mean having all functions take parameters & return values instead of directly modifying class variables. If I need to set any fields, I can just take the output of the function and assign it instead of having the function do it directly. This seems like it might make more maintainable code, at least for larger classes. What's common practice? Thanks!

    Read the article

  • How do I create a multi-level TreeView using F#?

    - by TwentyMiles
    I would like to display a directory structure using Gtk# widgets through F#, but I'm having a hard time figuring out how to translate TreeViews into F#. Say I had a directory structure that looks like this: Directory1 SubDirectory1 SubDirectory2 SubSubDirectory1 SubDirectory3 Directory2 How would I show this tree structure with Gtk# widgets using F#? EDIT: gradbot's was the answer I was hoping for, with a couple of exceptions. If you use ListStore, you lose the ability to expand levels; if you instead use: let musicListStore = new Gtk.TreeStore([|typeof<String>; typeof<String>|]) you get a layout with expandable levels. Doing this, however, breaks the calls to AppendValues, so you have to add some clues for the compiler to figure out which overloaded method to use: musicListStore.AppendValues (iter, [|"Fannypack" ; "Nu Nu (Yeah Yeah) (double j and haze radio edit)"|]) Note that the columns are explicitly passed as an array. Finally, you can nest levels even further by using the ListIter returned by AppendValues: let iter = musicListStore.AppendValues ("Dance") let subiter = musicListStore.AppendValues (iter, [|"Fannypack" ; "Nu Nu (Yeah Yeah) (double j and haze radio edit)"|]) musicListStore.AppendValues (subiter, [|"Some Dude"; "Some Song"|]) |> ignore

    Read the article

  • DataGridView in VB.net will not allow me to update

    - by Marc
    I have a datagridview with a dataTable as the dataSource. The user can add new rows to the datagridview, but I don't display the primary key column (for obvious reasons) and set it to .visible = false. When I need to update the information in the datagridview to the database, I use the sqlClient.SqlCommandBuilder to then update the underlying datasource (the dataTable mentioned above). Now, because the hidden column is the primary key, I loop through the datagridview and programmatically add the required primary key field to each new row that does not already contain a primary key (user-added rows). This works great 95% of the time... The problem is when the user somehow gives focus at some point (any point) to that bottom row on the datagridview, below their added rows, that is used to add new rows. The update command gives me an error stating that it cannot insert null into the primary key field, even though when checking all the values in every row, it is definitely NOT null for any of them. I have tried to trap for row.isNewRow (as the field never shows null) and deleting that row, but I get an error stating I cannot delete an uncommitted row. If the focus is never given to that empty row beneath the existing rows and user-added rows, the update works fine. What is going on?!
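
    For reference (an editorial sketch, not from the original post; the grid and column names here are made up): skipping the uncommitted new-row placeholder instead of trying to delete it usually keeps it out of the update:

        ' Assign keys only to real rows; the IsNewRow placeholder is skipped entirely.
        For Each row As DataGridViewRow In dgv.Rows
            If row.IsNewRow Then Continue For
            Dim keyCell As DataGridViewCell = row.Cells("Id")
            If keyCell.Value Is Nothing OrElse IsDBNull(keyCell.Value) Then
                keyCell.Value = GetNextId()   ' GetNextId() is a hypothetical key generator
            End If
        Next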

    Read the article

  • WatiN NativeElement.GetElementBounds() - What is the unit of measurement?

    - by Brian Schroer
    When I'm testing with WatiN, I like to save screenshots. Sometimes I don't really need a picture of the whole browser window though - I just want a picture of the element that I'm testing. My attempt to save a picture of an element with the code below resulted in a picture of a black box, because elementBounds.Top points to a pixel position way past the bottom of the screen. The elementBounds.Width and .Height values also appear to be about half what they should be. Is this a WatiN bug, or are these properties in a different unit of measure that I have to convert to pixels somehow? public static void SaveElementScreenshot (WatiN.Core.IE ie, WatiN.Core.Element element, string screenshotPath) { ScrollIntoView(ie, element); ie.BringToFront(); var ieClass = (InternetExplorerClass) ie.InternetExplorer; Rectangle elementBounds = element.NativeElement.GetElementBounds(); int left = ieClass.Left + elementBounds.Left; int top = ieClass.Top + elementBounds.Top; int width = elementBounds.Width; int height = elementBounds.Height; using (var bitmap = new Bitmap(width, height)) { using (Graphics graphics = Graphics.FromImage(bitmap)) { graphics.CopyFromScreen (new Point(left, top), Point.Empty, new Size(width, height)); } bitmap.Save(screenshotPath, ImageFormat.Jpeg); } }

    Read the article

  • Doctrine2 ArrayCollection

    - by boosis
    OK, I have a User entity as follows <?php class User { /** * @var integer * @Id * @Column(type="integer") * @GeneratedValue */ protected $id; /** * @var \Application\Entity\Url[] * @OneToMany(targetEntity="Url", mappedBy="user", cascade={"persist", "remove"}) */ protected $urls; public function __construct() { $this->urls = new \Doctrine\Common\Collections\ArrayCollection(); } public function addUrl($url) { // This is where I have a problem } } Now, what I want to do is check if the User already has the $url in the $urls ArrayCollection before persisting the $url. Now some of the examples I found say we should do something like if (!$this->getUrls()->contains($url)) { // add url } but this doesn't work, as this compares the element values. As the $url doesn't have an id value yet, this will always fail and $url will be duplicated. So I'd really appreciate it if someone could explain how I can add an element to the ArrayCollection without persisting it and avoiding the duplication? Edit I have managed to achieve this via $p = function ($key, $element) use ($url) { if ($element->getUrlHash() == $url->getUrlHash()) { return true; } else { return false; } }; But doesn't this still load all urls and then perform the check? I don't think this is efficient as there might be thousands of urls per user.
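
    For reference (an editorial sketch, not from the original post): the closure above is the same predicate that ArrayCollection::exists() runs, so addUrl() could be written roughly as below. Note this is an in-memory check, so it only guards against duplicates among URLs already loaded into the collection; for very large collections, a database-level unique index on the hash (or a targeted repository query) remains the safety net.

        public function addUrl(Url $url)
        {
            $hash = $url->getUrlHash();

            // In-memory duplicate check against whatever is in the collection.
            $alreadyThere = $this->urls->exists(function ($key, $element) use ($hash) {
                return $element->getUrlHash() === $hash;
            });

            if (!$alreadyThere) {
                $this->urls->add($url);
                $url->setUser($this);   // assumes the owning side exposes a setUser() setter
            }

            return $this;
        }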

    Read the article

  • Extract wrong data from a frame in C?

    - by ipkiss
    I am writing a program that reads data from the serial port on Linux. The data are sent by another device with the following frame format: |start | Command | Data | CRC | End | |0x02 | 0x41 | (0-127 octets) | | 0x03| ---------------------------------------------------- The Data field contains up to 127 octets as shown; octets 1-2 contain one type of data and octets 3-4 contain another, and I need to get these data. Because in C one byte can only hold one character, and the start field of the frame is 0x02, which means STX, which is 3 characters. So, in order to test my program, on the sender side I construct an array as the frame formatted above, like: char frame[254]; frame[0] = 0x02; // starting field frame[1] = 0x41; // command field which is character 'A' ..so on.. And then on the receiver side, I take out the fields like: char result[254]; // read data read(result); printf("command = %c", result[1]); // get the command field of the frame // get other field's values The command field value (result[1]) is not character 'A'. I think this is because the first field value of the frame is 0x02 (STX), occupying the first 3 places in the array frame and leading to the wrong results on the receiver side. How can I correct the issue, or am I doing something wrong on the sender side? Thanks all. related questions: http://stackoverflow.com/questions/2500567/parse-and-read-data-frame-in-c http://stackoverflow.com/questions/2531779/clear-data-at-serial-port-in-linux-in-c
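
    For reference (an editorial note, not from the original post): STX is a single byte whose value is 0x02 ("STX" is only its name in the ASCII table, not three stored characters), so frame[0] occupies exactly one array slot and the command byte really does sit at index 1. If result[1] is not 'A', a more likely cause is a partial read() (read can return fewer bytes than requested and should be looped until the full frame arrives) or the serial-port settings. A tiny sketch of the one-byte point:

        #include <stdio.h>

        int main(void)
        {
            unsigned char frame[4];
            frame[0] = 0x02; /* STX: one byte, value 2 */
            frame[1] = 0x41; /* command: 'A'           */
            frame[2] = 0x00; /* (data would follow)    */
            frame[3] = 0x03; /* ETX                    */

            printf("bytes used by frame[0]: %u\n", (unsigned)sizeof frame[0]); /* 1 */
            printf("command = %c\n", frame[1]);                                /* A */
            return 0;
        }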

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load at around 2.5 hours after the tests start; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under category ASP.NET Applications) suddenly spikes to values of 36-72 million! This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs that we give on the same application. In our testing environment we have 4 web servers, and interestingly enough we have found that this issue occurs only on the 8 core web servers. Summarizing ... Issue: The counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec. on 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However, the counter ISAPI Extension Requests/sec does not show any abnormal increase. The graph of this counter almost overlaps with that of the counter Requests/Sec until the time of the appearance of the spike. When the spike appears, this counter (ISAPI Extension Requests/sec) actually shows a drop. Test Settings: Performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load 5000 users. This load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). This issue is reproducible though it happens intermittently (i.e. it occurs in one out of every three or four runs). Test Environment: Web site deployed on 4 Web Servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones. .NET Framework 3.5 SP1 installed on all 4 web servers. Application hosted on IIS 6.0, running in worker process isolation mode.

    Read the article

  • SQLiteDataAdapter Fill exception C# ADO.NET

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception with the OleDbDataAdapter.Fill method and it's frustrating: An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints. Here is the source code: public void InsertData(String csvFileName, String tableName) { String dir = Path.GetDirectoryName(csvFileName); String name = Path.GetFileName(csvFileName); using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir + @";Extended Properties=""Text;HDR=No;FMT=Delimited""")) { conn.Open(); using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn)) { QuoteDataSet ds = new QuoteDataSet(); adapter.Fill(ds, tableName); // <-- Exception here InsertData(ds, tableName); // <-- Inserts the data into my SQLite db } } } class Program { static void Main(string[] args) { SQLiteDatabase target = new SQLiteDatabase(); string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv"; string tableName = "Tags"; target.InsertData(csvFileName, tableName); Console.ReadKey(); } } The "YahooTagsInfo.csv" file looks like this: tagId,tagName,description,colName,dataType,realTime 1,s,Symbol,symbol,VARCHAR,FALSE 2,c8,After Hours Change,afterhours,DOUBLE,TRUE 3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE 4,a,Ask,ask,DOUBLE,FALSE 5,a5,Ask Size,askSize,DOUBLE,FALSE 6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE 7,b,Bid,bid,DOUBLE,FALSE 8,b6,Bid Size,bidSize,DOUBLE,FALSE 9,b4,Book Value,bookValue,DOUBLE,FALSE I've tried the following: Removing the first line in the CSV file so it doesn't confuse it for real data. Changing the TRUE/FALSE realTime flag to 1/0. I've tried 1 and 2 together (i.e. removed the first line and changed the flag). None of these things helped... One constraint is that the tagId is supposed to be unique. Here is what the table looks like in design view: Can anybody help me figure out what the problem is here?
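
    For reference (an editorial sketch, not from the original post; it slots into the existing using block): the generic "Failed to enable constraints" message hides which rows are at fault. Filling with constraints disabled and then re-enabling them inside a try/catch makes the offending rows report themselves, which usually points straight at the column (here, most likely tagId) that ends up null or duplicated:

        QuoteDataSet ds = new QuoteDataSet();
        ds.EnforceConstraints = false;          // let Fill() load everything first
        adapter.Fill(ds, tableName);

        try
        {
            ds.EnforceConstraints = true;       // re-applying constraints flags the bad rows
        }
        catch (System.Data.ConstraintException)
        {
            foreach (System.Data.DataTable table in ds.Tables)
                foreach (System.Data.DataRow row in table.GetErrors())
                    Console.WriteLine("{0}: {1}", table.TableName, row.RowError);
        }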

    Read the article

  • iPhone contacts app styled indexed table view implementation

    - by KSH
    My Requirement: I have the straightforward requirement of listing names of people in alphabetical order in an indexed table view, with the index titles being the starting letters of the alphabet (plus a search icon at the top and # to group misc values which start with a number or other special characters). What I have done so far: 1. I am using Core Data for storage and "last_name" is modelled as a String property in the Contacts entity. 2. I am using an NSFetchedResultsController to display the sorted indexed table view. Issues accomplishing my requirement: 1. First up, I couldn't get the section index titles to be the first letters of the alphabet. Dave's suggestion in the following post helped me achieve that: http://stackoverflow.com/questions/1112521/nsfetchedresultscontroller-with-sections-created-by-first-letter-of-a-string The only issue I encountered with Dave's suggestion is that I couldn't get the misc names grouped under the "#" index. What I have tried: 1. I tried adding a custom compare method to NSString (as a category) to check how the comparison and sectioning are done, but that custom method doesn't get called when specified as the NSSortDescriptor selector. Here is some code: @interface NSString (SortString) -(NSComparisonResult) customCompare: (NSString*) aString; @end @implementation NSString (SortString) -(NSComparisonResult) customCompare:(NSString *)aString { NSLog(@"Custom compare called to compare : %@ and %@",self,aString); return [self caseInsensitiveCompare:aString]; } @end Code to fetch data: NSArray *sortDescriptors = [NSArray arrayWithObject:[[[NSSortDescriptor alloc] initWithKey:@"last_name" ascending:YES selector:@selector(customCompare:)] autorelease]]; [fetchRequest setSortDescriptors:sortDescriptors]; fetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:@"lastNameInitial" cacheName:@"MyCache"]; Can you let me know what I am missing and how the requirement can be accomplished?

    Read the article

  • I set up better-edit-in-place but still cannot edit in place in Rails

    - by Angela
    I installed the plugin better-edit-in-place (http://github.com/nakajima/better-edit-in-place) but I don't seem to be able to make it work. When I use Firebug, it is rendering the value to be edited correctly: <span rel="/emails/1" id="email_1_days" class="editable">7</span> And it is showing the full javascript which should work on the editable class: var Editable = Class.create({ initialize: function(element, options) { this.element = $(element); Object.extend(this, options); // Set default values for options this.editField = this.editField || {}; this.editField.type = this.editField.type || 'input'; this.onLoading = this.onLoading || Prototype.emptyFunction; this.onComplete = this.onComplete || Prototype.emptyFunction; this.field = this.parseField(); this.value = this.element.innerHTML; this.setupForm(); this.setupBehaviors(); }, // In order to parse the field correctly, it's necessary that the element you want to edit in place have an id of (model_name)_(id)_(field_name). // For example, if you want to edit the "caption" field in a "Photo" model, // your id should be something like "photo_#{@photo.id}_caption". // If you want to edit the "comment_body" field in a "MemberBlogPost" model, // it would be: "member_blog_post_#{@member_blog_post.id}_comment_body" parseField: function() { var matches = this.element.id.match(/(.*)_\d*_(.*)/); this.modelName = matches[1]; this.fieldName = matches[2]; if (this.editField.foreignKey) this.fieldName += '_id'; return this.modelName + '[' + this.fieldName + ']'; }, But when I point my mouse at the element, there is no in-place-editing action!

    Read the article

  • JSF ISO-8859-2 charset

    - by Vladimir
    Hi! I have a problem with setting the proper charset on my JSF pages. I use a MySQL db with the latin2 (ISO-8859-2) charset and latin2_croatian_ci collation. But I have problems with setting values on backing managed bean properties. The page directive at the top of my page is: <%@ page language="java" pageEncoding="ISO-8859-2" contentType="text/html; charset=ISO-8859-2" %> In the head I included: <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-2"> And my form tag is: <h:form id="entityDetails" acceptcharset="ISO-8859-2"> I've created and registered a Filter in web.xml with the following doFilter method implementation: public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { request.setCharacterEncoding("ISO-8859-2"); response.setCharacterEncoding("ISO-8859-2"); chain.doFilter(request, response); } But when I set a managed bean property through an inputText, for example, all special (Unicode) characters are replaced with the '?' character. I really don't have any other ideas for how to set the charset so the pages behave correctly. Any suggestions? Thanks in advance.

    Read the article

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this: "FieldName1", "FieldName2", "FieldName3", "FieldName4" "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373" "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543" ... Note that there are double quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via: import numpy as np data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True) all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me as I then have to go back and convert every column to its correct type When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start of line and end of line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
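
    For reference (an editorial sketch, not from the original post): since none of the fields contain commas, one workaround is to strip the quote characters before numpy ever sees the lines, for example by handing genfromtxt a small generator; dtype=None then infers numeric types for the numeric columns (on current numpy you may also want encoding=None so strings come back as str rather than bytes):

        import numpy as np

        def unquoted(path):
            # Yield each line with the double quotes removed; safe here because
            # no field in this file contains a comma or an embedded quote.
            with open(path) as f:
                for line in f:
                    yield line.replace('"', '')

        data = np.genfromtxt(unquoted(csvfile), dtype=None, delimiter=',',
                             names=True, autostrip=True)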

    Read the article

  • Persisting complex data between postbacks in ASP.NET MVC

    - by Robert Wagner
    I'm developing an ASP.NET MVC 2 application that connects to some services to do data retrieval and update. The services require that I provide the original entity along with the updated entity when updating data. This is so it can do change tracking and optimistic concurrency. The services cannot be changed. My problem is that I need to somehow store the original entity between postbacks. In WebForms, I would have used ViewState, but from what I have read, that is out for MVC. The original values do not have to be tamper proof as the services treat them as untrusted. The entities would be (max) 1k and it is an intranet app. The options I have come up with are: Session - Ruled out - Store the entity in the Session, but I don't like this idea as there are no plans to share session between URL - Ruled out - Data is too big HiddenField - Store the serialized entity in a hidden field, perhaps with encryption/encoding HiddenVersion - The entities have a (SQL) version field on them, which I could put into a hidden field. Then on a save I get the "original" entity from the services and compare the versions, doing my own optimistic concurrency. Cookies - Like 3 or 4, but using a cookie instead of a hidden field I'm leaning towards option 4, although 3 would be simpler. Are these valid options or am I going down the wrong track? Is there a better way of doing this?

    Read the article

  • Multiselect Form Field in PDF

    - by Jason R. Coombs
    Using PDF, is it possible to create a single form element with multiple fields of which several can be selected? For example, in HTML, one can create a set of checkboxes associated with the same field name: <div>Select one for Member of the School Board</div> <input type="checkbox" name="field(school)" value="vote1"> <span class="label">Libby T. Garvey</span><br/> <input type="checkbox" name="field(school)" value="vote2"> <span class="label">Emma N. Violand-Sanchez</span><br/> In this case, the field name is "field(school)", and when the form is submitted, "field(school)" can be supplied 0, 1, or 2 times. Is there an equivalent construct in PDF where a single field can have multiple values? So far in my investigation, it appears that if fields are assigned the same name, it is only possible to select one field. If it is possible to implement this in PDF, what is this construct called and how can it be implemented? Edit: To clarify, I am aware that a PDF can contain multiple form fields with different field names, and those can be selected independently, but then the grouping is implicit and not explicit as with the HTML form. I would like to use a construct that makes the grouping of options explicit, and preferably allows for restrictions (e.g. at least one required, no more than 2 allowed, etc).

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) that are of key-value-parameter type. My scenario is fine if I get records in the EDW as below: Key1 Key2 Code Value EffectiveDate EndDate CurrentFlag 100 555 01 AAA 2010-01-01 11.00.00 9999-12-31 Y 100 555 02 BBB 2010-01-01 11.00.00 9999-12-31 Y This needs to be loaded into the DM by pivoting it, as the key1 and key2 combination makes the natural key for the DM: SK NK 01 02 EffectiveDate EndDate CurrentFlag 1 100-555 AAA BBB 2010-01-01 11.00.00 9999-12-31 Y My SSIS package does all this pivoting fine: it looks up the incoming NK in the DIM and inserts if it is new; otherwise a further lookup on effective date determines whether the incoming row for the same natural key has any changed attribute, and if so it updates the current record by setting its end date and inserts a new one with the new attribute value, pulling the most recent record's values for the other attributes. My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup (on the natural key) will let both records pass and try to insert, where it fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again. So my question is: how can I configure the lookup (or use an alternative) to handle the scenario where the same NK comes twice in a single extract, so that the first record is inserted if it does not already exist in the Dim table and the second one is applied as an update against the record inserted above? Not sure this makes sense; I will attach the screenshot once I'm back at my desk (on Monday). Thanks

    Read the article

  • Idiomatic Scala way to deal with base vs derived class field names?

    - by Gregor Scheidt
    Consider the following base and derived classes in Scala: abstract class Base( val x : String ) final class Derived( x : String ) extends Base( "Base's " + x ) { override def toString = x } Here, the identifier 'x' of the Derived class parameter overrides the field of the Base class, so invoking toString like this: println( new Derived( "string" ).toString ) returns the Derived value and gives the result "string". So a reference to the 'x' parameter prompts the compiler to automatically generate a field on Derived, which is served up in the call to toString. This is very convenient usually, but leads to a replication of the field (I'm now storing the field on both Base and Derived), which may be undesirable. To avoid this replication, I can rename the Derived class parameter from 'x' to something else, like '_x': abstract class Base( val x : String ) final class Derived( _x : String ) extends Base( "Base's " + _x ) { override def toString = x } Now a call to toString returns "Base's string", which is what I want. Unfortunately, the code now looks somewhat ugly, and using named parameters to initialize the class also becomes less elegant: new Derived( _x = "string" ) There is also a risk of forgetting to give the derived classes' initialization parameters different names and inadvertently referring to the wrong field (undesirable since the Base class might actually hold a different value). Is there a better way? Edit: To clarify, I really only want the Base values; the Derived ones just seem necessary for initializing the Base ones. The example only references them to illustrate the ensuing issues. It might be nice to have a way to suppress automatic field generation if the derived class would otherwise end up hiding a base class field.
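
    For reference (an editorial sketch, not from the original question): one way to keep Base.x as the only stored field, keep a readable public parameter name, and keep named-argument construction is to hide the decorated constructor behind a factory in the companion object:

        abstract class Base(val x: String)

        final class Derived private (x0: String) extends Base("Base's " + x0) {
          override def toString = x        // Base's field; Derived adds no field of its own
        }

        object Derived {
          def apply(x: String): Derived = new Derived(x)
        }

        // Derived(x = "string").toString  ==> "Base's string"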

    Read the article
