Search Results

Search found 7617 results on 305 pages for 'e m fields'.


  • How to program a neural network for chess?

    - by marco92w
    Hello! I want to program a chess engine which learns to make good moves and win against other players. I've already coded a representation of the chess board and a function which outputs all possible moves. So I only need an evaluation function which says how good a given situation of the board is. Therefore, I would like to use an artificial neural network which should then evaluate a given position. The output should be a numerical value. The higher the value, the better the position is for the white player. My approach is to build a network of 385 neurons: there are six unique chess pieces and 64 fields on the board. So for every field we take 6 neurons (1 for every piece). If there is a white piece, the input value is 1. If there is a black piece, the value is -1. And if there is no piece of that sort on that field, the value is 0. In addition to that, there should be 1 neuron for the player to move. If it is White's turn, the input value is 1, and if it's Black's turn, the value is -1. I think that configuration of the neural network is quite good. But the main part is missing: how can I implement this neural network in a coding language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning. Depending on the result of a match, the weights should then be adjusted. But how? I think I should let 2 computer players (both using my engine) play against each other. If White wins, Black gets the feedback that its weights aren't good. So it would be great if you could help me implement the neural network in a coding language (best would be Delphi, otherwise pseudo-code). Thanks in advance!
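
    For what it's worth, the input encoding described above is straightforward to sketch in code (C# here rather than Delphi, and the board representation is a made-up assumption, not something taken from the question):

```csharp
using System;

// A minimal sketch of the 385-input encoding described above.
// Assumption: squares[0..63] holds 0 for an empty field, 1..6 for a white piece
// type and -1..-6 for a black piece type.
static class PositionEncoder
{
    public static double[] Encode(int[] squares, bool whiteToMove)
    {
        var inputs = new double[385];              // 64 fields * 6 piece types + side to move
        for (int square = 0; square < 64; square++)
        {
            int piece = squares[square];
            if (piece == 0) continue;              // empty field: its six inputs stay 0
            int type = Math.Abs(piece) - 1;        // 0..5 = pawn, knight, bishop, rook, queen, king
            inputs[square * 6 + type] = piece > 0 ? 1.0 : -1.0;   // +1 white, -1 black
        }
        inputs[384] = whiteToMove ? 1.0 : -1.0;    // the extra "player to move" neuron
        return inputs;
    }
}
```

    The training half (adjusting the weights from game results via self-play) is the harder part; that feedback loop is essentially reinforcement learning, for which temporal-difference methods are the usual starting point.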

    Read the article

  • Why is Selenium RC so slow?

    - by Pete
    Hi. For some time I have been investigating Selenium RC in order to do functional testing of my web application. I have now found a test strategy that is so effective that I do not want to move away from Selenium RC (after spending weeks trying to figure out a good way to validate ASP.NET validation controls). But now that my Selenium RC adventure is moving from a POC to something that I actually use, I'm running into a problem. It is insanely slow. Executing a single test that loads a page, fills in some fields, and clicks a button takes on the order of seconds to execute. When it is executing, I can easily see each individual field being filled out one at a time. Using Selenium IDE in Firefox is not that slow. I found this page, which clearly states that Selenium RC is slow: http://selenium-grid.seleniumhq.org/how_it_works.html But why is that? Is it because the browser is polling the Selenium server? If so, can this polling interval not be modified? Or is there another reason? I am not accustomed to a remote call taking a humanly noticeable amount of time to execute. It is horrible that executing a few tests should take so long. I can execute my entire presentation (MVP), business, and database layer test suite (500+ tests) way quicker than it takes to run 10 tests for a single web page.
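
    For reference, a test of the kind described might look like the sketch below with the Selenium RC .NET client driver (host, port, URL and locators are made-up assumptions). One relevant detail: every command is its own HTTP round trip from the test to the Selenium server, which is part of why RC feels slower than the IDE.

```csharp
using Selenium;

// A sketch of a single RC test: open a page, fill two fields, submit.
class CustomerEditTest
{
    static void Main()
    {
        ISelenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://myapp.local/");
        selenium.Start();
        selenium.SetSpeed("0");                     // rule out any configured artificial delay
        selenium.Open("/customer/edit.aspx");
        selenium.Type("id=txtName", "Pete");        // each Type/Click is a round trip to the RC server
        selenium.Type("id=txtEmail", "pete@example.com");
        selenium.Click("id=btnSave");
        selenium.WaitForPageToLoad("30000");
        selenium.Stop();
    }
}
```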

    Read the article

  • Ext.data.JsonStore + Ext.DataView = not loading records

    - by Mulone
    Hi guys, I'm trying to make a DataView work (on Ext JS 2.3). Here is the jsonStore, which seems to be working (it calls the server and gets a valid response). Ext.onReady(function(){ var prefStore = new Ext.data.JsonStore({ autoLoad: true, //autoload the data url: 'getHighestUserPreferences', baseParams:{ userId: 'andreab', max: '50' }, root: 'preferences', fields: [ {name:'prefId', type: 'int'}, {name:'absInteractionScore', type:'float'} ] }); Then the xtemplate: var tpl = new Ext.XTemplate( '<tpl for=".">', '<div class="thumb-wrap" id="{name}">', '<div class="thumb"><img src="{url}" title="{name}"></div>', '<span class="x-editable">{shortName}</span></div>', '</tpl>', '<div class="x-clear"></div>' ); The panel: var panel = new Ext.Panel({ id:'geoPreferencesView', frame:true, width:600, autoHeight:true, collapsible:false, layout:'fit', title:'Geo Preferences', And the DataView items: new Ext.DataView({ store: prefStore, tpl: tpl, autoHeight:true, multiSelect: true, overClass:'x-view-over', itemSelector:'div.thumb-wrap', emptyText: 'No images to display' }) }); panel.render('extOutput'); }); What I get in the page is a blue frame with the title, but nothing in it. How can I debug this and see why it is not working? Cheers, Mulone

    Read the article

  • Dynamically filter a calendar view in SharePoint

    - by lerac
    Hello world. I'm using SharePoint WSS 3.0 and wondering if anybody knows how to filter a calendar view in SharePoint dynamically, based upon the current user. It's not the simple question of using [ME], because that will not work with single-line text fields. Also, users do not add the items; they are imported. So filtering on the basis of Created By, Modified By, Assigned To, or people-picker data is not an option. I already managed to get the current user into a jScript variable; now I would like to filter on it in the calendar view. I've been searching for a long time now, but can't seem to find anything. Although filtering with an All Items view is possible, I can't seem to find a way to dynamically filter a calendar view. Already tried: modifying the allitems.aspx view page in Designer, fooling around with CAML, jScript (as far as I know), calculated views, and Google (found more other cool things than a solution). Of course I don't expect a solution on a silver platter (of course one would be nice ;) ), but if somebody can point me in a direction I would already be quite happy.

    Read the article

  • Cannot roll back a transaction with Entity Framework

    - by Luca
    I have to do queries on uncommitted changes and I tried to use transactions, but I found that they do not work if there are exceptions. I made a simple example to reproduce the problem. I have a database with only one table called "Tabella", and the table has two fields: "ID" is an autogenerated integer, and "Valore" is an integer with a Unique constraint. Then I try to run this code: using (TransactionScope scope = new TransactionScope()) { Db1Container db1 = new Db1Container(); try { db1.AddToTabella(new Tabella() { Valore = 1 }); db1.SaveChanges(); } catch { } try { db1.AddToTabella(new Tabella() { Valore = 1 }); db1.SaveChanges(); //Unique constraint is violated here and an exception is thrown } catch { } try { db1.AddToTabella(new Tabella() { Valore = 2 }); db1.SaveChanges(); } catch { } //scope.Complete(); //NEVER called } //here everything should be rolled back Now if I look into the database it should contain no records, because the transaction should roll back; instead I find two records! One with Valore=1 and one with Valore=2. Am I missing something? It looks like the second call to the SaveChanges method rolls back its own changes and "deletes" the transaction, and then the third call to SaveChanges commits the changes of the first and the third insert (at this point it is as if the transaction did not exist). I also tried to use the SaveChanges(false) method (even without calling the AcceptAllChanges method), but with no success: I get the same behaviour. I do not want the transaction to be rolled back automatically by SaveChanges, because I want to correct the errors (for example by user interaction in the catch statement) and retry. Can someone help me with this? It seems like a "bug", and it is giving me a really big headache...
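
    Not an answer to why the rollback is not happening, but for comparison this is the shape the SaveChanges(false)/AcceptAllChanges pair mentioned above is designed for (a sketch only, reusing the question's types): the context writes to the database but keeps its changes marked as pending, and only accepts them once the ambient transaction is known to have completed, so the whole batch can be retried.

```csharp
using (TransactionScope scope = new TransactionScope())
{
    Db1Container db1 = new Db1Container();

    db1.AddToTabella(new Tabella() { Valore = 1 });
    db1.AddToTabella(new Tabella() { Valore = 2 });

    db1.SaveChanges(false);     // write to the database, but leave entity states as pending

    scope.Complete();           // commit the ambient transaction
    db1.AcceptAllChanges();     // only now reset the in-memory change tracking
}
```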

    Read the article

  • How to speed up WPF programs?

    - by Sam
    I love programming with and for Windows Presentation Foundation. Mostly I write browser-like apps using WPF and XAML. But what really annoys me is the slowness of WPF. A simple page with only a few controls loads fast enough, but as soon as a page is a teeny weeny bit more complex, like containing a lot of data entry fields, one or two tab controls, and stuff, it gets painful. Loading such a page can take more than one second. Seconds, indeed; especially on not-so-fast computers (read: the customers' computers) it can take ages. Same with changing values on the page. Everything about the WPF UI is somehow sluggish. This is so mean! They give me this beautiful framework, but make it so excruciatingly slow that I have to apologize to our customers all the time! My question: How do you speed up WPF? How do you profile bottlenecks? How do you deal with the slowness? Since this seems to be a universal problem with WPF, I'm looking for general advice, useful for many situations and problems. Some other related questions: "What tools do you use for WPF development", "Tools to develop WPF or Silverlight applications".
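
    On the profiling half of the question, a crude first step is simply to measure construction time versus first-layout time for a suspect page (a sketch; the page class is a hypothetical stand-in and assumes the usual XAML-generated partial):

```csharp
using System;
using System.Diagnostics;
using System.Windows.Controls;
using System.Windows.Threading;

public partial class CustomerPage : Page      // hypothetical page from the app
{
    public CustomerPage()
    {
        Stopwatch sw = Stopwatch.StartNew();
        InitializeComponent();                // XAML parsing + control construction
        Debug.WriteLine("Construct: " + sw.ElapsedMilliseconds + " ms");

        // Queue a low-priority callback so we also see when the first layout pass is done.
        Dispatcher.BeginInvoke(DispatcherPriority.Loaded, new Action(() =>
            Debug.WriteLine("First layout: " + sw.ElapsedMilliseconds + " ms")));
    }
}
```

    That at least separates "XAML and control construction is slow" from "layout/measure of many fields is slow", which tend to have different fixes.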

    Read the article

  • Editing a UITextField inside a UITableViewCell fails

    - by Stephen Darlington
    In my application I have a UITextField inside a UITableViewCell. If I click inside the text field and add some text, I find that if I try to move the insertion point it works the first time but fails on subsequent attempts. I am completely unable to move the selection; no "magnifying glass" appears. Even more curious, this "setting" seems to be permanent until I restart the application. And it affects all UITextFields on that screen, not just the one that I originally tried to edit. If you want to see it yourself, try the "UICatalog" sample that comes with the iPhone SDK. Click "text fields" and then "edit" and play around with the text boxes. I've done a lot of digging on this but it's pretty hard to Google for! The best references I've found are on Apple's support board and the MacRumors forum (both reference a solution that apparently used to work on iPhone 2.0 but does not work with contemporary versions -- I did try). My feeling is that this is a bug in the OS, but I thought I'd throw this out to the SO crowd for a second opinion and to see if there are any workarounds. Any ideas? Following benzado's suggestion, I tried building my application using the 2.0, 2.1 and 2.2 SDKs. I got the same behaviour in all versions. (Actually, something related but not the same broke in 2.2 but that's probably another question!)

    Read the article

  • Complex Types, ModelBinders and Interfaces

    - by Kieron
    Hi, I have a scenario where I need to bind to an interface - in order to create the correct type, I've got a custom model binder that knows how to create the correct concrete type (which can differ). However, the type created never has the fields correctly filled in. I know I'm missing something blindingly simple here, but can anyone tell me why, or at least what I need to do for the model binder to carry on its work and bind the properties? public class ProductModelBinder : DefaultModelBinder { override public object BindModel (ControllerContext controllerContext, ModelBindingContext bindingContext) { if (bindingContext.ModelType == typeof (IProduct)) { var content = GetProduct (bindingContext); return content; } var result = base.BindModel (controllerContext, bindingContext); return result; } IProduct GetProduct (ModelBindingContext bindingContext) { var idProvider = bindingContext.ValueProvider.GetValue ("Id"); var id = (Guid)idProvider.ConvertTo (typeof (Guid)); var repository = RepositoryFactory.GetRepository<IProductRepository> (); var product = repository.Get (id); return product; } } The Model in my case is a complex type that has an IProduct property, and it's those values I need filled in. Model: [ProductBinder] public class Edit : IProductModel { public Guid Id { get; set; } public byte[] Version { get; set; } public IProduct Product { get; set; } }
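
    One direction that may be worth trying (a sketch, not verified against the code above; it reuses the question's own GetProduct helper): returning the object from BindModel replaces the binding entirely, whereas overriding CreateModel hands the default binder a concrete instance and lets it carry on filling in the properties itself.

```csharp
using System;
using System.Web.Mvc;

// Only take over *creation* of the instance; DefaultModelBinder still copies the
// posted values onto the concrete object it gets back.
public class ProductModelBinder : DefaultModelBinder
{
    protected override object CreateModel(ControllerContext controllerContext,
        ModelBindingContext bindingContext, Type modelType)
    {
        if (modelType == typeof(IProduct))
            return GetProduct(bindingContext);   // concrete instance for the binder to fill in

        return base.CreateModel(controllerContext, bindingContext, modelType);
    }

    IProduct GetProduct(ModelBindingContext bindingContext)
    {
        var idProvider = bindingContext.ValueProvider.GetValue("Id");
        var id = (Guid)idProvider.ConvertTo(typeof(Guid));
        var repository = RepositoryFactory.GetRepository<IProductRepository>();
        return repository.Get(id);
    }
}
```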

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN query on a solr index. I've got two XML files that I have indexed, person.xml and subject.xml. Person: <doc> <field name="id">P39126</field> <field name="family">Smith</field> <field name="given">John</field> <field name="subject">S1276</field> <field name="subject">S1312</field> </doc> Subject: <doc> <field name="id">S1276</field> <field name="topic">Abnormalities, Human</field> </doc> I need to only display information from the person doc, but each query should match fields in both person and subject. In the case where the query matches only the subject doc, I need to display all person docs that have a matching id. Is this possible to do without running two separate queries? Something like a JOIN query would do the job. Any help?

    Read the article

  • Delphi ADO (mdb) update records

    - by ml
    I'm trying to copy data from one master table and 2 more child tables. When I select one record in the master table, I copy all the fields from that table to the other (Table1 copies the selected record from the ADOQuery): procedure TForm1.copyButton7Click(Sender: TObject); SQL.Clear; SQL.Add('SELECT * from ADOQuery'); SQL.Add('Where numeracao LIKE ''%'+NInterv.text);// locate record selected in Table1 NInterv.text) Open; // initiate copy of records begin while not tableADoquery.Eof do begin Table1.Last; Table1.Append;// how to append if necessary! Table1.Edit; Table1.FieldByName('C').Value := ADoquery.FieldByName('C').Value; Table1.FieldByName('client').Value := ADoquery.FieldByName('client').Value; Table1.FieldByName('Cnpj_cpf').Value := ADoquery.FieldByName('Cnpj_cpf').Value; table1.Post; table2.next;/// end; end; //How can I update the TableChield, TableChield1 fields at the same time? Do the same for the child tables: TableChield <= TableChield_1 TableChield1 <= TableChield_2 Thanks

    Read the article

  • HTML text input and using the input as a variable in a script(tcl)/sql(sqlite)

    - by Fantastic Fourier
    Hello all, I'm very VERY new at this whole web thing. And I'm just very confused in general. Basically, what I want to do is take text input via HTML and add that input to the database table trans. Should be simple, but I am lost. <li>Transaction Number</li> <li><input type=|text| name=|tnumber| </li> // do I need to use value? <li>Employee Name</li> <li><input type=|text| name=|ename| </li> <li><input type=|SUBMIT| value=|Add|></li> ...... ...... sqlite3 db $::env(ROOT)/database.db mb eval {INSERT INTO trans VALUES ($tnumber, $ename} mb close They are both in the same file, and the database has only two fields to keep things simple. What I can see here is that tnumber and ename aren't declared as variables. So how do I do that, so that the text input is assigned to the respective variables?

    Read the article

  • Delphi: RTTI and TObjectList<TObject>

    - by conciliator
    Based on one answer to an earlier post, I'm investigating the possibility of the following design TChildClass = class(TObject) private FField1: string; FField2: string; end; TMyClass = class(TObject) private FField1: TChildClass; FField2: TObjectList<TChildClass>; end; Now, in the real world, TMyClass will have 10 different lists like this, so I would like to be able to address these lists using RTTI. However, I'm not interested in the other fields of this class, so I need to check if a certain field is some sort of TObjectList. This is what I've got so far: procedure InitializeClass(RContext: TRttiContext; AObject: TObject); var ROwnerType: TRttiType; RObjListType: TRttiType; RField: TRttiField; SchInf: TSchemaInfoDetail; begin ROwnerType := RContext.GetType(AObject.ClassInfo); RObjListType := RContext.GetType(TObjectList<TObject>); for RField in ROwnerType.GetFields do begin // How do I check if the type of TMyClass.FField2 (which is TObjectList<TChildClass>) is some sort of TObjectList? end; Clearly, RField.FieldType <> RObjListType.FieldType. However, they do have some relation, don't they? It seems horrible (and wrong!) to make a very elaborate check for common functionality in order to make it highly probable that RField.FieldType is in fact a TObjectList. To be honest, I am quite uncomfortable with generics, so the question might be very naïve. However, I'm more than happy to learn. Is the above solution possible to implement? TIA!

    Read the article

  • What's the "best" database for embedded?

    - by mawg
    I'm an embedded guy, not a database guy. I've been asked to redesign an existing system which has bottlenecks in several places. The embedded device is based around an ARM9 processor running at 220 MHz. There should be a database of 50k entries (may increase to 250k), each with 1k of data (max 8 fields). That's approximate - I can try to get more precise figures if necessary. They are currently using SQLite 2 and planning to move to SQLite 3. Without starting a flame war - I am a complete d/b newbie just seeking advice - is that the "best" decision? I realize that this might be a "how long is a piece of string?" question, but any pointers would be greatly welcomed. I don't mind doing a lot of reading & research, but I just hoped that you could get me off to a flying start. Thanks. P.S. Again, this is a total rewrite; we might not even stick with embedded Linux but switch to eCos, so don't worry too much about a one-time conversion between d/b formats. Oh, and accesses should be infrequent, at most one every few seconds. Edit: OK, it seems they have 30k entries (may reach 100k or more) of only 5 or 6 fields each, but at least 3 of them can be a search key for a record. They are toying with "having no d/b at all, since the data are so simple", but it seems to me that with multiple keys, we couldn't use fancy stuff like a quicksort()-type search (recursive, binary search). Any thoughts on "no d/b", just data-structures? Btw, one key is 800k - not sure how well SQLite handles that (maybe with "no d/b" I have to hash that 800k to something smaller?)
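
    To make the "no d/b, just data-structures" option concrete, a rough sketch (shown in C# for brevity; the same shape works in C with hash tables, and all field names here are made-up assumptions): each searchable key simply gets its own dictionary acting as an index, which sidesteps the sort/binary-search machinery mentioned above at the cost of some RAM per index.

```csharp
using System.Collections.Generic;

class Entry
{
    public string Key1;     // e.g. the long 800-byte key (or a hash of it)
    public int Key2;
    public string Key3;
    public byte[] Payload;  // the ~1 KB of record data
}

class EntryStore
{
    readonly List<Entry> entries = new List<Entry>();
    readonly Dictionary<string, Entry> byKey1 = new Dictionary<string, Entry>();
    readonly Dictionary<int, List<Entry>> byKey2 = new Dictionary<int, List<Entry>>();

    public void Add(Entry e)
    {
        entries.Add(e);
        byKey1[e.Key1] = e;                   // unique key: O(1) lookup, no sorting needed
        List<Entry> list;
        if (!byKey2.TryGetValue(e.Key2, out list))
        {
            list = new List<Entry>();
            byKey2[e.Key2] = list;
        }
        list.Add(e);                          // non-unique key: one-to-many index
    }

    public Entry FindByKey1(string key)
    {
        Entry e;
        return byKey1.TryGetValue(key, out e) ? e : null;
    }
}
```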

    Read the article

  • Entity Framework: insert with one-to-one reference

    - by bomortensen
    Hi 'overflow! I'm having a bit of trouble inserting into an MSSQL database using Entity Framework. There are two tables that I want to insert into, where one of table 1's fields is a foreign key in table 2. This is the code I have so far: Media media = null; foreach(POI p in poiList) { media = new Media() { Path = p.ImagePath, Title = p.Title }; if (media != null && !context.Media.Any(me => me.Title == p.ImageTitle)) { context.AddToMedia(media); context.SaveChanges(); } PointOfInterest poi = new PointOfInterest() { Altitude = 2000.0, ID = p.ID, Latitude = p.Latitude, Longitude = p.Longitude, LatitudeRoute = p.LatitudeRoute, LongitudeRoute = p.LongitudeRoute, Description = p.Description, Title = p.Title, DefaultImageID = media.ID, }; context.AddToPointOfInterest(poi); } context.SaveChanges(); The following gives me this error: "An object with the same key already exists in the ObjectStateManager". I'm still learning how to use the Entity Framework, so I don't even know if this would be the right approach to insert into two referenced tables. Can anyone enlighten me on this? :) Any help would be greatly appreciated! Thanks!
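
    A sketch of one fetch-or-create variant of the loop above (untested, reusing the names from the question's own code): the POI then always points at a Media row that is already persisted, instead of a second, un-saved copy being added to the context.

```csharp
// (FirstOrDefault needs a "using System.Linq;" directive at the top of the file.)
foreach (POI p in poiList)
{
    // Reuse an existing Media row with this title, otherwise create and save a new one.
    Media media = context.Media.FirstOrDefault(me => me.Title == p.ImageTitle);
    if (media == null)
    {
        media = new Media() { Path = p.ImagePath, Title = p.Title };
        context.AddToMedia(media);
        context.SaveChanges();          // media.ID is populated after this call
    }

    PointOfInterest poi = new PointOfInterest()
    {
        ID = p.ID,
        Title = p.Title,
        DefaultImageID = media.ID,      // now always refers to a persisted Media row
        // ... remaining properties exactly as in the original code ...
    };
    context.AddToPointOfInterest(poi);
}
context.SaveChanges();
```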

    Read the article

  • Sending data through a POST request from one node.js server to another

    - by Masiar
    I'm trying to send data through a POST request from a node.js server to another node.js server. What I do in the "client" node.js is the following: var options = { host: 'my.url', port: 80, path: '/login', method: 'POST' }; var req = http.request(options, function(res){ console.log('status: ' + res.statusCode); console.log('headers: ' + JSON.stringify(res.headers)); res.setEncoding('utf8'); res.on('data', function(chunk){ console.log("body: " + chunk); }); }); req.on('error', function(e) { console.log('problem with request: ' + e.message); }); // write data to request body req.write('data\n'); req.write('data\n'); req.end(); This chunk is taken more or less from the node.js website so it should be correct. The only thing I don't see is how to include username and password in the options variable to actually login. This is how I deal with the data in the server node.js (I use express): app.post('/login', function(req, res){ var user = {}; user.username = req.body.username; user.password = req.body.password; ... }); How can I add those username and password fields to the options variable to have it logged in? Thanks

    Read the article

  • ASP.NET MVC - using a model property as a form, how can I post to an action?

    - by Ryan Peters
    Consider the following model: public class BandProfileModel { public BandModel Band { get; set; } public IEnumerable<Relationship> Requests { get; set; } } and the following form: <% using (Html.BeginForm()) { %> <%: Html.EditorFor(m => m.Band) %> <input type="submit" value="Save Band" /> <% } %> which posts to the following action: public ActionResult EditPost(BandProfileModel m, string band) { // stuff is done here, but m is null? return View(m); } Basically, I only have one property on my model that is used in the form. The other property in BandProfileModel is just used in the UI for other data. I'm trying to update just the Band property, but for each post, the argument "m" is always null (specifically, the .Band property is null). It's posting just fine to the action, so it isn't a problem with my route. Just the data is null. The ID and name attributes of the fields are BAND_whatever and Band.whatever (whatever being a property of Band), so it seems like it would work... What am I doing wrong? How can I use just one property as part of a form, post back, and have values populated via the model binder for my BandProfileModel property in the action? Thanks.
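
    One thing that may be worth trying (a sketch, not verified against the routes above): since Html.EditorFor(m => m.Band) renders inputs named "Band.Whatever", the action can bind that nested object directly by telling the binder which prefix to use via [Bind]. Parameter names below differ from the original action to avoid a clash with the posted field names.

```csharp
// Inside the controller: bind only the Band.* fields that the form actually posts.
public ActionResult EditPost([Bind(Prefix = "Band")] BandModel band)
{
    // update the band here ...
    var model = new BandProfileModel { Band = band };
    return View(model);
}
```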

    Read the article

  • Using ContentProviderOperation to update and insert contacts

    - by Bogus
    Hello, I am facing a problem updating/inserting contacts on Android 2.0+. There is no problem inserting a new contact when the phone book is empty, but when I did it a 2nd time some fields like TEL and EMAIL were doubled and tripled etc., while N, FN, ORG were OK (one copy). After getting advice from another member of this forum I updated the contact first, and the ContentProviderResult[] returned URIs with null; then I did an insert and it went OK, but after that I made an update and all contacts were aggregated into one - I got 1 contact instead of the 3 which existed in the phone book. This one was damaged; the contact fields were randomly built. I have a Google account set up. Code: ArrayList<ContentProviderOperation> ops = new ArrayList<ContentProviderOperation>(); ops.add(ContentProviderOperation.newUpdate(ContactsContract.RawContacts.CONTENT_URI) .withValue(RawContacts.AGGREGATION_MODE, RawContacts.AGGREGATION_MODE_DISABLED) .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, accountType) .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, accountName) .build()); // add name ContentProviderOperation.Builder builder = ContentProviderOperation.newUpdate(ContactsContract.Data.CONTENT_URI); builder.withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, 0); builder.withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.StructuredName.CONTENT_ITEM_TYPE); builder.withValue(ContactsContract.CommonDataKinds.StructuredName.PHONETIC_FAMILY_NAME, name); // phones ContentProviderOperation.Builder builder = ContentProviderOperation.newUpdate(ContactsContract.Data.CONTENT_URI); builder.withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, 0); builder.withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE); builder.withValue(ContactsContract.CommonDataKinds.Phone.NUMBER, phoneValue); builder.withValue(ContactsContract.CommonDataKinds.Phone.TYPE, phoneType); builder.withValue(ContactsContract.CommonDataKinds.Phone.LABEL, phoneLabel); ops.add(builder.build()); // emails ... // orgs ... try { ContentProviderResult[] result = mContentResolver.applyBatch(ContactsContract.AUTHORITY, ops); } } catch (Exception e) { Log.e(LOG_TAG, "Exception while contact updating: " + e.getMessage()); } What is wrong with this solution? How does the aggregation engine work? I would be glad for any help. Bogus

    Read the article

  • Parts of a contact are repeated after each write of the same contact (Android 2.0+)

    - by Bogus
    Hello, I ran into this problem when writing contacts via the API for Android 2.0 or greater. Each time I write the same contact which already exists in my account (Google account), some parts of the contact are aggregated OK but others are not. For example, fields like FN, N, ORG, TITLE are always in one copy, but TEL, EMAIL, ADR are added again, so after writing the same contact a 2nd time I have 2 copies of the same TEL or EMAIL. How do I force the API to not repeat existing data? Code: ArrayList ops = new ArrayList(); ops.add(ContentProviderOperation.newInsert(ContactsContract.RawContacts.CONTENT_URI) .withValue(ContactsContract.RawContacts.ACCOUNT_TYPE, accountType) .withValue(ContactsContract.RawContacts.ACCOUNT_NAME, accountName) .build()); ... // adding phone number ContentProviderOperation.Builder builder = ContentProviderOperation.newInsert(ContactsContract.Data.CONTENT_URI); builder.withValueBackReference(ContactsContract.Data.RAW_CONTACT_ID, 0); builder.withValue(ContactsContract.Data.MIMETYPE, ContactsContract.CommonDataKinds.Phone.CONTENT_ITEM_TYPE); builder.withValue(ContactsContract.CommonDataKinds.Phone.NUMBER, phoneValue); builder.withValue(ContactsContract.CommonDataKinds.Phone.TYPE, phoneType); // work/home builder.withValue(ContactsContract.CommonDataKinds.Phone.LABEL, phoneLabel); ops.add(builder.build()); ... try { contentResolver.applyBatch(ContactsContract.AUTHORITY, ops); } catch (Exception e) { // } I tried adding AGGREGATION_MODE set to AGGREGATION_MODE_DISABLED, but it changed nothing. I will be glad for any hint in this case. BR, Bogus

    Read the article

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other things. I wrote a function to remove these things; the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast. Update Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
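
    Since the import utility is a C# app, the pre-cleaning step mentioned in the update can be done before the data ever reaches SQL; a minimal sketch:

```csharp
using System.Text;

static class PhoneNormalizer
{
    // Keep only the digits, then parse into a 64-bit value so the import can do a
    // plain BIGINT-to-BIGINT comparison.
    public static long? Normalize(string raw)
    {
        if (string.IsNullOrEmpty(raw))
            return null;

        StringBuilder sb = new StringBuilder(raw.Length);
        foreach (char c in raw)
            if (char.IsDigit(c))
                sb.Append(c);

        long value;
        return long.TryParse(sb.ToString(), out value) ? (long?)value : null;
    }
}
```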

    Read the article

  • Can I autogenerate/compile code on-the-fly, at runtime, based upon values (like key/value pairs) parsed out of a configuration file?

    - by Kumba
    This might be a doozy for some. I'm not sure if it's even 100% implementable, but I wanted to throw the idea out there to see if I'm really off of my rocker yet. I have a set of classes that mimics enums (see my other questions for specific details/examples). For 90% of my project, I can compile everything in at design time. But the remaining 10% is going to need to be editable w/o re-compiling the project in VS 2010. This remaining 10% will be based on a templated version of my Enums class, but will generate code at runtime, based upon data values sourced in from external configuration files. To keep this question small, see this SO question for an idea of what my Enums class looks like. The templated fields, per that question, will be the MaxEnums Int32, Names String() array, and Values array, plus each shared implementation of the Enums sub-class (which themselves, represent the Enums that I use elsewhere in my code). I'd ideally like to parse values from a simple text file (INI-style) of key/value pairs: [Section1] Enum1=enum_one Enum2=enum_two Enum3=enum_three So that the following code would be generated (and compiled) at runtime (comments/supporting code stripped to reduce question size): Friend Shared ReadOnly MaxEnums As Int32 = 3 Private Shared ReadOnly _Names As String() = New String() _ {"enum_one", "enum_two", "enum_three"} Friend Shared ReadOnly Enum1 As New Enums(_Names(0), 1) Friend Shared ReadOnly Enum2 As New Enums(_Names(1), 2) Friend Shared ReadOnly Enum3 As New Enums(_Names(2), 4) Friend Shared ReadOnly Values As Enums() = New Enums() _ {Enum1, Enum2, Enum3} I'm certain this would need to be generated in MSIL code, and I know from reading that the two components to look at are CodeDom and Reflection.Emit, but I was wondering if anyone had working examples (or pointers to working examples) versus really long articles. I'm a hands-on learner, so I have to have example code to play with. Thanks!
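
    Since the question asks for hands-on examples, here is a small CodeDom sketch of the compile-at-runtime step (shown in C# rather than VB - VBCodeProvider works the same way - and the generated source is a hard-coded stand-in for the Enums class the question describes):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class RuntimeCompileDemo
{
    static void Main()
    {
        // In the real project this source text would be built from the parsed
        // INI-style key/value pairs.
        string source = @"
            public static class GeneratedEnums
            {
                public static readonly string[] Names = { ""enum_one"", ""enum_two"", ""enum_three"" };
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);

        if (results.Errors.HasErrors)
            throw new InvalidOperationException(results.Errors[0].ToString());

        // Pull the generated values back out of the freshly compiled assembly via reflection.
        Type t = results.CompiledAssembly.GetType("GeneratedEnums");
        string[] names = (string[])t.GetField("Names").GetValue(null);
        Console.WriteLine(string.Join(", ", names));
    }
}
```

    Reflection.Emit skips the source-text step and emits IL directly, which is faster to compile but considerably more work to write and maintain; CodeDom is usually the easier starting point for config-driven code generation like this.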

    Read the article

  • Versioning-friendly, extensible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizeable data structure to disk. Being an optimist, I thought there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements: (1) .NET 2.0 support, preferably with a FOSS implementation; (2) version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields); (3) ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results); (4) space and time efficient (XML has been excluded as an option given this requirement). Options considered so far: Protocol Buffers: turned down based on the documentation's verdict about Large Data Sets, since that comment suggests adding another layer on top; this would call for additional complexity which I wish to have handled by the file format itself. HDF5, EXI: do not seem to have .NET implementations. SQLite: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use. BSON: does not appear to support requirement (3). Fast Infoset: only seems to have commercial .NET implementations. Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
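
    To make requirement (2) concrete, a tiny hand-rolled illustration (not a recommendation over the formats listed above): each record carries a version byte and its own length, so an old reader can recognise a newer record and simply skip the trailing bytes it does not understand. Field names are hypothetical.

```csharp
using System.IO;

static class RecordIo
{
    public static void WriteRecord(BinaryWriter w, int id, string name, double addedInV2)
    {
        w.Write((byte)2);                    // record format version
        long lengthPos = w.BaseStream.Position;
        w.Write(0);                          // placeholder for the payload length

        long start = w.BaseStream.Position;
        w.Write(id);
        w.Write(name);
        w.Write(addedInV2);                  // field a version-1 reader knows nothing about

        long end = w.BaseStream.Position;
        w.BaseStream.Position = lengthPos;
        w.Write((int)(end - start));         // back-patch the real length
        w.BaseStream.Position = end;
    }
}
```

    A version-1 reader reads the version byte and the length, consumes the fields it knows, and seeks past the rest; that is essentially the trick the listed formats automate for you.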

    Read the article

  • YUI DataTable with JSON and client side filtering Data Error

    - by user316574
    Hi, I don't understand what I'm doing wrong here! I keep getting a Data Error. But I validated the JSON and it's OK... Here is the JavaScript from the YUI DataTable example (slightly modified). <div class="markup"> <label for="filter">Filter by state:</label> <input type="text" id="filter" value=""> <div id="tbl"></div> -- YAHOO.util.Event.addListener(window, "load", function() { //var Ex = YAHOO.namespace('example'); var dataSource = new YAHOO.util.DataSource("jsondb/json_meta_proxy.html",{ responseType : YAHOO.util.DataSource.TYPE_JSON, responseSchema : { resultsList: "records", fields: [ {key:"idprojet"}, {key:"nomprojet"} ], metaFields: { totalRecords: "totalRecords" } }, doBeforeCallback : function (req,raw,res,cb) { // This is the filter function var data = res.results || [], filtered = [], i,l; if (req) { req = req.toLowerCase(); for (i = 0, l = data.length; i and here is the JSON data in the file "jsondb/json_meta_proxy.html" { "recordsReturned": 1, "totalRecords": 1, "startIndex": 0, "sort": "idprojet", "dir": "asc", "records": [ { "idprojet": "11256", "nomprojet": "" } ] } Many thanks for your help!

    Read the article

  • SubSonic 3 ignoring columns in Select()

    - by jessegavin
    I have a table like so.. CREATE TABLE [dbo].[Locations_Hours]( [LocationID] [int] NOT NULL, [sun_open] [nvarchar](10) NULL, [sun_close] [nvarchar](10) NULL, [mon_open] [nvarchar](10) NULL, [mon_close] [nvarchar](10) NULL, [tue_open] [nvarchar](10) NULL, [tue_close] [nvarchar](10) NULL, [wed_open] [nvarchar](10) NULL, [wed_close] [nvarchar](10) NULL, [thu_open] [nvarchar](10) NULL, [thu_close] [nvarchar](10) NULL, [fri_open] [nvarchar](10) NULL, [fri_close] [nvarchar](10) NULL, [sat_open] [nvarchar](10) NULL, [sat_close] [nvarchar](10) NULL, [StoreNumber] [int] NULL, [LocationHourID] [int] IDENTITY(1,1) NOT NULL, CONSTRAINT [PK_Locations_Hours] PRIMARY KEY CLUSTERED ( [LocationHourID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] And SubSonic 3 is generating a class with the following properties int LocationID string monopen string monclose string tueopen string tueclose string wedopen string wedclose string thuopen string thuclose string friopen string friclose string satopen string satclose string sunopen string sunclose int? StoreNumber int LocationHourID When I try to perform a query against this class like so.. var result = DB.LocationHours.Where(o => o.LocationID == _locationId); This is the resulting SQL query that SubSonic generates. SELECT [t0].[LocationHourID], [t0].[LocationID], [t0].[StoreNumber] FROM [dbo].[Locations_Hours] AS t0 WHERE ([t0].[LocationID] = 4019) I cannot figure out why SubSonic is omitting the nvarchar fields when it generates the SELECT statement. Anyone got any ideas?

    Read the article

  • Distinguishing between .NET exception types

    - by Swingline Rage
    For the love of all things holy, how do you distinguish between different "exception flavors" within the predefined .NET exception classes? For example, a piece of code might throw an XmlException under the following conditions: the root element of the document is null, invalid chars are in the document, or the document is too long. All of these are thrown as XmlException objects, and all of the internal "tell me more about this exception" fields (such as Exception.HResult, Exception.Data, etc.) are usually empty or null. That leaves Exception.Message as the only thing that allows you to distinguish among these exception types, and you can't really depend on it because, you guessed it, the Exception.Message string is localized, and can change when the culture changes. At least that's my read on the documentation. Exception.HResult and Exception.Data are widely ignored across the .NET libraries. They are the red-headed stepchildren of the world's .NET error-handling code. And even assuming they weren't, the HRESULT type is still the worst, downright nastiest error code in the history of error codes. Why we are still looking at HRESULTs in 2010 is beyond me. I mean if you're doing Interop or P/Invoke that's one thing but... HRESULTs have no place in System.Exception. HRESULTs are a wart on the proboscis of System.Exception. But seriously, it means I have to set up a lot of detailed, specific error-handling code in order to figure out the same information that should have been passed as part of the exception data. Exceptions are useless if they force you to work like this. What am I doing wrong?

    Read the article

  • ProgrammingError when aggregating over an annotated & grouped Django ORM query

    - by ento
    I'm trying to construct a query to get the "average, maximum, minimum number of items purchased by a single user". The data source is this simple sales record table: class SalesRecord(models.Model): id = models.IntegerField(primary_key=True) user_id = models.IntegerField() product_code = models.CharField() price = models.IntegerField() created_at = models.DateTimeField() A new record is inserted into this table for every item purchased by a user. Here's my attempt at building the query: q = SalesRecord.objects.all() q = q.values('user_id').annotate( # group by user and count the # of records count=Count('id'), # (= # of items) ).order_by() result = q.aggregate(Max('count'), Min('count'), Avg('count')) When I try to execute the code, a ProgrammingError is raised at the last line: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FROM (SELECT sales_records.user_id AS user_id, COUNT(sales_records.`' at line 1") Django's error screen shows that the SQL is SELECT FROM (SELECT `sales_records`.`player_id` AS `player_id`, COUNT(`sales_records`.`id`) AS `count` FROM `sales_records` WHERE (`sales_records`.`created_at` >= %s AND `sales_records`.`created_at` <= %s ) GROUP BY `sales_records`.`player_id` ORDER BY NULL) subquery It's not selecting anything! Can someone please show me the right way to do this? Hacking Django I've found that clearing the cache of selected fields in django.db.models.sql.BaseQuery.get_aggregation() seems to solve the problem. Though I'm not really sure this is a fix or a workaround. @@ -327,10 +327,13 @@ # Remove any aggregates marked for reduction from the subquery # and move them to the outer AggregateQuery. + self._aggregate_select_cache = None + self.aggregate_select_mask = None for alias, aggregate in self.aggregate_select.items(): if aggregate.is_summary: query.aggregate_select[alias] = aggregate - del obj.aggregate_select[alias] + if alias in obj.aggregate_select: + del obj.aggregate_select[alias] ... yields result: {'count__max': 267, 'count__avg': 26.2563, 'count__min': 1}

    Read the article
