Search Results

Search found 1806 results on 73 pages for 'lazy evaluation'.

Page 61/73 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • 1k of Program Space, 64 bytes of RAM. Is 1 wire communication possible?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds of difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, read the voltage of 1 or 2 wires, store it in RAM (or possibly EEPROM if things are that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's the communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that my current Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • Silverlight 3 data-binding child property doesn't update

    - by sonofpirate
    I have a Silverlight control that has my root ViewModel object as its data source. The ViewModel exposes a list of Cards as well as a SelectedCard property which is bound to a drop-down list at the top of the view. I then have a form of sorts at the bottom that displays the properties of the SelectedCard. My XAML appears as (reduced for simplicity): <StackPanel Orientation="Vertical"> <ComboBox DisplayMemberPath="Name" ItemsSource="{Binding Path=Cards}" SelectedItem="{Binding Path=SelectedCard, Mode=TwoWay}" /> <TextBlock Text="{Binding Path=SelectedCard.Name}" /> <ListBox DisplayMemberPath="Name" ItemsSource="{Binding Path=SelectedCard.PendingTransactions}" /> </StackPanel> I would expect the TextBlock and ListBox to update whenever I select a new item in the ComboBox, but this is not the case. I'm sure it has to do with the fact that the TextBlock and ListBox are actually bound to properties of the SelectedCard, so they are listening for property change notifications on that object. But I would have thought that data binding would be smart enough to recognize that the parent object in the binding expression had changed and update the entire binding. It bears noting that the PendingTransactions property (bound to the ListBox) is lazy-loaded. So, the first time I select an item in the ComboBox, I do make the async call, load the list, and the UI updates to display the information corresponding to the selected item. However, when I re-select an item, the UI doesn't change! For example, if my original list contains three cards, I select the first card by default. Data binding does attempt to access the PendingTransactions property on that Card object and updates the ListBox correctly. If I select the second card in the list, the same thing happens and I get the list of PendingTransactions for that card displayed. But if I select the first card again, nothing changes in my UI! Setting a breakpoint, I am able to confirm that the SelectedCard property is being updated correctly. How can I make this work?

    Read the article

  • Problem with evolutionary algorithms degrading into simulated annealing: mutation too small?

    - by Schnalle
    I have a problem understanding evolutionary algorithms. I've tried using this technique several times, but I always run into the same problem: degeneration into simulated annealing. Let's say my initial population, with fitness in brackets, is: A (7), B (9), C (14), D (19). After mating and mutation I have the following children: AB (8.3), AC (12.2), AD (14.1), BC (11), BD (14.7), CD (17). After elimination of the weakest, we get A, AB, B, AC. Next turn, AB will mate again with a result around 8, pushing AC out. Next turn, AB again, pushing B out (assuming mutation changes fitness mostly by about 1). Now, after only a few turns the pool is populated with the originally fittest candidates (A, B) and mutations of those two (AB). This happens regardless of the size of the initial pool; it just takes a bit longer. Say, with an initial population of 50 it takes 50 turns, then all others are eliminated, turning the whole setup into a more complicated simulated annealing. In the beginning I also mated candidates with themselves, worsening the problem. So, what am I missing? Are my mutation rates simply too small, and will the problem go away if I increase them? Here's the project I'm using it for: http://stefan.schallerl.com/simuan-grid-grad/ Yeah, the code is buggy and the interface sucks, but I'm too lazy to fix it right now - and be careful, it may lock up your browser. Better use Chrome, even though Firefox is not slower than Chrome for once (probably the tracing for the image comparison pays off, yay!). If anyone is interested, the code can be found here; in it I just dropped the EA idea and went for simulated annealing. PS: I'm not even sure about simulated annealing - it is like an evolutionary algorithm, just with a population size of one, right?
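    For illustration only, here is a minimal, self-contained Java sketch (not from the post; the one-gene "distance from 42" fitness and all names are invented) of one generation step that uses tournament selection plus a tunable mutation strength. The idea is that a mutation sigma large enough to move children out of their parents' neighbourhood, combined with selection that does not hand every slot to the current best, is what keeps the pool from collapsing onto A, B and AB as described above.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // Toy GA showing how selection pressure and mutation strength interact.
        // Fitness here is just "distance from 42" on a single double gene; lower is better.
        public class ToyEvolution {
            static final Random RNG = new Random();

            static double fitness(double gene) {
                return Math.abs(gene - 42.0);
            }

            // Tournament selection keeps some diversity: the best individual does not
            // automatically win every slot, so the pool is not flooded by one parent.
            static double tournamentSelect(List<Double> pool, int tournamentSize) {
                double best = pool.get(RNG.nextInt(pool.size()));
                for (int i = 1; i < tournamentSize; i++) {
                    double challenger = pool.get(RNG.nextInt(pool.size()));
                    if (fitness(challenger) < fitness(best)) best = challenger;
                }
                return best;
            }

            // One generation: select two parents, recombine, then mutate with sigma.
            // A larger sigma lets children escape the neighbourhood of their parents,
            // which is what keeps this from collapsing into plain hill climbing.
            static List<Double> nextGeneration(List<Double> pool, double sigma) {
                List<Double> next = new ArrayList<Double>();
                while (next.size() < pool.size()) {
                    double a = tournamentSelect(pool, 3);
                    double b = tournamentSelect(pool, 3);
                    double child = (a + b) / 2.0;           // crossover: average the genes
                    child += RNG.nextGaussian() * sigma;    // mutation
                    next.add(child);
                }
                return next;
            }

            public static void main(String[] args) {
                List<Double> pool = new ArrayList<Double>();
                for (int i = 0; i < 50; i++) pool.add(RNG.nextDouble() * 100.0);
                for (int gen = 0; gen < 100; gen++) pool = nextGeneration(pool, 2.0);
                System.out.println("best fitness: " + fitness(pool.stream()
                        .min((x, y) -> Double.compare(fitness(x), fitness(y))).get()));
            }
        }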

    Read the article

  • Does O2Micro Oz776 Smartcard reader support SLE5528 cards?

    - by Proton
    Well, the title makes it sound like I'm a lazy guy, but that's not the truth... I've been wrestling with this weird problem for a whole day. My laptop is a Dell Latitude D630, which has an Oz776 (USB idVendor == 0x0b97, idProduct == 0x7772) smartcard reader, but I'm not sure it is functioning well. It can successfully recognize my Gemplus GPK4000 smartcard and my SIM card, but not an SLE5528. This is my pcscd log when I insert the GPK4000:
        06039271 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000100 - 000000 62 00 00 00 00 00 14 01 00 00
        00967744 <- 000000 80 0A 00 00 00 00 14 00 00 00 3B 27 00 80 65 A2 0C 01 01 37
        00000048 ATR: 3B 27 00 80 65 A2 0C 01 01 37
        00000013 atrhandler.c:102:ATRDecodeAtr() Conv: 01, Y1: 02, K: 07
        00000011 atrhandler.c:120:ATRDecodeAtr() TA1: FFFFFFFF, TB1: 00, TC1: FFFFFFFF, TD1:FFFFFFFF
        00000011 atrhandler.c:248:ATRDecodeAtr() CurrentProtocol: 1, AvailableProtocols: 1
        00000062 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000014 Card ATR: 3B 27 00 80 65 A2 0C 01 01 37
        29016873 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00
    This is the log when I insert an SLE5528 card:
        99999999 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000048 - 000000 62 00 00 00 00 00 11 01 00 00
        ** Then it chokes here; when I remove the card, the log continues **
        04741980 <- 000000 80 00 00 00 00 00 11 42 FE 00
        00000044 commands.c:225:CmdPowerOn Card absent or mute
        00000017 ifdhandler.c:1096:IFDHPowerICC() PowerUp failed
        00000082 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000021 eventhandler.c:443:EHStatusHandlerThread() Error powering up card.
        00402818 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00
    I found that the SLE5528 is ISO 7816 compatible and should have an ATR, but it just chokes at the PowerUp. When I insert any PVC card with no chip, or an AT24C01 card, it doesn't choke but reports an immediate PowerUp failure. When I tried it on Windows (Windows 7, "runas other user", smartcard login), it chokes too, while PVC cards and AT24C01 report immediate failure.

    Read the article

  • C# DynamicMethod prelink

    - by soywiz
    I'm the author of a PSP emulator written in C#. I'm generating lots of DynamicMethods using ILGenerator: I convert assembly code into an AST and then generate IL code to build each DynamicMethod. I do this in another thread, so I can generate new methods while the program is executing others and it runs smoothly. My problem is that native code generation is lazy, so the machine code is generated when the function is called, not when the IL is generated. That means it happens on the thread executing the program, and native code generation is pretty slow, as is the asm -> AST -> IL step. I have tried the Marshal.Prelink method, which is supposed to generate the machine code before the function is executed. It works on Mono, but it doesn't work on MS .NET: Marshal.Prelink(MethodInfo); Is there a way of prelinking a DynamicMethod on MS .NET? I thought about adding a boolean parameter to the function that, if set, exits the function immediately so no code is actually executed; I could "prelink" that way, but that's a nasty solution I want to avoid. Any ideas?

    Read the article

  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing, in C#, what could basically be called my own interpretation of the NES hardware for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on screen. Now, if a game had more graphical data than these two banks could hold, it could write portions of the new information to these banks - overwriting what was there - at the end of a frame, and use it from the next frame onward. So, in old games, how did the programmers 'bank switch'? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level - which sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2: BG tiles get overwritten during pause and then restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I were lazy, I could just make one huge static bitmap and grab values that way, but I'm forcing myself to stick to these limits to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still boggles my mind how these programmers accomplished what they did with what they had. EDIT: The only solution I can think of is to hold separate tables that state which tiles should be in the PPU at what time (see the sketch below), but I worry that would be a huge memory cost that the NES couldn't have handled.
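    For illustration only, here is a tiny Java sketch (not from the post; all names are invented) of the kind of per-screen lookup table the EDIT describes: each screen simply records which CHR banks should be copied into the pattern tables when the player enters it, so the level data itself stays a few bytes per screen.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical per-screen metadata: which CHR banks to load when entering a screen.
        public class BankTable {
            // screenId -> { backgroundBank, spriteBank }
            private final Map<Integer, int[]> banksByScreen = new HashMap<Integer, int[]>();

            public void define(int screenId, int bgBank, int spriteBank) {
                banksByScreen.put(screenId, new int[] { bgBank, spriteBank });
            }

            // Called when the camera scrolls into a new screen; the PPU-emulation side
            // copies the named banks into its two 128x128 pattern tables.
            public void onEnterScreen(int screenId, Ppu ppu) {
                int[] banks = banksByScreen.get(screenId);
                if (banks == null) return;              // keep whatever is already loaded
                ppu.loadBackgroundBank(banks[0]);
                ppu.loadSpriteBank(banks[1]);
            }

            // Minimal stand-in for the emulated PPU.
            public interface Ppu {
                void loadBackgroundBank(int bank);
                void loadSpriteBank(int bank);
            }
        }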

    Read the article

  • Good guidelines for developing an ecommerce application

    - by kaciula
    I'm making an ecommerce application on Android and, as it is my first serious project, I'm trying to figure out the best approach beforehand. The application talks to a web service (the Magento API, meaning SOAP or XML-RPC, unfortunately) and gets all the content onto the phone (product categories, product details, user credentials, etc.). I think it should do lazy loading or something like that. So, I was thinking to keep the user credentials in a custom object kept in SharedPreferences so that every Activity can easily access it. I'll use a couple of ListViews to show the content and AsyncTask to fetch the data needed. Should I keep all the data in memory in objects, or should I use some sort of cache or a local database? Also, I'm planning to use a HashMap with SoftReferences to hold the bitmaps I am downloading (a sketch follows below) - but wouldn't this eat a lot of memory? How can all the activities have access to all these objects (ecommerce basket, etc.)? I'm thinking of passing them using Intents, but this doesn't seem right to me. Can SharedPreferences be used for a lot of objects, and are there any concurrency issues? Any pointers would be really appreciated. What are some good guidelines? What classes should I look into? Do you know of any resources on the Internet for me to check out?
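    As a rough sketch of the HashMap-plus-SoftReference idea mentioned above (names invented; assumes the standard android.graphics.Bitmap class from the Android SDK):

        import android.graphics.Bitmap;
        import java.lang.ref.SoftReference;
        import java.util.HashMap;
        import java.util.Map;

        // Minimal soft-reference image cache: the GC is free to reclaim bitmaps when
        // memory gets tight, so the map itself never holds more than the heap allows.
        public class ImageCache {
            private final Map<String, SoftReference<Bitmap>> cache =
                    new HashMap<String, SoftReference<Bitmap>>();

            public synchronized void put(String url, Bitmap bitmap) {
                cache.put(url, new SoftReference<Bitmap>(bitmap));
            }

            // Returns null if the bitmap was never cached or has been collected,
            // in which case the caller should re-download it (e.g. in an AsyncTask).
            public synchronized Bitmap get(String url) {
                SoftReference<Bitmap> ref = cache.get(url);
                if (ref == null) return null;
                Bitmap bitmap = ref.get();
                if (bitmap == null) cache.remove(url);   // reference was cleared; drop the entry
                return bitmap;
            }
        }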

    Read the article

  • JSON feed to Java Object

    - by mnml
    Hi, I would like to know if there is a web page or piece of software that can "translate" a JSON feed into a Java object with attributes. For example: { 'firstName': 'John', 'lastName': 'Smith', 'address': { 'streetAddress': '21 2nd Street', 'city': 'New York' } } would become: class Person { private String firstName; private String lastName; private Address address; public String getFirstName() { return firstName; } public String getLastName() { return lastName; } public Address getAddress() { return address; } public void setFirstName(String firstName) { this.firstName = firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public void setAddress(Address address) { this.address = address; } public String toString() { return String.format("firstName: %s, lastName: %s, address: [%s]", firstName, lastName, address); } } class Address { private String streetAddress; private String city; public String getStreetAddress() { return streetAddress; } public String getCity() { return city; } public void setStreetAddress(String streetAddress) { this.streetAddress = streetAddress; } public void setCity(String city) { this.city = city; } public String toString() { return String.format("streetAddress: %s, city: %s", streetAddress, city); } } I'm not asking because I'm lazy, but the JSON I would like to parse has quite a lot of attributes.
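    One common alternative to generating the classes is to let a binding library populate hand-written POJOs; a minimal sketch using Google's Gson (assuming the Gson jar is on the classpath and the Person/Address classes shown above):

        import com.google.gson.Gson;

        public class JsonDemo {
            public static void main(String[] args) {
                String json = "{ \"firstName\": \"John\", \"lastName\": \"Smith\","
                            + "  \"address\": { \"streetAddress\": \"21 2nd Street\", \"city\": \"New York\" } }";

                // Gson maps JSON keys onto fields of the same name, including nested objects.
                Person person = new Gson().fromJson(json, Person.class);
                System.out.println(person.getFirstName() + " lives in " + person.getAddress());
            }
        }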

    Read the article

  • Import CSV to MySQL

    - by 404error
    I have created a database and table, and I have also created all the fields I will be needing - 46 fields, including one that is the ID for the row. The CSV doesn't contain the ID field, nor does it contain headers for the columns. I am new to all of this but have been trying to figure it out. I'm not on here being lazy and asking for the answer, just looking for direction. I'm trying to figure out how to import the CSV but have it start importing data at the 2nd field, since I'm hoping auto_increment will fill in the ID field, which is the first field I created. I tried these instructions with no luck. Can anyone offer some insight?
    1. Your CSV file's column names must match your table column names.
    2. Browse to your required .csv file.
    3. Select CSV using LOAD DATA options.
    4. Check the box 'ON' for "Replace table data with file".
    5. In the "Fields terminated by" box, type ,
    6. In the "Fields enclosed by" box, type "
    7. In the "Fields escaped by" box, type \
    8. In the "Lines terminated by" box, type auto
    9. In the "Column names" box, type the column names separated by commas, like column1,column2,column3
    10. Check the box 'ON' for "Use LOCAL keyword".

    Read the article

  • How can I marshal JSON to/from a POJO for BlackBerry Java?

    - by sowbug
    I'm writing a RIM BlackBerry client app. BlackBerry uses a simplified version of Java (no generics, no annotations, limited collections support, etc.; roughly a Java 1.3 dialect). My client will be speaking JSON to a server. We have a bunch of JAXB-generated POJOs, but they're heavily annotated, and they use various classes that aren't available on this platform (ArrayList, BigDecimal, XMLGregorianCalendar). We also have the XSD used by the JAXB-XJC compiler to generate those source files. Being the lazy programmer that I am, I'd really rather not manually translate the existing source files to Java 1.3-compatible JSON-marshalling classes. I already tried JAXB 1.0.6 xjc. Unfortunately, it doesn't understand the XSD file well enough to emit proper classes. Do you know of a tool that will take JAXB 2.0 XSD files and emit Java 1.3 classes? And do you know of a JSON marshalling library that works with old Java? I think I am doomed because JSON arrived around 2006, and Java 5 was released in late 2004, meaning that people probably wouldn't be writing JSON-parsing code for old versions of Java. However, it seems that there must be good JSON libraries for J2ME, which is why I'm holding out hope.
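    If it helps, the json.org ME port (org.json.me) compiles against CLDC-era Java with no generics or annotations; below is a rough sketch of hand-rolled marshalling with it, using a hypothetical Person POJO with plain getters/setters. This assumes that library is available on your BlackBerry target and is only meant to illustrate the shape of the code, not your actual model.

        import org.json.me.JSONException;
        import org.json.me.JSONObject;

        // Hand-rolled marshalling: no annotations, no generics, just explicit field copies.
        public final class PersonJson {
            public static Person fromJson(String json) throws JSONException {
                JSONObject root = new JSONObject(json);
                Person person = new Person();
                person.setFirstName(root.getString("firstName"));
                person.setLastName(root.getString("lastName"));
                return person;
            }

            public static String toJson(Person person) throws JSONException {
                JSONObject root = new JSONObject();
                root.put("firstName", person.getFirstName());
                root.put("lastName", person.getLastName());
                return root.toString();
            }
        }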

    Read the article

  • Entity Framework in layered architecture

    - by Kamyar
    I am using a layered architecture with the Entity Framework. Here's what I came up with so far (all the projects except UI are class libraries): Entities: the POCO entities, completely persistence-ignorant, with no references to other projects; generated by Microsoft's ADO.NET POCO Entity Generator. DAL: the EDMX (entity model) file with the context class (T4-generated). References: Entities. BLL: business logic layer; will implement the repository pattern on this layer. References: Entities, DAL. This is where the ObjectContext gets populated: var ctx = new DAL.MyDBEntities(); UI: the presentation layer, an ASP.NET website. References: Entities, BLL + a connection string entry for the entities in the config file (question #2). Now my three questions: 1. Is my layer separation approach correct? 2. In my UI, I access the BLL as follows: var customerRep = new BLL.CustomerRepository(); var Customer = customerRep.GetByID(myCustomerID); The problem is that I have to define the entities connection string in my UI's web.config/app.config, otherwise I get a runtime exception. Does defining the entities connection string in the UI spoil the layer separation, or is it acceptable in a multi-layered architecture? 3. Should I take any additional steps to get change tracking, lazy loading, etc. (by etc. I mean the features that Entity Framework covers in a conventional, one-project, non-POCO code generation setup)? Thanks, and apologies for the lengthy question.

    Read the article

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a CodePlex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally. While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through it (which is when the IEnumerable is lazily loaded), the application hangs. Looking at IntelliTrace, I see a couple of "first chance" exceptions saying the "item doesn't exist at the specified version" - which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code: IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None, null, null, null, 5, true, false); List<ChangesetItem> returnList = new List<ChangesetItem>(); foreach (Changeset cs in items) //hangs here on first iteration { ChangesetItem newItem = new ChangesetItem() { ChangesetId = cs.ChangesetId, //ChangesetNote = cs.CheckinNote.Values[0].Value, Comment = cs.Comment, Committer = cs.Committer, CreationDate = cs.CreationDate }; returnList.Add(newItem); }

    Read the article

  • NHibernate collections: many-to-many relationships

    - by Brad Heller
    I've got two models, a Product model and a ShoppingCart model. The ShoppingCart model has a collection of Products as a property called Products (a List). Here is the mapping for my ShoppingCart model: <class name="MyProject.ShoppingCart, MyProject" table="ShoppingCarts"> <id name="Id" column="Id"> <generator class="native" /> </id> <many-to-one name="Company" class="MyProject.Company, MyProject" column="CompanyId" /> <property name="ExternalId" column="GUID" generated="insert" /> <property name="Name" column="Name" /> <property name="Total" column="Total" /> <property name="CreationDate" column="CreationDate" generated="insert" /> <property name="UpdatedDate" column="UpdatedDate" generated="always" /> <bag name="Products" table="ShoppingCartContents" lazy="false"> <key column="ShoppingCartId" /> <many-to-many column="ProductId" class="MyProjectMyProject.Product, MyProject" fetch="join" /> </bag> </class> When I try to save to the DB, the ShoppingCart is saved, but the mapping rows in ShoppingCartContents aren't saved, making me think there's an issue with the mapping. Where am I going wrong here?

    Read the article

  • Java singleton instantiation

    - by jurchiks
    I've found three ways of instantiating a singleton, but I have doubts as to whether any of them is the best there is. I'm using them in a multi-threaded environment and prefer lazy instantiation. Sample 1: private static final ClassName INSTANCE = new ClassName(); public static ClassName getInstance() { return INSTANCE; } Sample 2: private static class SingletonHolder { public static final ClassName INSTANCE = new ClassName(); } public static ClassName getInstance() { return SingletonHolder.INSTANCE; } Sample 3: private static ClassName INSTANCE; public static synchronized ClassName getInstance() { if (INSTANCE == null) INSTANCE = new ClassName(); return INSTANCE; } The project I'm working on at the moment uses Sample 2 everywhere, but I kind of like Sample 3 more. There is also the enum version (see the sketch below), but I just don't get it. The question here is: in which cases should or shouldn't I use each of these variations? I'm not looking for lengthy explanations, though (there are plenty of other topics about that, but they all eventually turn into arguments IMO); I'd like it to be understandable in few words.
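    Since the enum version comes up above, here is its minimal form (the widely cited Effective Java idiom): the JVM initializes the enum class on first use, exactly once and thread-safely, and serialization cannot create a second instance.

        public enum ClassName {
            INSTANCE;

            // Example state and behaviour; the JVM guarantees INSTANCE is created
            // exactly once, on first access, and class initialization is thread-safe.
            private int counter;

            public synchronized int next() {
                return ++counter;
            }
        }

        // usage: ClassName.INSTANCE.next();

    It behaves much like Sample 2 (lazy, thread-safe via class initialization) with the extra serialization guarantee; the main limitation is that an enum cannot extend another class.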

    Read the article

  • .NET Entity Framework & POCO ... querying full table problem

    - by Chris Klepeis
    I'm attempting to implement a repository pattern with my POCO objects auto-generated from my EDMX. In my repository class, I have: IObjectSet<E> _objectSet; private IObjectSet<E> objectSet { get { if (_objectSet == null) { _objectSet = this._context.CreateObjectSet<E>(); } return _objectSet; } } public IQueryable<E> GetQuery(Func<E, bool> where) { return objectSet.Where(where).AsQueryable<E>(); } public IList<E> SelectAll(Func<E, bool> where) { return GetQuery(where).ToList(); } where E is one of my POCO classes. When I trace the database and run this: IList<Contact> c = contactRepository.SelectAll(r => r.emailAddress == "[email protected]"); it shows up in the SQL trace as a select of everything in my Contact table. Where am I going wrong here? Is there a better way to do this? Does an ObjectSet not lazy load... so it omitted the where clause? This is the article I read which said to use ObjectSets, since with POCO I do not have EntityObjects to pass in as "E": http://devtalk.dk/CommentView,guid,b5d9cad2-e155-423b-b66f-7ec287c5cb06.aspx

    Read the article

  • Problems using (building?) native gem extensions on OS X

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1, and Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode; I built this Ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's nokogiri dissing me: > require 'rubygems' => true > require 'nokogiri' LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors, so I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of 'foo' with different width due to prototype." I suspect this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to be hitting it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

    Read the article

  • JoinColumn name not used in SQL

    - by Vladimir
    Hi! I have a problem mapping a many-to-one relationship without an exact foreign key constraint set in the database. I use the OpenJPA implementation with a MySQL database, but the problem is with the generated SQL for the insert and select statements. I have a LegalEntity table which contains a RootId column (among others). I also have an Address table which has a LegalEntityId column, which is not nullable and should contain values referencing LegalEntity's RootId column, but without any database constraint (foreign key) set. The Address entity is mapped: @Entity @Table(name="address") public class Address implements Serializable { ... @ManyToOne(fetch=FetchType.LAZY, optional=false) @JoinColumn(referencedColumnName="RootId", name="LegalEntityId", nullable=false, insertable=true, updatable=true, table="LegalEntity") public LegalEntity getLegalEntity() { return this.legalEntity; } } The SELECT statement (when fetching LegalEntity's addresses) and the INSERT statement are generated as: SELECT t0.Id, .., t0.LEGALENTITY_ID FROM address t0 WHERE t0.LEGALENTITY_ID = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (..., LEGALENTITY_ID) VALUES (..., ?) [params=..., (int) 2] If I omit the table attribute, the following statements are generated: SELECT t0.Id, ... FROM address t0 INNER JOIN legalentity t1 ON t0.LegalEntityId = t1.RootId WHERE t1.Id = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (...) VALUES (...) [params=...] So, LegalEntityId is not included in any of the statements. Is it possible to have a relationship based on such referencing (to a column other than the primary key, without a foreign key in the database)? Is there something else missing? Thanks in advance.
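    For comparison, here is a sketch of the more conventional shape of this mapping, with no table attribute on @JoinColumn (in JPA that attribute names a secondary table of the owning entity, not the referenced entity, which may be part of why the column handling looks odd here); whether OpenJPA then honours a non-primary-key referencedColumnName for inserts is a separate question, so treat this purely as an illustration.

        import java.io.Serializable;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.Table;

        @Entity
        @Table(name = "address")
        public class Address implements Serializable {

            private Long id;
            private LegalEntity legalEntity;

            @Id
            @GeneratedValue
            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }

            // Join on LegalEntity.RootId. Note there is no "table" attribute here: in JPA
            // that attribute names a secondary table of the owning entity (Address), not
            // the referenced entity, so pointing it at "LegalEntity" changes how the
            // provider resolves the join column.
            @ManyToOne(fetch = FetchType.LAZY, optional = false)
            @JoinColumn(name = "LegalEntityId", referencedColumnName = "RootId",
                        nullable = false, insertable = true, updatable = true)
            public LegalEntity getLegalEntity() { return this.legalEntity; }
            public void setLegalEntity(LegalEntity legalEntity) { this.legalEntity = legalEntity; }
        }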

    Read the article

  • Implementing a C/C++-style union as a column in MySQL

    - by user81338
    Friends, I have a strange need and cannot think my way through the problem, and the great and mighty Google is of little help due to keyword recycling (as you'll see). Can you help? What I want to do is store data of multiple types in a single column in MySQL. This is the database equivalent of a C union (and if you search for MySQL and union, you obviously get a whole bunch of stuff on the UNION keyword in SQL). [Contrived and simplified case follows] So, let us say that we have people - who have names - and STORMTROOPERS - who have TK numbers. You cannot have BOTH a name and a TK number. You're either BOB SMITH - or - TK409. In C I could express this as a union, like so: union { char * name; int tkNo; } EmperialPersonnelRecord; This makes it so that I am storing either a pointer to a char array or an ID in the type EmperialPersonnelRecord, but not both. I am looking for the MySQL equivalent for a column: my column would store either an int, a double, or a varchar(255) (or whatever combination), but would only take up the space of the largest element. Is this possible? (Of course anything is possible given enough time, money and will - I mean is it possible if I am poor, lazy and on a deadline... aka "out of the box".)

    Read the article

  • NHibernate - getting a single column from another table

    - by Muhammad Akhtar
    I have the following tables: Employee: ID, CompanyID, Name (CompanyID is a foreign key to the Company table); Company: CompanyID, Name. I want to map this to the following class: public class Employee { public virtual Int ID { get; set; } public virtual Int CompanyID { get; set; } public virtual string Name { get; set; } public virtual string CompanyName { get; set; } protected Employee() { } } Here is my XML mapping: <class name="Employee" table="Employee" lazy="true"> <id name="Id" type="Int32" column="Id"> <generator class="native" /> </id> <property name="CompanyID" column="CompanyID" type="Int32" not-null="false"/> <property name="Name" column="Name" type="String" length="100" not-null="false"/> What do I need to add to the XML mapping to get CompanyName in my result? Here is my code: public ArrayList getTest() { ISession session = NHibernateHelper.GetCurrentSession(); string query = "select Employee.*,(Company.Name)CompanyName from Employee inner join Employee on Employee.CompanyID = Company.CompanyID"; ArrayList document = (ArrayList)session.CreateSQLQuery(query, "Employee", typeof(Document)).List(); return document; } But in the returned result set, CompanyName is null while the other columns are fine. Note: in the DB, the tables don't have a physical relationship. Please suggest a solution. Thanks.

    Read the article

  • mmap only needed pages of kernel buffer to user space

    - by axeoth
    See also this answer: http://stackoverflow.com/a/10770582/1284631 I need something similar, but without having to allocate a buffer: the buffer is large in theory, but the user-space program only needs to access some parts of it (it mocks some registers of a hardware device). As I cannot allocate such a large buffer with vmalloc_user() (32-bit kernel, in an embedded environment, no swap...), I followed the same approach as in the quoted answer, trying to allocate only those pages that are really requested by user space. So: I use a my_mmap() function for the device file (actually, it is the .mmap field of a struct uio_info) to set up the fields of the vma; then, in the vm_area_struct's .fault handler (call it my_fault()), I should return a page. Except that in the my_fault() method of the vm_area_struct I cannot obtain a page through: vmf->page=vmalloc_to_page(my_buf + (vmf->pgoff << PAGE_SHIFT)); since there is no allocated buffer: my_buf = vmalloc_user(MY_BUF_SIZE); fails with "allocation failed: out of vmalloc space - use vmalloc= to increase size" (and there is no room or swap to increase that vmalloc= parameter). So, I would need to get a page from the kernel and fill in the vmf->page field. How do I allocate a page (I assume the offset of the page is known, as it is vm->pgoff)? What base memory should I use instead of my_buf? PS: I also set vma->flags |= VM_NORESERVE; (in my_mmap()), but I'm not sure it helps. Is there any vmalloc_user_unreserved()-like function (lazy allocation, let's say)? Also, writing 1 to /proc/sys/vm/overcommit_memory and large values (e.g. 500) to /proc/sys/vm/overcommit_ratio before trying my_buf=vmalloc_user(<large_size>) didn't work.

    Read the article

  • Is there any way to provide a custom factory for .NET Framework creation of entities from EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-development-oriented architecture and ended up with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this: NorthwindContext db = new NorthwindContext(); var products = db.Products.ToList(); EF creates the Product instances for me. Now I want to inject dependencies into my POCO objects (products). The only way I see is to add a method to NorthwindContext that does something like the pseudo-code below: public List<Product> GetProducts(){ var products = database.Products.ToList(); container.BuildUp(products); //inject dependencies return products; } But what if I want to make my repository more flexible, like this: public ObjectSet<Product> GetProducts() { ... } So, I really need a factory to make it lazier and more LINQ-friendly. Please help!

    Read the article

  • NHibernate query against the key field of a dictionary (map)

    - by Carl Raymond
    I have an object model where a Calendar object has an IDictionary<MembershipUser, Perms> called UserPermissions, where MembershipUser is an object and Perms is a simple enumeration. This is in the mapping file for Calendar as: <map name="UserPermissions" table="CalendarUserPermissions" lazy="true" cascade="all"> <key column="CalendarID"/> <index-many-to-many class="MembershipUser" column="UserGUID" /> <element column="Permissions" type="CalendarPermission" not-null="true" /> </map> Now I want to execute a query to find all calendars for which a given user has some permission defined. The permission itself is irrelevant; I just want a list of the calendars where a given user is present as a key in the UserPermissions dictionary. I have the username property, not a MembershipUser object. How do I build that using QBC (or HQL)? Here's what I've tried: ISession session = SessionManager.CurrentSession; ICriteria calCrit = session.CreateCriteria<Calendar>(); ICriteria userCrit = calCrit.CreateCriteria("UserPermissions.indices"); userCrit.Add(Expression.Eq("Username", username)); return calCrit.List<Calendar>(); This constructed invalid SQL - the WHERE clause contained WHERE membership1_.Username = @p0 as expected, but the FROM clause didn't include the MembershipUsers table. Also, I really had to struggle to learn about the .indices notation; I found it by digging through the NHibernate source code and saw that there's also .elements and some other dotted notations. Where is a reference to the allowed syntax of an association path? I feel like what's above is very close and just missing something simple.

    Read the article

  • How to increase PermGen memory for eclipselink StaticWeaveAntTask

    - by rayd09
    We are using EclipseLink and need to weave the code in order for lazy fetching to work properly. During the weave I'm getting the following error: weave: BUILD FAILED java.lang.OutOfMemoryError: PermGen space I have the following tasks within my Ant build file: <target name="define_weave_task" description="task definition for EclipseLink static weaving"> <taskdef name="eclipse_weave" classname="org.eclipse.persistence.tools.weaving.jpa.StaticWeaveAntTask"/> </target> <target name="weave" depends="compile,define_weave_task" description="weave eclipselink code into compiled classes"> <eclipse_weave source="${path.classes}" target="${path.classes}"> <classpath refid="compile.classpath"/> </eclipse_weave> </target> It had been working great for a long time; now that the amount of code to be woven has increased, I'm getting the PermGen error. I would like to be able to increase the amount of perm space. If I were compiling, I could bump the perm space via a compiler argument such as <compilerarg value="-XX:MaxPermSize=256M"/>, but this does not appear to be a valid argument for EclipseLink weaving. How can I increase the perm space for the weave?

    Read the article

  • What is the best practice for a one-to-many relationship in Hibernate?

    - by Patrick
    Hi all, I believe this is a common scenario. Say I have a one-to-many mapping in Hibernate: Category has many Items. Category: @OneToMany( cascade = {CascadeType.ALL},fetch = FetchType.LAZY) @JoinColumn(name="category_id") @Cascade( value = org.hibernate.annotations.CascadeType.DELETE_ORPHAN ) private List<Item> items; Item: @ManyToOne(targetEntity=Category.class,fetch=FetchType.EAGER) @JoinColumn(name="category_id",insertable=false,updatable=false) private Category category; All works fine. I use Category to fully control the Item life cycle. But when I write code to update a Category, I first get the Category out of the DB, then pass it to the UI; the user fills in altered values for the Category and passes it back. Here comes the problem: because I only pass around the Category information, not the Items, the Item collection will be empty, and when I call saveOrUpdate it will clean out all the associations. Any suggestion on how best to address this? I think the advantage of having Category control Item is that it's easy to maintain the order of the Items and avoid the confusion of a bidirectional mapping. But what about the situation where you just want to update the Category itself? Load it first and merge (see the sketch below)? Thank you.
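    One common workaround, sketched below on the assumption that the UI only sends back the Category's own editable fields (the getters and setters are invented for illustration): reload the persistent Category, whose Items collection is intact, and copy the edited values onto it instead of calling saveOrUpdate() on the detached, item-less object.

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        // Hypothetical service method: "edited" is the detached object coming back
        // from the UI with an empty items collection; "session" is the current session.
        public class CategoryService {
            public void updateCategory(Session session, Category edited) {
                Transaction tx = session.beginTransaction();
                try {
                    // Load the persistent instance; its items collection is intact.
                    Category managed = (Category) session.get(Category.class, edited.getId());
                    // Copy only the fields the UI is allowed to change.
                    managed.setName(edited.getName());
                    managed.setDescription(edited.getDescription());
                    // No saveOrUpdate(edited): flushing the managed instance keeps the items.
                    tx.commit();
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                }
            }
        }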

    Read the article

  • Object equality in context of hibernate / webapp

    - by bert
    How do you handle object equality for Java objects managed by Hibernate? In the 'Hibernate in Action' book they say that one should favor business keys over surrogate keys. Most of the time, I do not have a business key. Think of addresses mapped to a person: the addresses are kept in a Set and displayed in a Wicket RefreshingView (with a ReuseIfEquals strategy). I could either use the surrogate id or use all fields in the equals() and hashCode() functions. The problem is that those fields change during the lifetime of the object, either because the user entered some data or because the id changes due to JPA merge() being called inside the OSIV (Open Session in View) filter. My understanding of the equals() and hashCode() contract is that they should not change during the lifetime of an object. What I have tried so far: (1) equals() based on hashCode(), which uses the database id (or super.hashCode() if the id is null). Problem: new addresses start with a null id but get an id when attached to a person and that person gets merged (re-attached) in the OSIV filter. (2) Lazily compute the hashcode when hashCode() is first called and make that hashcode @Transitional. Does not work, as merge() returns a new object and the hashcode does not get copied over. What I would need, I think, is an ID that gets assigned during object creation. What would be my options here? I don't want to introduce an additional persistent property. Is there a way to explicitly tell JPA to assign an ID to an object? Regards
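    One compromise that is often used when there is no natural business key, sketched here on an invented Address entity (this is an illustration, not the book's recommendation): compare by database identifier once both instances have one, fall back to object identity otherwise, and freeze the hash code on first use so it never changes for a given instance.

        import java.io.Serializable;

        // Hypothetical Address entity: equality by database identifier once assigned,
        // identity otherwise; the hash code is frozen on first use so it never changes
        // for a given instance, even if the id is assigned later.
        public class Address implements Serializable {
            private Long id;                          // surrogate key, mapped by Hibernate
            private transient Integer cachedHash;     // not a persistent property

            public Long getId() { return id; }

            @Override
            public boolean equals(Object other) {
                if (this == other) return true;
                if (!(other instanceof Address)) return false;   // proxies extend Address, so this still works
                Address that = (Address) other;
                // Two copies of the same row (e.g. the detached one and the one merge() returned)
                // compare equal; two unsaved instances are equal only if they are the same object.
                return id != null && id.equals(that.getId());
            }

            @Override
            public int hashCode() {
                if (cachedHash == null) {
                    cachedHash = (id != null) ? id.hashCode() : System.identityHashCode(this);
                }
                return cachedHash;
            }
        }

    The known caveat is that an instance placed in a hash-based collection while still transient keeps its identity-based hash after being saved, so it should only enter such collections once it has an id, or stay in the same collection for its whole lifetime.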

    Read the article
