Search Results

Search found 1162 results on 47 pages for 'ugettext lazy'.

Page 38 of 47

  • partial entity loading and management in silverlight / wcf ria

    - by Dan Wray
    I have a Silverlight 4 app which pulls entities down from a database using WCF RIA Services. These data objects are fairly simple, just a few fields, but one of those fields contains binary data of arbitrary size. The application needs access to this data basically as soon as a user has logged in, to display in a list, enable selection, etc. My problem is that because of the size of this data, the load times are not acceptable and can approach the default timeout of the RIA service. I'd like to somehow partially load the objects into my local data context, so that I have the IDs, names, etc. but not the binary data. I could then at a later point (i.e. when it's actually needed) populate the binary fields of those objects I need to display. Any suggestions on how to accomplish this would be welcome. Another approach which occurred to me whilst writing this question (how often does that happen?!) is that I could move the binary data into a separate database table joined to the original record 1:1, which would allow me to make use of RIA's lazy loading on that binary data. Again, comments welcome! Thanks.
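    A minimal sketch of the partial-load idea, assuming an Entity Framework-backed domain service (all names here are hypothetical): expose a query that projects onto a lightweight presentation entity without the binary column, then fetch the binary field in a separate query only when a record is actually displayed.

        // Hypothetical lightweight entity: everything except the binary payload.
        // [Key] comes from System.ComponentModel.DataAnnotations.
        public class DocumentInfo
        {
            [Key]
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Domain service query that never touches the binary column.
        public IQueryable<DocumentInfo> GetDocumentInfos()
        {
            return this.ObjectContext.Documents
                .Select(d => new DocumentInfo { Id = d.Id, Name = d.Name });
        }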


  • responseText is undefined when returning variable

    - by George
    Not sure if the problem is related to Ajax or something silly about JavaScript that I'm overlooking in general, but I have the following script, where fox.html is just plain text that reads, "The quick brown fox jumped over the lazy dog.":

        function loadXMLDoc() {
            if (window.XMLHttpRequest) {
                // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else {
                // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.open("GET", "fox.html", true);
            xmlhttp.send();
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    fox = xmlhttp.responseText;
                    alert(fox);
                }
            }
        }
        onload = loadXMLDoc;

    The above script alerts the contents of fox.html onload just fine. However, if I change the script so that

        { fox = xmlhttp.responseText; alert(fox); }

    becomes

        { fox = xmlhttp.responseText; return fox; }

    and alert(loadXMLDoc()); onload, I get 'undefined'. I'm wondering why this is so.


  • Setting spring bean property value using ref-bean

    - by Apache Fan
    Hi, I am trying to set a property value using Spring:

        <bean id="velocityPropsBean" class="com.test.CustomProperties"
              abstract="false" singleton="true" lazy-init="false"
              autowire="default" dependency-check="default">
            <property name="properties">
                <props>
                    <prop key="resource.loader">file</prop>
                    <prop key="file.resource.loader.cache">true</prop>
                    <prop key="file.resource.loader.class">org.apache.velocity.runtime.resource.loader.FileResourceLoader</prop>
                    <prop key="file.resource.loader.path">NEED TO INSERT VALUE AT STARTUP</prop>
                </props>
            </property>
        </bean>

        <bean id="velocityResourcePath" class="java.lang.String"
              factory-bean="velocityHelper" factory-method="getLoaderPath"/>

    What I need to do is insert the result of getLoaderPath into file.resource.loader.path. The value of getLoaderPath changes, so it has to be loaded at server startup. Any thoughts on how I can insert the velocityResourcePath value into the property?


  • Why does delete-orphan need "cascade all" to work in JPA/Hibernate?

    - by Jerome C.
    Hello, I'm trying to map a one-to-many relation with cascade "remove" (JPA) and "delete-orphan" (Hibernate), because I don't want children to be saved or persisted when the parent is saved or persisted (security reasons due to client-to-server transfer (GWT, Gilead)). But this configuration doesn't work; when I try with cascade "all", it does. Why does the delete-orphan option need cascade "all" to work? Here is the code (without the id or other fields, for simplicity; the ForumThread class defines a simple many-to-one property without cascade). When using the removeThread function in a transactional method, it does not work, but if I change CascadeType.REMOVE to CascadeType.ALL, it does.

        @Entity
        public class Forum {

            private List<ForumThread> threads;

            /**
             * @return the threads
             */
            @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE, fetch = FetchType.LAZY)
            @Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
            public List<ForumThread> getThreads() {
                return threads;
            }

            /**
             * @param threads the threads to set
             */
            public void setThreads(List<ForumThread> threads) {
                this.threads = threads;
            }

            public void addThread(ForumThread thread) {
                getThreads().add(thread);
                thread.setParent(this);
            }

            public void removeThread(ForumThread thread) {
                getThreads().remove(thread);
            }
        }

    Thanks.


  • Is functional GUI programming possible?

    - by eman
    I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back when I have to). The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this: current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks. My question is, is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity; I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)


  • Web API Getting HTTP 500 Error: Issue Solved, See Below

    - by Joe Grasso
    Here is my MVC controller, and everything is fine:

        private UnitOfWork UOW;

        public InventoryController()
        {
            UOW = new UnitOfWork();
        }

        // GET: /Inventory/
        public ActionResult Index()
        {
            var products = UOW.ProductRepository.GetAll().ToList();
            return View(products);
        }

    The same method call in an API controller gives me an HTTP 500 error:

        private UnitOfWork _unitOfWork;

        public TestController()
        {
            _unitOfWork = new UnitOfWork();
        }

        public IEnumerable<Product> Get()
        {
            var products = _unitOfWork.ProductRepository.GetAll().ToList();
            return products;
        }

    Debugging shows that there is indeed data being returned by both controllers' UOW calls. I then added a custom configuration in Global:

        public static void CustomizeConfig(HttpConfiguration config)
        {
            config.Formatters.Remove(config.Formatters.XmlFormatter);
            var json = config.Formatters.JsonFormatter;
            json.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
        }

    I am still receiving an HTTP 500 in the API controller ONLY, and I am at a loss as to why. Any ideas? UPDATE: It appears that using lazy loading caused the problem. When I set the associated properties to NON-VIRTUAL, the test API produced the expected JSON string. However, whereas before I had the Vendor class included, I now only have VendorId. I really wanted to include the associated classes. Any ideas? I know there are a lot of smart people out there. Anyone?
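    A sketch of one common way to keep the associations in the JSON while avoiding lazy-loading proxies (assuming Entity Framework; the Context accessor on the unit of work is hypothetical): turn proxy creation off for the read and eager-load the association instead.

        public IEnumerable<Product> Get()
        {
            var ctx = _unitOfWork.Context;               // hypothetical: expose the DbContext
            ctx.Configuration.ProxyCreationEnabled = false;
            ctx.Configuration.LazyLoadingEnabled = false;

            // Eager-load Vendor so the serialized output still contains it.
            return ctx.Products.Include("Vendor").ToList();
        }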


  • [hibernate - jpa] @CollectionOfElements without creating the dedicated table

    - by blow
    Hi all. I have this Municipality class:

        @Entity
        public class Municipality implements Serializable {

            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            private Long id;

            private String country;
            private String province;
            private String name;

            @Column(name = "cod_catasto")
            private String codCatastale;

            private String cap;

            @CollectionOfElements
            private List<Address> addressList;

            public Municipality() {
            }
            ...

    And this Address class:

        @Embeddable
        public class Address implements Serializable {

            @ManyToOne(fetch = FetchType.LAZY)
            @Cascade(CascadeType.SAVE_UPDATE)
            private Municipality municipality;

            @Column(length = 45)
            private String address;

            public Address() {
            }
            ...

    Address is embedded in another class, Person. When I save an instance of Person, Hibernate creates 3 tables: PERSON, MUNICIPALITY and MUNICIPALITY_ADDRESSLIST. MUNICIPALITY_ADDRESSLIST contains 2 fields: MUNICIPALITY_ID (FK) and STREET. I don't want this table; I only want the ID of table MUNICIPALITY in table PERSON (which embeds Address). What should I do? I tried to add @JoinTable in the Municipality entity like this:

        @CollectionOfElements
        @JoinTable(name = "person")
        private List<Address> addressList;

    It partially worked, but I can't choose the name of the column in table PERSON that holds the ID of table MUNICIPALITY; it is, by Hibernate's choice, simply "MUNICIPALITY_ID"... Thanks.


  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for a TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds' difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this part for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented under such constraints? TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.


  • Aggregate Pattern and Performance Issues

    - by Mosh
    Hello, I have read about the Aggregate Pattern but I'm confused about something here. The pattern states that all the objects belonging to the aggregate should be accessed via the Aggregate Root, and not directly. And I'm assuming that is the reason why they say you should have a single Repository per Aggregate. But I think this adds a noticeable overhead to the application. For example, in a typical Web-based application, what if I want to get an object belonging to an aggregate (which is NOT the aggregate root)? I'll have to call Repository.GetAggregateRootObject(), which loads the aggregate root and all its child objects, and then iterate through the child objects to find the one I'm looking for. In other words, I'm loading lots of data and throwing them out except the particular object I'm looking for. Is there something I'm missing here? PS: I know some of you may suggest that we can improve performance with Lazy Loading. But that's not what I'm asking here... The aggregate pattern requires that all objects belonging to the aggregate be loaded together, so we can enforce business rules.
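    For concreteness, a minimal sketch of the access pattern being described (all names invented): children are reachable only through the root, which is why the naive approach loads the whole aggregate just to find one of them.

        using System.Collections.Generic;

        public class OrderLine
        {
            public int Id { get; set; }
        }

        // Aggregate root: the only entry point to its children.
        public class Order
        {
            private readonly List<OrderLine> _lines = new List<OrderLine>();

            public IEnumerable<OrderLine> Lines
            {
                get { return _lines; }
            }

            // Finding one child still means the repository has already
            // loaded the root and every line.
            public OrderLine FindLine(int lineId)
            {
                return _lines.Find(l => l.Id == lineId);
            }
        }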


  • Subsonic, child records, and collections

    - by Dane
    Hi, I've been working with SubSonic for a few weeks now, and it is working really well. However, I've just run into an issue with child objects that have additional partial-class properties. Some of it is probably me just not understanding the .NET object lifecycle. I have an object, search. This has a few properties like permissions and stuff, and it links to a child table called search_options. In my ASP.NET app, it loops through these search options and creates controls. Then on postback, it grabs the values and assigns them back to a "Value" property on the search_option. This Value property is a simple string that's defined in a partial class. I then want to create a method on the search object, called PerformSearch, which loops through the child search_options and performs a custom query based on the "Value" property. However, even though I assign the "Value" property on the child search_option, when I access it later via the search.search_options collection, it is null. I'm guessing that because it's accessed in two different places, it performs another lazy load from the DB and the value is lost? Is there a way to tell the class that it's already loaded, or a way to access it so it's not reloaded from the DB? Code is below (rough pseudocode, not the full version).

    ASP.NET page, loading back the values from postback:

        dim obj_search as search = new subsonic.query.select().......' retrieves the search object

        for each opt as search_option in obj_search.search_options
            opt.Value = CType(FindControl("search_option_" + opt.search_option_id), TextBox).Text
            debug.print(opt.Value) ' value is correct
        next

        for each opt as search_option in obj_search.search_options
            debug.print(opt.Value) ' this is nothing
        next

    Now, the partial class:

        public partial class search_option
            private m_value as string

            public property Value() as string
                get
                    return m_value
                end get
                set(byval value as string)
                    m_value = value
                end set
            end property
        end class


  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.


  • Understanding Domain Driven Design

    - by Nihilist
    Hi, I have been trying to understand DDD for a few weeks now. It's very confusing. I don't understand how to organise my projects. I have lots of questions on UnitOfWork, Repository, Associations, and the list goes on... Let's take a simple example: Album and Tracks.

        Album: AlbumId, Name, list of Tracks
        Track: TrackId, Name

    Question 1: Should I expose Tracks as an IList/IEnumerable property on Album? If so, how do I add to an album? Or should I expose a ReadOnlyCollection of Tracks and expose an AddTrack method? (A sketch of this option follows below.)

    Question 2: How do I load Tracks for an Album [assuming lazy loading]? Should the getter check for null and then use a repository to load the tracks if need be?

    Question 3: How do we organise the assemblies? Like, what does each assembly have? Model.dll: does it only have the domain entities? Where do the repositories go, interfaces and implementations both? Can I define IAlbumRepository in Model.dll? Infrastructure.dll: what should this have?

    Question 4: Where is the unit of work defined? How do the repository and unit of work communicate (or should they)? For example, if I need to add multiple tracks to an album, should that again be defined as AddTrack on Album, or should there be a method in the repository? Regardless of where the method is, how do I implement the unit of work here?

    Question 5: Should the UI use Infrastructure.dll, or should there be a ServiceLayer?

    Do my questions make sense? Regards
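    As a point of reference for Question 1, a minimal sketch of the encapsulated-collection option (a sketch only, not the one true answer):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Track
        {
            public int TrackId { get; set; }
            public string Name { get; set; }
        }

        public class Album
        {
            private readonly List<Track> _tracks = new List<Track>();

            // Read-only view: callers cannot bypass the aggregate's rules.
            public ReadOnlyCollection<Track> Tracks
            {
                get { return _tracks.AsReadOnly(); }
            }

            // All mutation goes through the root, so invariants live here.
            public void AddTrack(Track track)
            {
                _tracks.Add(track);
            }
        }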


  • .NET Entity ObjectContext thread error

    - by Chris Klepeis
    I have an n-layered ASP.NET application which returns objects from my DAL to the BAL like so:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            var query = from SourceKey in _dataContext.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query.AsEnumerable();
        }

    This result passes through my business layer and hits the UI layer to be displayed to the end users. I do not lazy load, to prevent query execution in other layers of my application. I created another function in my DAL to delete objects:

        public void Delete(SourceKey sk)
        {
            try
            {
                _dataContext.DeleteObject(sk);
                _dataContext.SaveChanges();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message + " " + ex.StackTrace + " " + ex.InnerException);
            }
        }

    When I try to call "Delete" after calling the "Get" function, I receive this error:

        New transaction is not allowed because there are other threads running in the session

    This is an ASP.NET app. My DAL contains an entity data model. The class in which I have the above functions shares the same _dataContext, which is instantiated in my constructor. My guess is that the reader is still open from the "Get" function and was not closed. How can I close it?
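    Worth noting: AsEnumerable() only changes the compile-time type; it does not execute the query, so the data reader is opened wherever the result is first enumerated, possibly in another layer. A sketch of one common fix (assuming materializing in the DAL is acceptable) is to run the query inside Get():

        public IList<SourceKey> Get(SourceKey sk)
        {
            IQueryable<SourceKey> query = _dataContext.SourceKeys;

            if (sk.sourceKey1 != null)
            {
                query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
            }

            // ToList() executes the query now and closes the reader, so a later
            // Delete() on the same context does not collide with it.
            return query.ToList();
        }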


  • tips for fixing bad coding/dev habits?

    - by dfafa
    I want to become a better coder, so I have decided to sign up for a computing science program; maybe a formal education can assist me. I started working on smaller projects to learn, but currently I have really bad coding/dev habits which are hindering my productivity as the codebase increases. I have highlighted them below, and perhaps someone could make suggestions (or redirect me to resources) for a more efficient approach. Most stuff that I made in the past were web apps.

    - I usually develop with PuTTY + nano... I just love the minimalist feel.
    - I use WinSCP and develop directly on my private web server... too lazy to do it on localhost and upload it later.
    - I don't use version control... which one do I need? Sometimes Ctrl+Z doesn't work well.
    - When I run out of ideas for naming variables, I use swear words instead.
    - I swear a lot when I get stuck... how do I deal with the anger issue?
    - My code looks ugly, with comments everywhere.
    - I would rather use procedural coding; I find "thinking" in OO difficult and time consuming.
    - I "write first, think later", and refactor code only if I am getting paid for it.
    - I dislike configuring Linux distros, Apache, MySQL, scaling, designing graphics and layouts.
    - I do not like writing tests.
    - I like working alone and do not like sharing code.
    - I have an econ degree.
    - I dislike reading other people's code; I would rather write it on my own.

    It seems my only true desire is to translate my ideas into a working prototype as fast as possible... it seems like I am very uninterested in the other details. Could it be that I am not cut out to be a coder after all? Is going back to study comp sci a bad idea?


  • Mac vs. Windows Browser Font Height Rendering Issue

    - by cdmckay
    I'm using a custom font and the @font-face rule. On Windows, everything looks great, regardless of whether it's Firefox, Chrome, or IE. On Mac, it's a different story. For some reason, the Mac font renderer thinks the font is a lot shorter than it is. For example, consider this test code (live example here):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
            <title>Webble</title>
            <style type="text/css">
                @font-face {
                    font-family: "Bubbleboy 2";
                    src: url("bubbleboy-2.ttf") format('truetype');
                }
                body {
                    font-family: "Bubbleboy 2";
                    font-size: 30px;
                }
                div {
                    background-color: maroon;
                    color: yellow;
                    height: 100px;
                    line-height: 100px;
                }
            </style>
        </head>
        <body>
            <div>The quick brown fox jumped over the lazy dog.</div>
        </body>
        </html>

    Open it on Windows Firefox and on Mac Firefox, then use your mouse to select the text. On Windows, you'll notice it fully selects the font. On Mac, it only selects about half of it. If you look at what it is selecting, you'll see that that part has been centered, instead of spanning the full height of the font. Is there any way to fix this rather large discrepancy?


  • C# POCO T4 template, generate interfaces?

    - by Jonna
    Does anyone know of a tweaked version of the POCO T4 template that generates interfaces along with classes? I.e., if I have Movie and Actor entities in the .edmx file, I need to get the following classes and interfaces:

        interface IMovie
        {
            string MovieName { get; set; }
            ICollection<IActor> Actors { get; set; } // instead of ICollection<Actor>
        }

        class Movie : IMovie
        {
            string MovieName { get; set; }
            ICollection<IActor> Actors { get; set; } // instead of ICollection<Actor>
        }

        interface IActor
        {
            string ActorName { get; set; }
        }

        class Actor : IActor
        {
            string ActorName { get; set; }
        }

    Also, just in case I write my own entities: do POCO proxies (I need them for lazy loading) work with interface declarations as shown above? (See the caveat sketched below.)
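    One caveat worth sketching, based on how Entity Framework generates lazy-loading proxies (not on any particular T4 template): a proxy is a runtime subclass that overrides virtual members, and navigation properties must be typed to mapped entity classes, so interface-typed collections like ICollection<IActor> would not be mapped or lazily loaded. A proxy-friendly POCO usually looks more like this:

        public class Movie
        {
            public string MovieName { get; set; }

            // virtual lets the generated proxy override the property;
            // the element type must be a mapped entity class, not an interface.
            public virtual ICollection<Actor> Actors { get; set; }
        }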


  • inserting a form into the session raises PicklingError - django

    - by shanyu
    I receive an exception when I add a form to the session:

        PicklingError: Can't pickle <class 'django.utils.functional.__proxy__'>: attribute lookup django.utils.functional.__proxy__ failed

    The form includes a few simple fields and has some JavaScript attached to a widget. It might be that Django forms cannot be pickled at all, but the exception seems to point to lazy unicode translation. To test further, I have also tried to insert only the form errors (an ErrorDict) into the session and received the same error. I'd appreciate some help here; thanks in advance. EDIT: Here's why I insert a form into the session: I have an app that has a form. This form is rendered by a template tag in another app. When posted, if the form is valid, no problem: I do stuff and redirect to "next". However, if it is not valid, I want to go back to the posting page to show the errors. Recall that the comments app in this case redirects to an intermediate "hey, please fix the errors" page. I am trying to avoid this, and hence redirect back to the posting page, with the form and its errors in the session for the template tag to render.


  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that arrived, say, 100K entries ago. The queue appears to hold only the configured number of entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; glancing at the source for ConcurrentLinkedQueue, a remove merely clears the reference to the stored object but leaves the linked-list node in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.


  • Bulk insert of component collection in Hibernate?

    - by edbras
    I have the mapping listed below. When I update a detached Categories item (which doesn't contain any Hibernate class, as it comes from a DTO converter), I notice that Hibernate will first delete ALL employer wage instances (the collection link) and then insert ALL employer wage entries ONE BY ONE :(... I understand that it has to delete and then insert all entries, as the item was completely detached. BUT, what I don't understand is why Hibernate is NOT inserting all the entries through a bulk insert, that is, inserting all the employer wage entries in one SQL statement. How can I tell Hibernate to use bulk inserts (if possible)? I tried playing with the following value but didn't see any difference:

        hibernate.jdbc.batch_size=30

    My mapping snippet:

        <class name="com.sample.CategoriesDefault" table="dec_cats">
            <id name="id" column="id" type="string" length="40" access="property">
                <generator class="assigned" />
            </id>
            <component name="incomeInfoMember" class="com.sample.IncomeInfoDefault">
                <property name="hasWage" type="boolean" column="inMemWage"/>
                ...
                <component name="wage" class="com.sample.impl.WageDefault">
                    <property name="hasEmployerWage" type="boolean" column="inMemEmpWage"/>
                    ...
                    <set name="employerWages" cascade="all-delete-orphan" lazy="false">
                        <key column="idCats" not-null="true" />
                        <one-to-many entity-name="mIWaEmp"/>
                    </set>
                </component>
            </component>
        </class>


  • Does O2Micro Oz776 Smartcard reader support SLE5528 cards?

    - by Proton
    Well, the title makes me look like a lazy guy, but that's not the truth... I've been wrestling with this weird situation for a whole day. My laptop is a Dell Latitude D630, which has an Oz776 (USB idVendor == 0x0b97, idProduct == 0x7772) smartcard reader, but I'm not sure it is functioning well. It can successfully recognize my Gemplus GPK4000 smartcard and my SIM card, but not an SLE5528. This is my pcscd log when inserting the GPK4000:

        06039271 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000100 - 000000 62 00 00 00 00 00 14 01 00 00
        00967744 <- 000000 80 0A 00 00 00 00 14 00 00 00 3B 27 00 80 65 A2 0C 01 01 37
        00000048 ATR: 3B 27 00 80 65 A2 0C 01 01 37
        00000013 atrhandler.c:102:ATRDecodeAtr() Conv: 01, Y1: 02, K: 07
        00000011 atrhandler.c:120:ATRDecodeAtr() TA1: FFFFFFFF, TB1: 00, TC1: FFFFFFFF, TD1: FFFFFFFF
        00000011 atrhandler.c:248:ATRDecodeAtr() CurrentProtocol: 1, AvailableProtocols: 1
        00000062 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000014 Card ATR: 3B 27 00 80 65 A2 0C 01 01 37
        29016873 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    This is the log when inserting an SLE5528 card:

        99999999 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000048 - 000000 62 00 00 00 00 00 11 01 00 00
        ** Then it chokes here; when I remove the card, the log continues **
        04741980 <- 000000 80 00 00 00 00 00 11 42 FE 00
        00000044 commands.c:225:CmdPowerOn Card absent or mute
        00000017 ifdhandler.c:1096:IFDHPowerICC() PowerUp failed
        00000082 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000021 eventhandler.c:443:EHStatusHandlerThread() Error powering up card.
        00402818 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    I found that the SLE5528 is ISO 7816 compatible and should have an ATR, but it just chokes at PowerUp. When I insert a plain PVC card with no chip, or an AT24C01 card, it does not choke but reports immediate PowerUp failure. When I tried it on Windows (Windows 7, "runas" another user, smartcard login), it chokes too, while PVC cards and the AT24C01 report immediate failure.


  • problem with evolutionary algorithms degrading into simulated annealing: mutation too small?

    - by Schnalle
    I have a problem understanding evolutionary algorithms. I have tried using this technique several times, but I always ran into the same problem: degeneration into simulated annealing. Let's say my initial population, with fitness in brackets, is: A (7), B (9), C (14), D (19). After mating and mutation I have the following children: AB (8.3), AC (12.2), AD (14.1), BC (11), BD (14.7), CD (17). After elimination of the weakest, we get A, AB, B, AC. Next turn, AB will mate again with a result around 8, pushing AC out. Next turn, AB again, pushing B out (assuming mutation changes fitness mostly within a range of about 1). Now, after only a few turns, the pool is populated with the originally fittest candidates (A, B) and mutations of those two (AB). This happens regardless of the size of the initial pool; it just takes a bit longer. Say, with an initial population of 50 it takes 50 turns; then all others are eliminated, turning the whole setup into a more complicated simulated annealing. In the beginning I also mated candidates with themselves, which worsened the problem. So, what am I missing? Are my mutation rates simply too small, and will the problem go away if I increase them? Here's the project I'm using it for: http://stefan.schallerl.com/simuan-grid-grad/ Yeah, the code is buggy and the interface sucks, but I'm too lazy to fix it right now. Be careful, it may lock up your browser; better use Chrome, even though Firefox is for once not slower than Chrome (probably the tracing for the image comparison pays off, yay!). If anyone is interested, the code can be found here. Here I just dropped the EA idea and went for simulated annealing. PS: I'm not even sure about simulated annealing: it is like evolutionary algorithms, just with a population size of one, right?



  • C# DynamicMethod prelink

    - by soywiz
    I'm the author of a PSP emulator written in C#. I'm generating lots of DynamicMethods using ILGenerator: I convert assembly code into an AST and then generate IL to build each DynamicMethod. I'm doing this in another thread, so I can generate new methods while the program is executing others and it can run smoothly. My problem is that native code generation is lazy: the machine code is generated when the function is first called, not when the IL is generated. So it is generated in the program-executing thread, and native code generation is pretty slow on top of the already slow asm -> AST -> IL step. I have tried the Marshal.Prelink method, which is supposed to generate the machine code before the function executes. It works on Mono, but it doesn't work on MS .NET:

        Marshal.Prelink(MethodInfo);

    Is there a way of prelinking a DynamicMethod on MS .NET? I thought of adding a boolean parameter to the function that, if set, exits the function immediately so no code is actually executed. I could "prelink" that way, but I think that's a nasty solution I want to avoid. Any ideas?
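    For what it's worth, a sketch of the warm-up trick described above (the bool-guard signature is hypothetical): the JIT compiles a DynamicMethod on its first invocation, not when CreateDelegate is called, so one guarded call from the generator thread forces native code generation ahead of real use.

        // Assume the DynamicMethod was emitted with a bool "warmUpOnly"
        // parameter whose IL returns immediately when it is true.
        var run = (Action<bool>)dynamicMethod.CreateDelegate(typeof(Action<bool>));

        // First invocation JIT-compiles the method; the guard makes it a no-op.
        run(true);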


  • 'Bank Switching' Sprites on old NES applications

    - by Jeffrey Kern
    I'm currently writing in C# what could basically be called my own interpretation of the NES hardware, for an old-school-looking game that I'm developing. I've fired up FCE and have been observing how the NES displayed and rendered graphics. In a nutshell, the NES could hold two bitmaps' worth of graphical information, each with dimensions of 128x128. These are called the PPU tables. One was for BG tiles and the other was for sprites. The data had to be in this memory for it to be drawn on-screen. Now, if a game had more graphical data than these two banks could hold, it could write portions of the new information to these banks at the end of each frame, overwriting what was there, and use it from the next frame onward. So, in old games, how did the programmers "bank switch"? I mean, within the level design, how did they know which graphic set to load? I've noticed that Mega Man 2 bank-switches when the screen programmatically scrolls from one portion of the stage to the next. But how did they store this information in the level: what sprites to copy over into the PPU tables, and where to write them? Another example would be hitting pause in MM2. BG tiles get overwritten during pause and then get restored when the player unpauses. How did they remember which tiles they replaced and how to restore them? If I were lazy, I could just make one huge static bitmap and grab values that way. But I'm forcing myself to work within these limits to create a more authentic experience. I've read the amazing guide on how M.C. Kids was made, and I'm trying to be barebones about how I program this game. It still just boggles my mind how these programmers accomplished what they did with what they had. EDIT: The only solution I can think of would be to hold separate tables that state what tiles should be in the PPU at what time, but I think that would be a huge memory resource that the NES wouldn't be able to handle.
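    Since the emulator side is C#, the table idea from the EDIT could be sketched like this (all names and numbers invented): each scroll boundary in a level stores a short list of copy commands, which costs a few bytes per section rather than duplicating tile data.

        using System.Collections.Generic;

        // One copy command: which stored tile set goes where in the PPU table.
        struct BankSwap
        {
            public int SourceTileSet;   // index into the game's tile data
            public int PpuDestOffset;   // destination inside the pattern table
        }

        static class BankTable
        {
            // Copy lists keyed by the screen/section the player is entering.
            public static readonly Dictionary<int, BankSwap[]> SectionSwaps =
                new Dictionary<int, BankSwap[]>
                {
                    { 0, new[] { new BankSwap { SourceTileSet = 2, PpuDestOffset = 0x0000 } } },
                    { 1, new[] { new BankSwap { SourceTileSet = 5, PpuDestOffset = 0x1000 } } },
                };
        }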


  • Silverlight 3 data-binding child property doesn't update

    - by sonofpirate
    I have a Silverlight control that has my root ViewModel object as its data source. The ViewModel exposes a list of Cards, as well as a SelectedCard property which is bound to a drop-down list at the top of the view. I then have a form of sorts at the bottom that displays the properties of the SelectedCard. My XAML appears as follows (reduced for simplicity):

        <StackPanel Orientation="Vertical">
            <ComboBox DisplayMemberPath="Name"
                      ItemsSource="{Binding Path=Cards}"
                      SelectedItem="{Binding Path=SelectedCard, Mode=TwoWay}" />
            <TextBlock Text="{Binding Path=SelectedCard.Name}" />
            <ListBox DisplayMemberPath="Name"
                     ItemsSource="{Binding Path=SelectedCard.PendingTransactions}" />
        </StackPanel>

    I would expect the TextBlock and ListBox to update whenever I select a new item in the ComboBox, but this is not the case. I'm sure it has to do with the fact that the TextBlock and ListBox are actually bound to properties of the SelectedCard, so data binding is listening for property change notifications from that object. But I would have thought that data binding would be smart enough to recognize that the parent object in the binding expression had changed and update the entire binding. It bears noting that the PendingTransactions property (bound to the ListBox) is lazy-loaded. So, the first time I select an item in the ComboBox, I do make the async call, load the list, and the UI updates to display the information corresponding to the selected item. However, when I reselect an item, the UI doesn't change! For example, if my original list contains three cards, the first card is selected by default. Data binding attempts to access the PendingTransactions property of that Card object and updates the ListBox correctly. If I select the second card in the list, the same thing happens and I get the list of PendingTransactions for that card. But if I select the first card again, nothing changes in my UI! Setting a breakpoint, I am able to confirm that the SelectedCard property is being updated correctly. How can I make this work???
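    For reference, a minimal sketch of the notification plumbing that SelectedCard.* bindings depend on (class and field names assumed): the ViewModel must raise PropertyChanged for SelectedCard on every assignment, including when a previously selected card is selected again, or the child bindings will not re-evaluate.

        using System.ComponentModel;

        public class Card
        {
            public string Name { get; set; }
        }

        public class CardsViewModel : INotifyPropertyChanged
        {
            private Card _selectedCard;
            public event PropertyChangedEventHandler PropertyChanged;

            public Card SelectedCard
            {
                get { return _selectedCard; }
                set
                {
                    _selectedCard = value;
                    // Raise unconditionally so bindings rooted at SelectedCard
                    // re-evaluate against the newly assigned object.
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("SelectedCard"));
                }
            }
        }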

