Search Results

Search found 20211 results on 809 pages for 'language implementation'.

  • Visual C++ overrides/mock objects for unit testing?

    - by Mark
    When I'm running unit tests, I want to be able to "stub out" or create a mock object, but I'm running into DLL hell. For example: there are two DLL libraries built, A.dll and B.dll. Classes in A.dll call classes in B.dll, so when A.dll was built, the link line used B.lib for the definitions. My test driver (Foo.exe) tests classes in A.dll, so it links against A.lib. However, I want to "stub out" some of the calls A.dll makes to B.dll with simple versions (return a basic value, no DB lookup, etc.). I can't build an Override.dll that just overrides the needed methods (not entire classes) and replaces B.dll, because Foo.exe will either A) complain that B.dll is missing if I just remove it and put Override.dll in its place, or B) complain that there are unresolved symbols if I rename Override.dll to B.dll, because Override.dll is not a complete implementation of B.dll. Is there a way to do this? Is there a way to statically link Foo.exe with A.lib, B.lib and Override.lib such that it will work without having to completely rebuild A.lib and B.lib to remove the __declspec(dllexport)? Is there another option?

    Read the article

  • asp.net jquery how to use Plugin/Validation with web content

    - by Eyla
    I have an ASP.NET content page that has an ASP.NET TextBox, and I want to use the jQuery Validation plugin, but it is not working for me. Here is my code: <%@ Page Title="" Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="IMAM_APPLICATION.WebForm1" %> <%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="asp" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server"> <script src="js/jquery-1.4.1.js" type="text/javascript"></script> <script src="js/jquery.validate.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { $.validator.addMethod("#<%=TextBox1.ClientID %>", function(value, element) { return this.optional(element) || /^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,16}$/i.test(value); }, "Passwords are 8-16 characters with uppercase letters, lowercase letters and at least one number."); }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="ContentPlaceHolder2" runat="server"> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> </asp:Content>

    Read the article

  • JoinColumn name not used in sql

    - by Vladimir
    Hi! I have a problem mapping a many-to-one relationship without an exact foreign key constraint set in the database. I use the OpenJPA implementation with a MySQL database, but the problem is with the generated SQL for the insert and select statements. I have a LegalEntity table which contains a RootId column (among others). I also have an Address table which has a LegalEntityId column that is not nullable, and which should contain values referencing LegalEntity's "RootId" column, but without any database constraint (foreign key) set. The Address entity is mapped: @Entity @Table(name="address") public class Address implements Serializable { ... @ManyToOne(fetch=FetchType.LAZY, optional=false) @JoinColumn(referencedColumnName="RootId", name="LegalEntityId", nullable=false, insertable=true, updatable=true, table="LegalEntity") public LegalEntity getLegalEntity() { return this.legalEntity; } } The SELECT statement (when fetching a LegalEntity's addresses) and the INSERT statement are generated as: SELECT t0.Id, .., t0.LEGALENTITY_ID FROM address t0 WHERE t0.LEGALENTITY_ID = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (..., LEGALENTITY_ID) VALUES (..., ?) [params=..., (int) 2] If I omit the table attribute from the mapping, these statements are generated instead: SELECT t0.Id, ... FROM address t0 INNER JOIN legalentity t1 ON t0.LegalEntityId = t1.RootId WHERE t1.Id = ? ORDER BY t0.Id DESC [params=(int) 2] INSERT INTO address (...) VALUES (...) [params=...] So, LegalEntityId is not included in any of the statements. Is it possible to have a relationship based on such referencing (to a column other than the primary key, without a foreign key in the database)? Is there something else missing? Thanks in advance.
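
    For reference, one commonly suggested workaround is sketched below; it is an assumption about intent rather than a confirmed fix for the behaviour above. JPA's JoinColumn.table element names the table that contains the join column, so pointing it at "LegalEntity" declares the column as living outside the address table; the sketch instead maps the raw LegalEntityId column as a basic attribute (so it is written on INSERT) and keeps the association itself read-only.

        @Entity
        @Table(name = "address")
        public class Address implements Serializable {

            // Raw FK value, written on INSERT/UPDATE (name taken from the question).
            @Column(name = "LegalEntityId", nullable = false)
            private Integer legalEntityId;

            // Read-only association resolving the non-primary-key reference.
            @ManyToOne(fetch = FetchType.LAZY, optional = false)
            @JoinColumn(name = "LegalEntityId", referencedColumnName = "RootId",
                        insertable = false, updatable = false)
            private LegalEntity legalEntity;
        }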

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called parkplace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):
      - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
      - Centralized database server for the USERDB (maybe PostgreSQL?)
      - Logging of all connections, bandwidth used, well pretty much everything, to a centralized server (maybe PostgreSQL again?)
      - Easy server-side configuration (config file(s) stored on the servers)
      - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
      - Fast
      - High uptime
      - Low memory usage
      - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
      - Maybe a cache of some sort (memcached or Perlbal or something else?)
    Thanks in advance

    Read the article

  • Route WCF ServiceHost to another computer

    - by I2nfo
    Good day. I'm not a guru when it comes to WCF, but I do know the basics. My question is: how do I create a ServiceHost on machine X while the code is on machine Y? Say I build and run this code on my dev machine (localhost): servicehost = new ServiceHost(typeof(MyService1)); servicehost.AddServiceEndpoint(typeof(IMyService1), new NetTcpBinding(), "net.tcp://my.datacenter.com/MyApp/MyService1"); // This is normally set to localhost. What implementation must be done on the datacenter server so that if I point to http://my.datacenter.com/MyApp/MyService1, it will route the service operation to my dev machine (localhost)? However, the datacenter should not be accessible via the internet. This is a possible infrastructure that we are researching, to see if we can create a service-bus-type architecture so that all our customers can invoke other customers' services running on their respective machines just by calling our datacenter URL. We have looked at Windows Azure, but we have our own datacenter infrastructure that we wish to leverage. Come to think of it, we are kind of building our own Azure, on a very, very basic scale. How does one go about creating this? Thanks in advance

    Read the article

  • Android: how to get the position of the selected item in a GridView using an OnTouchListener instead of an OnClickListener

    - by zonemikel
    I have a GridView, and I need to do one thing on MotionEvent.ACTION_DOWN and something else on MotionEvent.ACTION_UP... using an OnClickListener is great but does not give me this functionality. Is there any way to easily call the GridView and get its selected item in an OnTouchListener? I've had limited success making my own implementation. It's hard to get the right x,y because if I call the child it gives me the x and y relative to the child, so a button would be 0,0 to 48,48, but it does not tell you the actual location relative to the GridView or the screen itself. This is what I've been doing; it's partially working so far. Grid.setOnTouchListener(new OnTouchListener() { public boolean onTouch(View v, MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN) { int x = (int)event.getX(); int y = (int)event.getY(); int position = 0; int childCount = Grid.getChildCount(); Message msg = new Message(); Rect ButtonRect = new Rect(); Grid.getChildAt(0).getDrawingRect(ButtonRect); int InitialLeft = ButtonRect.left + 10; ButtonRect.offsetTo(InitialLeft, ButtonRect.top); // while(position < childCount){ if(ButtonRect.contains(x,y)){break;} if(ButtonRect.right + ButtonRect.width() > Grid.getWidth()) { ButtonRect.offsetTo(InitialLeft, ButtonRect.bottom);} position++; ButtonRect.offsetTo(ButtonRect.right, ButtonRect.top); } msg.what = position; msg.arg1 = ButtonRect.bottom; msg.arg2 = y; cHandler.sendMessage(msg); }// end if action up if (event.getAction() == MotionEvent.ACTION_UP) { } return false; } });
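
    For comparison, a minimal sketch using AbsListView's pointToPosition(x, y), which maps listener coordinates straight to an item position; gridView and the handling inside the branches are placeholders, not code from the question:

        gridView.setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View v, MotionEvent event) {
                if (event.getAction() == MotionEvent.ACTION_DOWN) {
                    // Coordinates are relative to the GridView because the listener is set on it.
                    int position = gridView.pointToPosition((int) event.getX(), (int) event.getY());
                    if (position != AdapterView.INVALID_POSITION) {
                        // item at 'position' was pressed
                    }
                } else if (event.getAction() == MotionEvent.ACTION_UP) {
                    // release handling
                }
                return false; // let the GridView keep its normal behaviour
            }
        });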

    Read the article

  • Is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:
      1. The client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
      2. The server process now waits for new client connections.
      3. One other process has the list of all the open sockets, goes through each of them, and sends the updates if required.
    Can we do something like this in an Apache module:
      1. The Apache process gets the new connection. It maintains the state for the connection. It keeps the state in some global memory and returns to the root process to signify that it is done, so that it can accept a new connection.
      2. The Apache process, though it has returned the status to the root process, is also executing in parallel, going through its global store and sending updates to the client, if any.
    So can an Apache process do these things:
      1. Have more than one connection associated with it?
      2. Asynchronously wait for new connections while at the same time processing the previous connections?
    Regards, Prashant

    Read the article

  • Why is Firefox prompting to download a file that is POST'd to?

    - by alex
    This is the most peculiar thing. It is from an old in house CMS. When I attempt to submit my changes, it prompts to save the file linked in the action attribute of the form. Headers Request POST /~site/edit/articles/article_save.php?id=54 HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://example.com Content-Type: multipart/form-data; boundary=---------------------------10102754414578508781458777923 Content-Length: 940 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="title" Home Content -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="catid" 18 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="activecheck" 1 -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="image" -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="contentWidgToolbarSelectBlock" <p> -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="content" <p>Edit your article in this text box.</p> -----------------------------10102754414578508781458777923 Content-Disposition: form-data; name="contentWidgEditor" true -----------------------------10102754414578508781458777923-- Response HTTP/0.9 200 OK And then Firefox shows.... I can't determine from the response headers as to why this is prompting to open/save. It has always worked. All other PHP files on the site work fine. Anyone have a clue? Thanks Update Apparently, it just crashes Safari.

    Read the article

  • iPhone Localization: simple project not working

    - by gonso
    Hello, I'm doing my first localized project and I've been fighting with it for several hours with no luck. I have to create an app that, based on the user's selection, shows texts and images in different languages. I've read most of Apple's documents on the matter but I can't make a simple example work. These are my steps so far:
      1) Create a new project.
      2) Manually create an "en.lproj" directory in the project's folder.
      3) Using TextEdit, create a file called "Localizable.strings" and store it as Unicode UTF-16. The file looks like this: /* Localizable.strings Multilanguage02 Created by Gonzalo Floria on 5/6/10. Copyright 2010 __MyCompanyName__. All rights reserved. */ "Hello" = "Hi"; "Goodbye" = "Bye";
      4) Drag this file to the Resources folder in Xcode; it appears with the "subdir" "en" underneath it (with the dropdown triangle to the left). If I try to view it in Xcode it looks all wrong, with lots of ? symbols, but I'm guessing that's because it's a UTF-16 file. Right?
      5) Now in my viewDidLoad I can access these strings like this: NSString *translated; translated = NSLocalizedString(@"Hello", @"User greetings"); NSLog(@"Translated text is %@", translated);
    My problem is allowing the user to switch language. I have created an es.lproj with the Localizable.strings file (in Spanish), but I CAN'T access it. I've tried this line: [[NSUserDefaults standardUserDefaults] setObject:[NSArray arrayWithObjects:@"es", nil] forKey:@"AppleLanguages"]; But that only works the NEXT time you load the application. Is there no way to allow the user to switch languages while running the application? Do I have to implement my own dictionary files and forget all about the NSLocalizedString family? Thanks for ANY advice or pointers. Gonso

    Read the article

  • Most Elegant Way to write isPrime in java

    - by Anantha Kumaran
    public class Prime { public static boolean isPrime1(int n) { if (n <= 1) { return false; } if (n == 2) { return true; } for (int i = 2; i <= Math.sqrt(n) + 1; i++) { if (n % i == 0) { return false; } } return true; } public static boolean isPrime2(int n) { if (n <= 1) { return false; } if (n == 2) { return true; } if (n % 2 == 0) { return false; } for (int i = 3; i <= Math.sqrt(n) + 1; i = i + 2) { if (n % i == 0) { return false; } } return true; } } public class PrimeTest { public PrimeTest() { } @Test public void testIsPrime() throws IllegalArgumentException, IllegalAccessException, InvocationTargetException { Prime prime = new Prime(); TreeMap<Long, String> methodMap = new TreeMap<Long, String>(); for (Method method : Prime.class.getDeclaredMethods()) { long startTime = System.currentTimeMillis(); int primeCount = 0; for (int i = 0; i < 1000000; i++) { if ((Boolean) method.invoke(prime, i)) { primeCount++; } } long endTime = System.currentTimeMillis(); Assert.assertEquals(method.getName() + " failed ", 78498, primeCount); methodMap.put(endTime - startTime, method.getName()); } for (Entry<Long, String> entry : methodMap.entrySet()) { System.out.println(entry.getValue() + " " + entry.getKey() + " Milli seconds "); } } } I am trying to find the fastest way to check whether a given number is prime or not. This is what I finally came up with. Is there any better way than the second implementation (isPrime2)?
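
    For comparison, a sketch of one further refinement people commonly apply: hoist the square-root bound out of the loop (here by testing i <= n / i, which also avoids recomputing Math.sqrt and avoids overflow) and skip multiples of 2 and 3. The method name isPrime3 is made up for the example, not part of the original code.

        public static boolean isPrime3(int n) {
            if (n <= 1) { return false; }
            if (n <= 3) { return true; }                      // 2 and 3 are prime
            if (n % 2 == 0 || n % 3 == 0) { return false; }
            for (int i = 5; i <= n / i; i += 6) {             // only 6k-1 and 6k+1 candidates
                if (n % i == 0 || n % (i + 2) == 0) { return false; }
            }
            return true;
        }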

    Read the article

  • FOSS ASP.Net Session Replication Solution?

    - by jsight
    I've been searching (with little success) for a free/open-source session clustering and replication solution for ASP.NET. I've run across the usual suspects (Indexus SharedCache, memcached); however, each has some limitations:
      - Indexus - Very immature, stubbed session interface implementation. It's otherwise a great caching solution, though.
      - Memcached - Little replication/failover support without going to a DB backend.
      - Several SF.net projects - All aborted in the early stages... nothing that appears to have any traction, and one which seems to have gone all commercial.
      - Microsoft Velocity - Not OSS, but seems nice. Unfortunately, I didn't see where CTP1 supported failover, and there is no clear roadmap for this one. I fear that this one could fall off into the ether like many other MS dev projects.
    I am fairly used to the Java world, where it is kind of taken for granted that many solutions to problems such as this will be available from the FOSS world. Are there any suitable alternatives in the .NET world?

    Read the article

  • Best approach for Java/Maven/JPA/Hibernate build with multiple database vendor support?

    - by HDave
    I have an enterprise application that uses a single database, but the application needs to support MySQL, Oracle, and SQL Server as installation options. To try to remain portable we are using JPA annotations with Hibernate as the implementation. We also have a test-bed instance of each database running for development. The app is building nicely in Maven, and I've played around with the hibernate3-maven-plugin and can auto-generate DDL for a given database dialect. What is the best way to approach this so that individual developers can easily test against all three databases and our Hudson-based CI server can build things properly? More specifically:
      1) I thought the hbm2ddl goal in the hibernate3-maven-plugin would just generate a schema file, but apparently it connects to a live database and attempts to create the schema. Is there a way to have it just create the schema file for each database dialect without connecting to a database?
      2) If the hibernate3-maven-plugin insists on actually creating the database schema, is there a way to have it drop the database and recreate it before creating the schema?
      3) I am thinking that each developer (and the Hudson build machine) should have their own separate database on each database server. Is this typical?
      4) Will developers have to run Maven three times... once for each database vendor? If so, how do I merge the results on the build machine?
      5) There is an hbm2doc goal within hibernate3-maven-plugin. It seems overkill to run this three times... I gotta believe it'd be nearly identical for each database.

    Read the article

  • Why do Scala maps have poor performance relative to Java?

    - by Mike Hanafey
    I am working on a Scala app that consumes large amounts of CPU time, so performance matters. The prototype of the system was written in Python, and its performance was unacceptable. The application does a lot of inserting and manipulating data in maps. Rex Kerr's Thyme was used to look at the performance of updating and retrieving data from maps. Basically, "n" random Ints were stored in maps and retrieved from the maps, with the time relative to java.util.HashMap used as a reference. The full results for a range of "n" are here. Sample (n = 100,000) performance relative to Java, smaller is worse:
                     Update    Read
      Mutable        16.06%    76.51%
      Immutable      31.30%    20.68%
    I do not understand why the Scala immutable map beats the Scala mutable map in update performance. Using sizeHint on the mutable map does not help (it appears to be ignored in the tested implementation, 2.10.3). Even more surprisingly, the immutable read performance is worse than the mutable read performance, and more significantly so with larger maps. The update performance of the Scala mutable map is surprisingly bad, relative to both Scala immutable and plain Java. What is the explanation?

    Read the article

  • Naive Bayesian classification (spam filtering) - doubt about one calculation: which one is right?

    - by Microkernel
    Hi guys, I am implementing a Naive Bayesian classifier for spam filtering. I have a doubt about one calculation. Please clarify what I should do. Here is my question. In this method, you have to calculate: P(S|W) - probability that a message is spam given that word W occurs in it; P(W|S) - probability that word W occurs in a spam message; P(W|H) - probability that word W occurs in a ham message. So to calculate P(W|S), should I do (1) (number of times W occurs in spam) / (total number of times W occurs in all the messages), OR (2) (number of times word W occurs in spam) / (total number of words in the spam messages)? (I thought it to be (2), but I am not sure, so please clarify.) I am referring to http://en.wikipedia.org/wiki/Bayesian_spam_filtering for the info, by the way. I've got to complete the implementation by this weekend :( Thanks and regards, MicroKernel :) @sth: Hmm... Shouldn't repeated occurrence of word 'W' increase a message's spam score? In your approach it wouldn't, right? Let's take a scenario and discuss... Let's say we have 100 training messages, of which 50 are spam and 50 are ham, and say the word count of each message = 100. And let's say that in spam messages word W occurs 5 times in each message, and word W occurs 1 time in each ham message. So the total number of times W occurs in all the spam messages = 5*50 = 250 times, and the total number of times W occurs in all ham messages = 1*50 = 50 times. The total occurrence of W in all of the training messages = (250+50) = 300 times. So, in this scenario, how do you calculate P(W|S) and P(W|H)? Naturally we should expect P(W|S) > P(W|H), right? Please share your thoughts...
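
    For reference, the combining rule from the linked Wikipedia article, written out in LaTeX; the estimate shown for P(W|S) is the per-message (Bernoulli-style) convention that article uses, and treating it as the answer to (1) vs (2) is an assumption — a multinomial model would count word occurrences instead:

        P(S \mid W) = \frac{P(W \mid S)\,P(S)}{P(W \mid S)\,P(S) + P(W \mid H)\,P(H)}

        P(W \mid S) \approx \frac{\text{number of spam messages containing } W}{\text{total number of spam messages}}, \qquad
        P(W \mid H) \approx \frac{\text{number of ham messages containing } W}{\text{total number of ham messages}}

    In the 50/50 scenario above, the per-message estimate gives P(W|S) = P(W|H) = 1 (every message contains W), whereas a per-word-occurrence estimate along the lines of (2) gives 250/5000 = 0.05 for spam versus 50/5000 = 0.01 for ham, which is what lets repeated occurrences raise the score.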

    Read the article

  • Which languages support class replacement?

    - by Alix
    Hi, I'm writing my master's thesis, which deals with AOP in .NET, among other things, and I mention the lack of support for replacing classes at load time as an important factor in the fact that there are currently no .NET AOP frameworks that perform true dynamic weaving -- not without imposing the requirement that woven classes must extend ContextBoundObject or MarshalByRefObject or expose all their semantics on an interface. You can, however, do this in Java thanks to ClassFileTransformer: you implement ClassFileTransformer, subscribe to the class load event, and on class load you rewrite the class and replace it. All this is very well, but my project director has asked me, quite at the last minute, to give him a list of languages that do / do not support class replacement. I really have no time to look into this now: I wouldn't feel comfortable just doing superficial research and potentially putting erroneous information in my thesis. So I ask you, oh almighty programming community, can you help out? Of course, I'm not asking you to research this yourselves. Simply, if you know for sure that a particular language supports / doesn't support this, leave it as an answer. If you're not sure, please don't forget to point it out. Thanks so much!
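
    For the Java side, a minimal sketch of the ClassFileTransformer hook described above (the class name is hypothetical; registration normally happens from a -javaagent jar's premain entry point):

        import java.lang.instrument.ClassFileTransformer;
        import java.lang.instrument.Instrumentation;
        import java.security.ProtectionDomain;

        public class ReplacingTransformer implements ClassFileTransformer {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Return a rewritten class file here, or null to leave the class untouched.
                return null;
            }

            // Entry point of a -javaagent jar; registers the transformer before main() runs.
            public static void premain(String agentArgs, Instrumentation inst) {
                inst.addTransformer(new ReplacingTransformer());
            }
        }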

    Read the article

  • Problems with display of UTF-8 encoded content from a DB

    - by LookUp Webmaster
    Dear members of the Stack Overflow community, we are developing a web application using the Zend Framework, and we are facing some encoding issues that we hope you might help us solve. The situation goes something like this: there are certain tables in a MySQL database that need to be displayed as HTML. Because the site is designed for the Spanish language, the database contains characters like "á" or "ñ". Our internal policy is to set all the encodings to UTF-8, including all the databases and tables. The problem is that when we retrieve the content from the DB, some characters are displayed as question marks. We are out of ideas. These are all the things that we have already tried and double-checked:
      1. The SQL file from which we load all the data is properly UTF-8 encoded.
      2. The SQL is loaded through phpMyAdmin (which is configured as UTF-8), and the resulting tables are displayed properly.
      3. The NetBeans environment used for coding is also set to UTF-8.
    The weird thing is that all the content that is hard-coded either as PHP or HTML is displayed properly. Only the values that are extracted from the database have issues. Any ideas? Thank you very much.

    Read the article

  • Java ternary operator and boxing Integer/int?

    - by Markus
    I tripped across a really strange NullPointerException the other day, caused by an unexpected type cast in the ternary operator. Given this (useless, exemplary) function: Integer getNumber() { return null; } I was expecting the following two code segments to be exactly identical after compilation: Integer number; if (condition) { number = getNumber(); } else { number = 0; } vs. Integer number = (condition) ? getNumber() : 0; Turns out, if condition is true, the if-statement works fine, while the ternary operation in the second code segment throws a NullPointerException. It seems as though the ternary operation has decided to type-cast both choices to int before auto-boxing the result back into an Integer!?! In fact, if I explicitly cast the 0 to Integer, the exception goes away. In other words: Integer number = (condition) ? getNumber() : 0; is not the same as: Integer number = (condition) ? getNumber() : (Integer) 0; So, it seems that there is a byte-code difference between the ternary operator and an equivalent if-else statement (something I didn't expect). Which raises three questions: Why is there a difference? Is this a bug in the ternary implementation, or is there a reason for the type cast? Given there is a difference, is the ternary operation more or less performant than an equivalent if-statement (I know the difference can't be huge, but still)?
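
    For what it's worth, a minimal sketch of the promotion at work (variable names are illustrative, not from the original code): when the conditional operator sees one Integer and one int operand, binary numeric promotion gives the whole expression type int, which forces the Integer operand to be unboxed.

        Integer boxed = null;            // stands in for getNumber() returning null
        boolean condition = true;

        // Mixed operand types Integer and int: the expression has type int, so 'boxed'
        // is unboxed first -- NullPointerException at runtime when it is null.
        // Integer bad = condition ? boxed : 0;

        // Both operands typed Integer: no unboxing, the result is simply null.
        Integer ok = condition ? boxed : (Integer) 0;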

    Read the article

  • Font advance calculation problem on Blackberry OS 5.0

    - by John
    I am currently working on my own implementation of a tab bar for a BlackBerry app, where each tab bar has a title that is right aligned (i.e. the last character in each should be the same distance from the right hand side of the screen). To work out where to draw the text I am using the following calculation: screen width - advance of title - indent. The font I am using is 'BBAlpha Sans' (height 28). Using BlackBerry OS 4.6 everything seems to be calculated properly and the text is aligned when I move between tabs, however I am finding that when I use OS 5.0 it doesn't calculate the advance properly and as a result the alignment is off by maybe 5 pixels or so. With the default font (also BBAlpha Sans, but height 24 - for OS 5.0 at least) it works fine in both versions.. but I don't necessarily always want to use the default font/size, so any ideas what could be going wrong? Is this a bug in the 5.0 API? Thanks. Code: public class TitleBarBackground extends Background { .. public void draw(Graphics graphics, XYRect rect) { graphics.pushRegion(rect); .. Font titleBarFont = FontFamily.forName("BBAlpha Sans").getFont(Font.PLAIN, 28); ... int textWidth = titleBarFont.getAdvance(title); graphics.drawText(title, rect.width - textWidth - TITLE_OFFSET, textYOffset); graphics.popContext(); } .. }

    Read the article

  • NSTimer as a timeout mechanism

    - by alexantd
    I'm pretty sure this is really simple, and I'm just missing something obvious. I have an app that needs to download data from a web service for display in a UITableView, and I want to display a UIAlertView if the operation takes more than X seconds to complete. So this is what I've got (simplified for brevity): MyViewController.h @interface MyViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> { NSTimer *timer; } @property (nonatomic, retain) NSTimer *timer; MyViewController.m @implementation MyViewController @synthesize timer; - (void)viewDidLoad { timer = [NSTimer scheduledTimerWithTimeInterval:20 target:self selector:@selector(initializationTimedOut:) userInfo:nil repeats:NO]; [self doSomethingThatTakesALongTime]; [timer invalidate]; } - (void)doSomethingThatTakesALongTime { sleep(30); // for testing only // web service calls etc. go here } - (void)initializationTimedOut:(NSTimer *)theTimer { // show the alert view } My problem is that I'm expecting the [self doSomethingThatTakesALongTime] call to block while the timer keeps counting, and I'm thinking that if it finishes before the timer is done counting down, it will return control of the thread to viewDidLoad where [timer invalidate] will proceed to cancel the timer. Obviously my understanding of how timers/threads work is flawed here because the way the code is written, the timer never goes off. However, if I remove the [timer invalidate], it does.

    Read the article

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ to SQL code, which is generating around 100 update statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code: List<FooBar> foos; int userId = 123; using (DataClassesDataContext db = new FooDatabase()) { foos = (from f in db.FooBars where f.UserId == userId select f).ToList(); foreach (FooBar fooBar in foos) { fooBar.IsFoo = false; } db.SubmitChanges(); } Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that the .ToList() is firing off a query to get all the FooBars for a particular user, and then for each FooBar object it's executing an UPDATE statement updating the IsFoo property. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is the below: UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123 EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr...! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It still requires some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.

    Read the article

  • Cocos2d and MPMoviePlayerViewController - NSNotificationCenter not working

    - by digi_0315
    I'm using cocos2d with the MPMoviePlayerViewController class, but when I tried to catch the notification when the movie finishes I got this error: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSCFString movieFinishedCallback]: unrecognized selector sent to instance 0x5d23730' My PlayVideoViewController.m is: @implementation PlayVideoViewController +(id) scene{ CCScene *scene = [CCScene node]; CCLayer *layer = [credits node]; [scene addChild: layer]; return scene; } -(id)initWithPath:(NSString *)moviePath{ if ((self = [super init])){ movieURL = [NSURL fileURLWithPath:moviePath]; [movieURL retain]; playerViewController = [[MPMoviePlayerViewController alloc] initWithContentURL:movieURL]; player = [playerViewController moviePlayer]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(movieFinishedCallback) name:MPMoviePlayerPlaybackDidFinishNotification object:player]; [[[CCDirector sharedDirector] openGLView] addSubview:playerViewController.view]; [player play]; } return self; } -(void)movieFinishedCallback{ CCLOG(@"video finished!!"); } In the .h: #import <UIKit/UIKit.h> #import "cocos2d.h" #import <MediaPlayer/MediaPlayer.h> @interface PlayVideoViewController : CCLayer { NSURL *movieURL; MPMoviePlayerViewController *playerViewController; MPMoviePlayerController *player; } +(id) scene; @end And I call it in appDelegate.m: - (void) applicationDidFinishLaunching:(UIApplication*)application { CC_DIRECTOR_INIT(); CCDirector *director = [CCDirector sharedDirector]; [director setDeviceOrientation:kCCDeviceOrientationLandscapeLeft]; EAGLView *glView = [director openGLView]; [glView setMultipleTouchEnabled:YES]; [CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];//kEAGLColorFormatRGBA8 NSString *path = [[NSBundle mainBundle] pathForResource:@"intro" ofType:@"mov" inDirectory:nil]; viewController = [[[PlayVideoViewController alloc] initWithPath:path] autorelease]; } What am I doing wrong? Can anyone help me please? I've been trying to solve it for hours but I can't!

    Read the article

  • How do I remove implementing types from GWT’s Serialization Policy?

    - by Bluu
    The opposite of this question: http://stackoverflow.com/questions/138099/how-do-i-add-a-type-to-gwts-serialization-policy-whitelist GWT is adding undesired types to the serialization policy and bloating my JS. How do I trim my GWT whitelist by hand? Or should I at all? For example, if I put the interface List on a GWT RPC service class, GWT has to generate Javascript that handles ArrayList, LinkedList, Stack, Vector, ... even though my team knows we're only ever going to return an ArrayList. I could just make the method's return type ArrayList, but I like relying on an interface rather than a specific implementation. After all, maybe one day we will switch it up and return e.g. a LinkedList. In that case, I'd like to force the GWT serialization policy to compile for only ArrayList and LinkedList. No Stacks or Vectors. These implicit restrictions have one huge downside I can think of: a new member of the team starts returning Vectors, which will be a runtime error. So besides the question in the title, what is your experience designing around this?

    Read the article

  • Rails: creating a custom data type to use with generator classes, and a bunch of questions related to it

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code to the Table definition. Also, after learning a bit about floating point (and how evil it is) vs. integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but would instead try to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole numbers and one for the cents. When playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (and the reason I am is to give some sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole units and one for the cents. In my big dream, I would be able to do the following: script/generate scaffold Cake name:string description:text cost:mycustom where mycustom would create two integer columns (one for wholes, one for cents). Right now I could do this by doing: script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer I also had an idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defy the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message is understandable; my apologies for incorrect grammar or weird sentences, as English is not my native language.

    Read the article

  • optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics: const int cols = 500; const int rows = 100; int arr[rows][cols]; I access array arr in the following manner to do some work: for(int k = 0; k < T; ++k) { // for each trainee myscore[k] = 0; for(int i = 0; i < N; ++i) { // for each sample for(int j = 0; j < E[i]; ++j) { // for each expert myscore[k] += delta(i, anotherArray[k][i], arr[j][i]); } } } So I am worried about the array 'arr' and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking of perhaps transposing the array, but I wasn't sure how to do that. My implementation turns out to only work for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array into a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes, because I'm accessing column by column instead of row by row, so that is not cache friendly at all. Thanks, Hristo
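
    A sketch of the transpose-and-flatten idea (shown in Java, but the index arithmetic is identical in C++; the names flat and flatT are illustrative, not from the code above): store a transposed copy so that the innermost j loop walks consecutive elements.

        int rows = 100, cols = 500;
        int[] flat  = new int[rows * cols];   // flat[j * cols + i] plays the role of arr[j][i]
        int[] flatT = new int[rows * cols];   // transposed copy: flatT[i * rows + j] == arr[j][i]

        for (int j = 0; j < rows; j++) {
            for (int i = 0; i < cols; i++) {
                flatT[i * rows + j] = flat[j * cols + i];
            }
        }
        // In the scoring loop, a fixed sample i then reads flatT[i * rows + j] for
        // j = 0, 1, 2, ... -- consecutive addresses, which is the cache-friendly direction.
        // This works for any rows x cols shape, square or not.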

    Read the article

  • Which plugin framework to use for native C++/Win32

    - by Kerido
    Hi everybody. I have an extensible product that allows 3rd-party developers to extend it. The aspects that can be extended are documented, and interfaces are provided in the SDK. Currently I'm using COM, and I'm getting pretty comfortable with it. I especially like the ability to provide interface versioning in a unified manner. I consider it a requirement, because you never know what you're gonna need in the future. Just to be precise, here's an example. Let's suppose I have an interface representing a particular feature: class IFeature { public: virtual void DoFeatureTask() = 0; }; Then, after the interface is already documented (and someone may have used it in plugin code), I realize I need more from this feature. Maybe there is an option I need to provide, so I just define a second version: class IFeature2 { public: virtual void DoFeatureTask(int theOption) = 0; }; I don't mean I intend to have lots of versions, but it just may happen. In COM, because every interface is associated with a GUID, I can query for a preferred implementation, determine its presence, and finally fall back to a legacy one. But after glancing through C++/COM-related questions, I noticed many recommendations against COM. So maybe it's not the best choice and I'm just too old-school. Can you advise on an alternative?

    Read the article
