Search Results

Search found 22170 results on 887 pages for 'multiple schema'.


  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to. I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger I worry that so should my "spec" or design document.

    The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code.

    When I need to research a topic I dump information into OneNote and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

    - Implement Binary File Reader
    - Implement Binary File Writer
    - Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudocode it is essentially code anyway, so the time investment is negated.

    So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs).

    When using the "-server" option with the Sun JDK 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode.

    I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (which feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation on the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?
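    As a rough illustration of the warmup-and-measure harness described above, here is a minimal Java sketch (the Decoder interface and all names are hypothetical stand-ins, not the actual MPEG code): run the whole suite several times, time each pass with System.nanoTime, and CRC32 the output to confirm the optimized and original decoders agree.

        import java.util.zip.CRC32;

        public class DecoderBenchmark {
            /** Hypothetical stand-in for the decoder variant under test. */
            interface Decoder { byte[] decode(byte[] stream); }

            static long runOnce(Decoder decoder, byte[] stream) {
                long start = System.nanoTime();
                byte[] pcm = decoder.decode(stream);
                long elapsed = System.nanoTime() - start;

                CRC32 crc = new CRC32();   // verify both variants produce identical output
                crc.update(pcm);
                System.out.printf("crc=%08x time=%d ms%n", crc.getValue(), elapsed / 1_000_000);
                return elapsed;
            }

            static void benchmark(Decoder decoder, byte[] stream, int passes) {
                // Repeat the whole suite; timings should stabilise once the
                // JIT has finished compiling the hot paths (pass 3 onwards, as observed above).
                for (int i = 0; i < passes; i++) {
                    runOnce(decoder, stream);
                }
            }
        }

    As for peeking under the hood, one standard option on HotSpot is the -XX:+PrintCompilation flag, which logs which methods the JIT compiles on each run.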

    Read the article

  • iphone: cross platform references and referencing external framework resources

    - by dan
    Hi there. I'm working on an iPhone app and a separate framework. The separate framework is for an API that I'm building for use in multiple future apps, and this API now needs to reference resources (images). What I would like to do is keep the resources WITH the API framework as a local set of resources. I followed the instructions from http://www.clintharris.net/2009/iphone-app-shared-libraries/ to set up my app's project to use the headers from the separate API framework. What I can't seem to figure out is how to automatically load the framework's resources into the app's Xcode environment so they can be linked in at app compile time. Sure, I can drag the resources across from the framework into the main app's set of resources, but that seems kinda ugly and another step that could possibly be automated(?). Anyone know of a better way? It would be great if any changes from the framework would be automatically available in the main app (due to the project 'link-age'). Thanks for any help/tips/suggestions.

    Read the article

  • Set "Start With" value for Oracle sequence dynamically

    - by Allan
    I'm trying to create a release script that can be deployed on multiple databases, but where the data can be merged back together at a later date. The obvious way to handle this is to set the sequence numbers for production data sufficiently high in subsequent deployments to prevent collisions. The problem is in coming up with a release script that will accept the environment number and set the "Start With" value of the sequences appropriately. Ideally, I'd like to use something like this:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        CREATE SEQUENCE seq1 START WITH &EnvironNum*100000;
        --[more scripting]

    This doesn't work because you can't evaluate a numeric expression in DDL. Another option is to create the sequences using dynamic SQL via PL/SQL:

        ACCEPT EnvironNum PROMPT 'Enter the Environment Number: '
        --[more scripting]
        EXEC execute immediate 'CREATE SEQUENCE seq1 START WITH ' || &EnvironNum*100000;
        --[more scripting]

    However, I'd prefer to avoid this solution as I generally try to avoid issuing DDL in PL/SQL. Finally, the third option I've come up with is simply to accept the Start With value as a substitution variable, instead of the environment number. Does anyone have a better thought on how to go about this?

    Read the article

  • How to move a point on an arc?

    - by bbZ
    I am writing an app that simulates robot arm movement. What I want to achieve, and where I have a couple of problems, is moving a Point along an arc (180 degrees) that is the arm's range of movement. I move an arm by grabbing its end with the mouse (the Elbow, the Point I was talking about); a robot can have multiple arms with different arm lengths. If you can help me with this part, I'd be grateful. This is what I have so far, the drawing function:

        public void draw(Graphics graph) {
            Graphics2D g2d = (Graphics2D) graph.create();
            graph.setColor(color);
            graph.fillOval(location.x - 4, location.y - 4, point.width, point.height); // Draws elbow
            if (parentLocation != null) {
                graph.setColor(Color.black);
                graph.drawLine(location.x, location.y, parentLocation.x, parentLocation.y); // Connects to parent
                if (arc == null) {
                    angle = new Double(90 - getAngle(parentInitLocation));
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                            parentLocation.y - (parentDistance * 2 / 2),
                            parentDistance * 2, parentDistance * 2,
                            90 - getAngle(parentInitLocation), 180, Arc2D.OPEN); // Draws an arc if not drawn yet
                } else if (angle != null) { // if parent is moved, angle is moved also
                    arc = new Arc2D.Double(parentLocation.x - (parentDistance * 2 / 2),
                            parentLocation.y - (parentDistance * 2 / 2),
                            parentDistance * 2, parentDistance * 2,
                            angle, 180, Arc2D.OPEN);
                }
                g2d.draw(arc);
            }
            if (spacePanel.getElbows().size() > level + 1) { // updates all child elbows' positions
                updateChild(graph);
            }
        }

    I just do not know how to prevent the Point from moving off the arc line. It cannot be inside or outside the arc, only on it. Here I wanted to put a screenshot, but sadly I don't have enough rep. Am I allowed to put a link to this? Maybe you know other ways to achieve this kind of thing. Here is the image: the red circle is the actual state, and the green one is what I want to do.

    EDIT2: As requested, repo link: https://github.com/dspoko/RobotArm
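    One way to keep the dragged elbow on the arc, sketched below in Java (assuming, as in the code above, that the arc is centred on the parent location with radius parentDistance; the helper name is hypothetical): take the angle of the mouse position relative to the centre, clamp it to the arc's angular range, and project it back onto the circle.

        import java.awt.Point;

        public class ArcClamp {
            /**
             * Projects a dragged point onto the arc centred at 'center' with the given radius,
             * limited to angles in [startDeg, startDeg + extentDeg] (degrees, counter-clockwise).
             * Note: this simple clamp assumes the angular range does not wrap around ±180°.
             */
            public static Point clampToArc(Point dragged, Point center, double radius,
                                           double startDeg, double extentDeg) {
                // Angle of the mouse position relative to the arc centre.
                // Screen y grows downwards, hence the negated dy for a mathematical angle.
                double angle = Math.toDegrees(Math.atan2(-(dragged.y - center.y), dragged.x - center.x));

                // Clamp the angle into the arc's range.
                double end = startDeg + extentDeg;
                if (angle < startDeg) angle = startDeg;
                if (angle > end) angle = end;

                // Project back onto the circle of the given radius.
                double rad = Math.toRadians(angle);
                int x = (int) Math.round(center.x + radius * Math.cos(rad));
                int y = (int) Math.round(center.y - radius * Math.sin(rad));
                return new Point(x, y);
            }
        }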

    Read the article

  • NHibernate listener/event to replace object before insert/update

    - by vIceBerg
    Hi! I have a Company class which has a collection of Address. Here's my Address class (written off the top of my head):

        public class Address
        {
            public string Civic;
            public string Street;
            public City City;
        }

    This is the City class:

        public class City
        {
            public int Id;
            public string Name;
            public string SearchableName
            {
                get
                {
                    //code
                }
            }
        }

    Address is persisted in its own table and has a reference to the city's ID. City is also persisted in its own table. The City's SearchableName is used to prevent some mistakes in the city the user types. For example, if the city name is "Montréal", the searchable name will be "montreal". If the city name is "St. John", the searchable name will be "stjohn", etc. It's used mainly for searching and to prevent having multiple cities with a typo in them.

    When I save an address, I want a listener/event to check if the city is already in the database. If so, it should cancel the insert and replace the user's city with the database one. I would like the same behavior with updates. I tried this:

        public bool OnPreInsert(PreInsertEvent @event)
        {
            City entity = (@event.Entity as City);
            if (entity != null)
            {
                if (entity.Id == 0)
                {
                    var city = (from c in @event.Session.Linq<City>()
                                where c.SearchableName == entity.SearchableName
                                select c).SingleOrDefault();
                    if (city != null)
                    {
                        //don't know what to do here
                        return true;
                    }
                }
            }
            return false;
        }

    But if there's already a City in the database, I don't know what to do. @event.Entity is read-only, and if I set @event.Entity.Id, I get a "null identifier" exception. I tried to trap insert/update on Address and on Company, but the City is the first one to get inserted (it's logical...). Any thoughts? Thanks

    Read the article

  • Every flash uploader giving bad progress values.

    - by Mike Boers
    The file upload script I wrote early last year for an internal website has been misbehaving oddly on a number of machines. On some machines it consistently works fine, on others it consistently misbehaves. I am having exactly the same problem with YUI Uploader, SWFUpload (2.2 and 2.5a), and Uploadify. On the misbehaving machines, the progress event (or callback as the case may be) is reporting the upload going far too quickly. It is progressing around 9 or 10MB/s, instead of the 50 or 60kb/s that is actually going on. The progress bar fills up very quickly, and then no more progress events are triggered. A few minutes later the completion event will trigger when the upload is actually done. I must emphasize that the file upload does proceed normally, even though the progress being reported is very wrong. The progress events are reporting a correct file size, but the reported amount uploaded is usually way too high, and it appears that it is always a multiple of 2^16 (65536). I'm only having this problem with Firefox 3.5 on Windows XP, all of which have various subversions of Flash 10. Has anyone heard of this happening, or have any idea what is going on? (I'm off to go file a number of bug reports, but hopefully someone here has some previous experience with this.)

    Read the article

  • Workling processes multiplying uncontrollably

    - by adam
    Hello there. We have a Rails app running on Passenger and we background process some tasks using a combination of RabbitMQ and Workling. The Workling worker process is started using the script/workling_client command. There is always only one worker process started, and script/workling_client has a :multiple => false option, thus allowing only one instance. But sometimes, under mysterious circumstances which I haven't been able to track down, more worklings spawn up. If I let the system run for some time, more and more worklings appear. I'm not sure if these rogue worklings cause any problems, but it is still unsettling not to know why it is happening. We are using Monit to monitor the workling process, so if it dies, it will be spawned up again. But this still does not explain how come there are suddenly more than one of them. So my question is: does anyone know what the cause of this can be and how to make it stop? Is it possible that workling sometimes dies by itself without deleting its pid file? Could there be something wrong with the Daemons gem that workling_client is built upon?

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. It's already been profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar.

    I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true? However, I don't know how to think in actors, so I am not sure where to start.

    To give a better idea of the problem, here is some sample code:

        case class Trade(price: Double, volume: Int, stock: String) {
          def value(priceCalculator: PriceCalculator) =
            (priceCalculator.priceFor(stock) - price) * volume
        }

        class PriceCalculator {
          def priceFor(stock: String) = {
            Thread.sleep(20) // a slow operation which can be cached
            50.0
          }
        }

        object ValueTrades {
          def valueAll(trades: List[Trade],
                       priceCalculator: PriceCalculator): List[(Trade, Double)] = {
            trades.map { trade => (trade, trade.value(priceCalculator)) }
          }

          def main(args: Array[String]) {
            val trades = List(
              Trade(30.5, 10, "Foo"),
              Trade(30.5, 20, "Foo") // usually much longer
            )
            val priceCalculator = new PriceCalculator
            val values = valueAll(trades, priceCalculator)
          }
        }

    I'd appreciate it if someone with experience using actors could suggest how this would map on to actors.

    Read the article

  • Same code, not the same place, different results

    - by Tom
    Hi! I'm trying to do something pretty simple, but I don't understand why the second bit of code is giving me a bad access error...

    First:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
            }
            // Tom: Converting the row int to a string and adding 1 to it so that the rows fit with the array indexes.
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row + 1];
            NSArray *array = [self.tableDataSource objectForKey:key];
            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);
            cell.textLabel.text = [array objectAtIndex:1];
            return cell;
        }

    That code gives me exactly what I want:

        2010-03-18 15:18:12.884 PremierSoins[28005:40b] 1
        2010-03-18 15:18:12.885 PremierSoins[28005:40b] ( 7, Blessures )
        2010-03-18 15:18:12.892 PremierSoins[28005:40b] 2
        2010-03-18 15:18:12.893 PremierSoins[28005:40b] ( 18, "Test 2" )

    But then, the second:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *key = [NSString stringWithFormat:@"%d", indexPath.row + 1];
            NSArray *array = [self.tableDataSource objectForKey:key];
            NSLog(@"%@", key);
            NSLog(@"%@", [array description]);
        }

    is only able to get me the key; the description of the array makes it crash... tableDataSource is an NSMutableDictionary containing multiple arrays, like this:

        {
            1 = ( 7, Blessures );
            2 = ( 18, "Test 2" );
        }

    Do you have any clue? I've been looking at this since yesterday... Thanks! Tom

    Read the article

  • How do you refactor a large messy codebase?

    - by Ricket
    I have a big mess of code. Admittedly, I wrote it myself - a year ago. It's not well commented but it's not very complicated either, so I can understand it -- just not well enough to know where to start as far as refactoring it. I violated every rule that I have read about over the past year. There are classes with multiple responsibilities, there are indirect accesses (I forget the technical term - something like foo.bar.doSomething()), and like I said it is not well commented. On top of that, it's the beginnings of a game, so the graphics is coupled with the data, or the places where I tried to decouple graphics and data, I made the data public in order for the graphics to be able to access the data it needs... It's a huge mess! Where do I start? How would you start on something like this? My current approach is to take variables and switch them to private and then refactor the pieces that break, but that doesn't seem to be enough. Please suggest other strategies for wading through this mess and turning it into something clean so that I can continue where I left off!

    Read the article

  • Interface vs Abstract Class (general OO)

    - by Kave
    Hi, I have recently had two telephone interviews where I've been asked about the differences between an interface and an abstract class. I have explained every aspect of them I could think of, but it seems they were waiting for me to mention something specific, and I don't know what it is. From my experience I think the following is true; if I am missing a major point please let me know.

    Interface:

    - Every single method declared in an interface will have to be implemented in the subclass.
    - Only events, delegates, properties (C#) and methods can exist in an interface.
    - A class can implement multiple interfaces.

    Abstract class:

    - Only abstract methods have to be implemented by the subclass.
    - An abstract class can have normal methods with implementations.
    - An abstract class can also have class variables beside events, delegates, properties and methods.
    - A class can only implement one abstract class, due to the non-existence of multiple inheritance in C#.

    1) After all that, the interviewer came up with the question "What if you had an abstract class with only abstract methods, how would that be different from an interface?" I didn't know the answer, but I think it's the inheritance as mentioned above, right?

    2) Another interviewer asked me what if you had a public variable inside the interface, how would that be different than in an abstract class? I insisted you can't have a public variable inside an interface. I didn't know what he wanted to hear, but he wasn't satisfied either.

    Many thanks for clarification, Kave

    See also:

    - When to use an interface instead of an abstract class and vice versa
    - Interfaces vs. Abstract Classes
    - How do you decide between using an Abstract Class and an Interface?
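    One way to picture the distinction the interviewers may have been probing, sketched here in Java (the question is framed in C# terms, and these types are hypothetical, not from the question): even an abstract class whose methods are all abstract can still carry instance fields, constructors and non-public members, and a class can extend only one of them, whereas it can implement any number of interfaces.

        interface Drawable {
            void draw();                 // implicitly public abstract; an interface cannot hold instance fields
        }

        abstract class Shape {
            protected int x, y;          // an abstract class may carry state...

            protected Shape(int x, int y) {  // ...and constructors, even if every method is abstract
                this.x = x;
                this.y = y;
            }

            abstract void draw();        // may use non-public access modifiers
        }

        // A class may implement any number of interfaces, but extend at most one class.
        class Circle extends Shape implements Drawable, Cloneable {
            Circle(int x, int y) { super(x, y); }

            @Override
            public void draw() {
                System.out.println("circle at " + x + "," + y);
            }
        }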

    Read the article

  • Linked list example using threads

    - by Carl_1789
    I have read the following code, which uses a CRITICAL_SECTION when working with multiple threads to grow a linked list. What would the main() part look like that uses two threads to add to the linked list?

        #include <windows.h>
        #include <stdlib.h>

        typedef struct _Node {
            struct _Node *next;
            int data;
        } Node;

        typedef struct _List {
            Node *head;
            CRITICAL_SECTION critical_sec;
        } List;

        List *CreateList() {
            List *pList = (List*)malloc(sizeof(List));
            pList->head = NULL;
            InitializeCriticalSection(&pList->critical_sec);
            return pList;
        }

        void AddHead(List *pList, Node *node) {
            EnterCriticalSection(&pList->critical_sec);
            node->next = pList->head;
            pList->head = node;
            LeaveCriticalSection(&pList->critical_sec);
        }

        void Insert(List *pList, Node *afterNode, Node *newNode) {
            EnterCriticalSection(&pList->critical_sec);
            if (afterNode == NULL) {
                AddHead(pList, newNode);
            } else {
                newNode->next = afterNode->next;
                afterNode->next = newNode;
            }
            LeaveCriticalSection(&pList->critical_sec);
        }

        Node *Next(List *pList, Node *node) {
            Node *next;
            EnterCriticalSection(&pList->critical_sec);
            next = node->next;
            LeaveCriticalSection(&pList->critical_sec);
            return next;
        }
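    For the main() question, here is an analogous sketch in Java rather than the Win32 API used above (Thread and synchronized stand in for CreateThread and the CRITICAL_SECTION); it only shows the shape of a program where two threads add to a shared, lock-guarded list:

        import java.util.LinkedList;
        import java.util.List;

        public class TwoWriterDemo {
            public static void main(String[] args) throws InterruptedException {
                final List<Integer> list = new LinkedList<>();
                final Object lock = new Object();          // plays the role of the CRITICAL_SECTION

                Runnable writer = () -> {
                    for (int i = 0; i < 1000; i++) {
                        synchronized (lock) {              // Enter/LeaveCriticalSection equivalent
                            list.add(0, i);                // AddHead
                        }
                    }
                };

                Thread t1 = new Thread(writer, "writer-1");
                Thread t2 = new Thread(writer, "writer-2");
                t1.start();
                t2.start();
                t1.join();                                 // wait for both writers to finish
                t2.join();

                System.out.println("final size: " + list.size());  // 2000 with correct locking
            }
        }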

    Read the article

  • Rails Browser Detection Methods

    - by alvincrespo
    Hey everyone, I was wondering what methods are standard within the industry to do browser detection in Rails? Is there a gem, library or sample code somewhere that can help determine the browser and apply a class or id to the body element of the (X)HTML? Thanks, I'm just wondering what everyone uses and whether there is an accepted method of doing this. I know that we can get user.agent and parse that string, but I'm not sure if that is an acceptable way to do browser detection. Also, I'm not trying to debate feature detection here, I've read multiple answers for that on StackOverflow; all I'm asking for is what you guys have done.

    [UPDATE] So thanks to faunzy on GitHub, I sort of understand a bit about checking the user agent in Rails, but I'm still not sure if this is the best way to go about it in Rails 3. Here is what I've got so far:

        def users_browser
          user_agent = request.env['HTTP_USER_AGENT'].downcase
          @users_browser ||= begin
            if user_agent.index('msie') && !user_agent.index('opera') && !user_agent.index('webtv')
              'ie' + user_agent[user_agent.index('msie') + 5].chr
            elsif user_agent.index('gecko/')
              'gecko'
            elsif user_agent.index('opera')
              'opera'
            elsif user_agent.index('konqueror')
              'konqueror'
            elsif user_agent.index('ipod')
              'ipod'
            elsif user_agent.index('ipad')
              'ipad'
            elsif user_agent.index('iphone')
              'iphone'
            elsif user_agent.index('chrome/')
              'chrome'
            elsif user_agent.index('applewebkit/')
              'safari'
            elsif user_agent.index('googlebot/')
              'googlebot'
            elsif user_agent.index('msnbot')
              'msnbot'
            elsif user_agent.index('yahoo! slurp')
              'yahoobot'
            # Everything thinks it's Mozilla, so this goes last
            elsif user_agent.index('mozilla/')
              'gecko'
            else
              'unknown'
            end
          end
          return @users_browser
        end

    Read the article

  • MySQL row locking via PHP

    - by deezee
    I am helping a friend with a web-based form that is for their business. I am trying to get it ready to handle multiple users. I have set it up so that just before the record is displayed for editing, I lock the record with the following code:

        $query = "START TRANSACTION;";
        mysql_query($query);
        $query = "SELECT field FROM table WHERE ID = \"$value\" FOR UPDATE;";
        mysql_query($query);

    (Okay, that is greatly simplified, but that is the essence of the MySQL.) It does not appear to be working. However, when I go directly to MySQL from the command line, logging in with the same user, and execute

        START TRANSACTION;
        SELECT field FROM table WHERE ID = "40" FOR UPDATE;

    I can effectively block the web form from accessing record "40" and get the timeout warning. I have tried using BEGIN instead of START TRANSACTION. I have tried doing SET AUTOCOMMIT=0 first and starting the transaction after locking, but I cannot seem to lock the row from the PHP code. Since I can lock the row from the command line, I do not think there is a problem with how the database is set up. I am really hoping that there is some simple something that I have missed in my reading. FYI, I am developing on XAMPP version 1.7.3, which has Apache 2.2.14, MySQL 5.1.41 and PHP 5.3.1. Thanks in advance. This is my first time posting, but I have gleaned a lot of knowledge from this site in the past.

    Read the article

  • Cryptography: best practices for keys in memory?

    - by Johan
    Background: I have some data encrypted with AES (i.e. symmetric crypto) in a database. A server-side application, running on an (assumed) secure and isolated Linux box, uses this data. It reads the encrypted data from the DB and writes back encrypted data, only dealing with the unencrypted data in memory. So, in order to do this, the app is required to have the key stored in memory.

    The question is, are there any good best practices for this, i.e. securing the key in memory? A few ideas:

    - Keeping it in unswappable memory (for Linux: setting SHM_LOCK with shmctl(2)?)
    - Splitting the key over multiple memory locations.
    - Encrypting the key. With what, and how to keep the... key key... secure?
    - Loading the key from file each time it's required (slow, and if the evildoer can read our memory, he can probably read our files too).

    Some scenarios on why the key might leak: the evildoer getting hold of a memory dump/core dump; bad bounds checking in code leading to information leakage.

    The first one seems like a good and pretty simple thing to do, but how about the rest? Other ideas? Any standard specifications/best practices? Thanks for any input!
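    As a concrete illustration of the key-splitting idea in the list above, here is a minimal Java sketch (hypothetical names, not tied to any particular library): the key is held as two random XOR shares, so no single buffer contains the whole key, and it is only recombined transiently at the moment of use.

        import java.security.SecureRandom;

        /** Sketch: hold a symmetric key as two XOR shares instead of one contiguous buffer. */
        public class SplitKey {
            private final byte[] shareA;
            private final byte[] shareB;

            public SplitKey(byte[] key) {
                SecureRandom rng = new SecureRandom();
                shareA = new byte[key.length];
                rng.nextBytes(shareA);                        // random mask
                shareB = new byte[key.length];
                for (int i = 0; i < key.length; i++) {
                    shareB[i] = (byte) (key[i] ^ shareA[i]);  // key = shareA XOR shareB
                }
                java.util.Arrays.fill(key, (byte) 0);         // wipe the caller's copy
            }

            /** Recombine only for the moment of use, then wipe the temporary copy. */
            public <T> T withKey(java.util.function.Function<byte[], T> use) {
                byte[] key = new byte[shareA.length];
                for (int i = 0; i < key.length; i++) {
                    key[i] = (byte) (shareA[i] ^ shareB[i]);
                }
                try {
                    return use.apply(key);
                } finally {
                    java.util.Arrays.fill(key, (byte) 0);
                }
            }
        }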

    Read the article

  • How to send a value to the form action page from the database

    - by Mayank swami
    I am creating a FAQ panel. There can be multiple answers for a question, and I want to take the answer ID, because I am storing comments by answer ID. The problem is how to send the $answer_id to comment_submit_process.php and how to recognize which answer the comment belongs to.

        <?php
        $selected_ques = mysql_prep($_GET['ques']);
        $query = "SELECT * FROM formanswer where question_id = {$selected_ques}";
        $ans = mysql_query($query);
        if ($ans) {
            while ($answer = mysql_fetch_array($ans)) {
                // here is the form
        ?>
                <form id="add-comment" action="comment_submit_process.php">
                    <textarea class="comment-submit-textarea" cols="78" name="comment" style="height: 64px;"></textarea>
                    <input type="submit" name="submitbutton" value="Add Comment" class="comment-submit-button">
                    <br>
                    <?php
                    $ans_id = $answer['id'];
                    echo $ans_id;
                    ?>
                    <input type="hidden" name="ques" value="<?php echo $_GET['$ans_id'] ?>" />
                    <span class="counter">enter at least 15 characters</span>
                    <span class="form-error"></span>
                </form>
        <?php }} ?>

    Read the article

  • Is there a pattern for initializing objects created with a DI container

    - by Igor Zevaka
    I am trying to get Unity to manage the creation of my objects, and I want to have some initialization parameters that are not known until run-time. At the moment the only way I could think of to do it is to have an Init method on the interface:

        interface IMyIntf
        {
            void Initialize(string runTimeParam);
            string RunTimeParam { get; }
        }

    Then to use it (in Unity) I would do this:

        var IMyIntf = unityContainer.Resolve<IMyIntf>();
        IMyIntf.Initialize("somevalue");

    In this scenario the runTimeParam parameter is determined at run-time based on user input. The trivial case here simply returns the value of runTimeParam, but in reality the parameter will be something like a file name, and the Initialize method will do something with the file. This creates a number of issues, namely that the Initialize method is available on the interface and can be called multiple times. Setting a flag in the implementation and throwing an exception on a repeated call to Initialize seems way clunky. At the point where I resolve my interface I don't want to know anything about the implementation of IMyIntf. What I do want, though, is the knowledge that this interface needs certain one-time initialization parameters. Is there a way to somehow annotate (attributes?) the interface with this information and pass those to the framework when the object is created?

    Edit: Described the interface a bit more.

    Read the article

  • Reverse mapping from a table to a model in SQLAlchemy

    - by Jace
    To provide an activity log in my SQLAlchemy-based app, I have a model like this:

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            activity_by_id = Column(Integer, ForeignKey('users.id'), nullable=False)
            activity_by = relation(User, primaryjoin=activity_by_id == User.id)
            activity_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            activity_type = Column(SmallInteger, nullable=False)
            target_table = Column(Unicode(20), nullable=False)
            target_id = Column(Integer, nullable=False)
            target_title = Column(Unicode(255), nullable=False)

    The log contains entries for multiple tables, so I can't use ForeignKey relations. Log entries are made like this:

        doc = Document(name=u'mydoc', title=u'My Test Document',
                       created_by=user, edited_by=user)
        session.add(doc)
        session.flush()  # See note below
        log = ActivityLog(activity_by=user, activity_type=ACTIVITY_ADD,
                          target_table=Document.__table__.name,
                          target_id=doc.id, target_title=doc.title)
        session.add(log)

    This leaves me with three problems:

    1. I have to flush the session before my doc object gets an id. If I had used a ForeignKey column and a relation mapper, I could have simply called ActivityLog(target=doc) and let SQLAlchemy do the work. Is there any way to work around needing to flush by hand?
    2. The target_table parameter is too verbose. I suppose I could solve this with a target property setter in ActivityLog that automatically retrieves the table name and id from a given instance.
    3. Biggest of all, I'm not sure how to retrieve a model instance from the database. Given an ActivityLog instance log, calling self.session.query(log.target_table).get(log.target_id) does not work, as query() expects a model as parameter.

    One workaround appears to be to use polymorphism and derive all my models from a base model which ActivityLog recognises. Something like this:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            title = Column(Unicode(255), nullable=False)
            edited_at = Column(DateTime, onupdate=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type}

        class Document(Entity):
            __tablename__ = 'documents'
            __mapper_args__ = {'polymorphic_identity': 'document'}
            body = Column(UnicodeText, nullable=False)

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            ...
            target_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            target = relation(Entity)

    If I do this, ActivityLog(...).target will give me a Document instance when it refers to a Document, but I'm not sure it's worth the overhead of having two tables for everything. Should I go ahead and do it this way?

    Read the article

  • NSPredicate causes update editing to return NSFetchedResultsChangeDelete not NSFetchedResultsChangeUpdate

    - by Matthew Weiss
    I have a predicate inside of - (NSFetchedResultsController *)fetchedResultsController, set up in a standard way starting from the CoreDataBook example:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:
            @"state = %@ && date >= %@ && date < %@", @"1", fromDate, toDate];
        [fetchRequest setPredicate:predicate];

    This works fine; however, when editing an item, it returns with NSFetchedResultsChangeDelete, not NSFetchedResultsChangeUpdate. When the main view returns, it is missing the item. If I restart the simulator, the delete was not saved and the correct editing result is shown, with the predicate working correctly.

        case NSFetchedResultsChangeDelete:
            [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath]
                             withRowAnimation:UITableViewRowAnimationFade];
            break;

    I can confirm the behavior by commenting out the two predicate lines ONLY; then everything works as it should, correctly returning with the full set after editing and calling NSFetchedResultsChangeUpdate instead of NSFetchedResultsChangeDelete. I have read http://matteocaldari.it/2009/11/multiple-contexts-controllers-delegates-and-coredata-bug who reports similar behavior, but I have not found a workaround to my problem. I can

    Read the article

  • How best to calculate derived currency rate conversions using C#/LINQ?

    - by chillitom
    I have a rate class and an incomplete list of direct rates:

        class FxRate
        {
            public string Base { get; set; }
            public string Target { get; set; }
            public double Rate { get; set; }
        }

        private IList<FxRate> rates = new List<FxRate>
        {
            new FxRate {Base = "EUR", Target = "USD", Rate = 1.3668},
            new FxRate {Base = "GBP", Target = "USD", Rate = 1.5039},
            new FxRate {Base = "USD", Target = "CHF", Rate = 1.0694},
            new FxRate {Base = "CHF", Target = "SEK", Rate = 8.12}
            // ...
        };

    Given a large yet incomplete list of exchange rates where all currencies appear at least once (either as a target or base currency): what algorithm would I use to be able to derive rates for exchanges that aren't directly listed? I'm looking for a general-purpose algorithm of the form:

        public double Rate(string baseCode, string targetCode, double currency)
        {
            return ...
        }

    In the example above a derived rate would be GBP-CHF or EUR-SEK (which would require using the conversions for EUR-USD, USD-CHF, CHF-SEK). Whilst I know how to do the conversions by hand, I'm looking for a tidy way (perhaps using LINQ) to perform these derived conversions, perhaps involving multiple currency hops. What's the nicest way to go about this?
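    The multi-hop derivation described above is essentially a path search over a graph whose nodes are currencies and whose edges are the listed rates plus their inverses; the derived rate is the product of the rates along the path. Here is a sketch of that idea, written in Java rather than C#/LINQ and with hypothetical names, just to show the shape of the algorithm:

        import java.util.*;

        public class RateGraph {
            // currency -> (neighbour currency -> direct conversion rate)
            private final Map<String, Map<String, Double>> edges = new HashMap<>();

            public void add(String base, String target, double rate) {
                edges.computeIfAbsent(base, k -> new HashMap<>()).put(target, rate);
                edges.computeIfAbsent(target, k -> new HashMap<>()).put(base, 1.0 / rate); // inverse edge
            }

            /** Breadth-first search from base to target, multiplying rates along the path. */
            public double rate(String base, String target) {
                Map<String, Double> best = new HashMap<>();
                best.put(base, 1.0);
                Deque<String> queue = new ArrayDeque<>();
                queue.add(base);
                while (!queue.isEmpty()) {
                    String current = queue.remove();
                    double soFar = best.get(current);
                    if (current.equals(target)) return soFar;
                    for (Map.Entry<String, Double> e : edges.getOrDefault(current, Collections.emptyMap()).entrySet()) {
                        if (!best.containsKey(e.getKey())) {     // visit each currency once
                            best.put(e.getKey(), soFar * e.getValue());
                            queue.add(e.getKey());
                        }
                    }
                }
                throw new NoSuchElementException("no conversion path from " + base + " to " + target);
            }

            public static void main(String[] args) {
                RateGraph g = new RateGraph();
                g.add("EUR", "USD", 1.3668);
                g.add("GBP", "USD", 1.5039);
                g.add("USD", "CHF", 1.0694);
                g.add("CHF", "SEK", 8.12);
                System.out.println(g.rate("GBP", "CHF")); // GBP -> USD -> CHF
                System.out.println(g.rate("EUR", "SEK")); // EUR -> USD -> CHF -> SEK
            }
        }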

    Read the article

  • How to use a parameterized function for the Default Binding of a SQL Server column

    - by Walt Gaber
    I have a table that catalogs selected files from multiple sources. I want to record whether a file is a duplicate of a previously cataloged file at the time the new file is cataloged. I have a column in my table ("primary_duplicate") to record each entry as 'P' (primary) or 'D' (duplicate). I would like to provide a Default Binding for this column that would check for other occurrences of this file (i.e. name, length, timestamp) at the time the new file is being recorded. I have created a function that performs this check (see "GetPrimaryDuplicate" below). But I don't know how to bind this function, which requires three parameters, to the table's "primary_duplicate" column as its Default Binding. I would like to avoid using a trigger. I currently have a stored procedure used to insert new records that performs this check, but I would like to ensure that the flag is set correctly if an insert is performed outside of this stored procedure. How can I call this function with values from the row that is being inserted?

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[FileCatalog](
            [id] [uniqueidentifier] NOT NULL,
            [catalog_timestamp] [datetime] NOT NULL,
            [primary_duplicate] nchar NOT NULL,
            [name] nvarchar NULL,
            [length] [bigint] NULL,
            [timestamp] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_id]
            DEFAULT (newid()) FOR [id]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_catalog_timestamp]
            DEFAULT (getdate()) FOR [catalog_timestamp]
        GO
        ALTER TABLE [dbo].[FileCatalog] ADD CONSTRAINT [DF_FileCatalog_primary_duplicate]
            DEFAULT (N'GetPrimaryDuplicate(name, length, timestamp)') FOR [primary_duplicate]
        GO

        USE [MyDatabase]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE FUNCTION [dbo].[GetPrimaryDuplicate]
        (
            @name nvarchar(255),
            @length bigint,
            @timestamp datetime
        )
        RETURNS nchar(1)
        AS
        BEGIN
            DECLARE @c int
            SELECT @c = COUNT(*)
            FROM FileCatalog
            WHERE name = @name and length = @length and timestamp = @timestamp
              and primary_duplicate = 'P'
            IF @c > 0
                RETURN 'D' -- Duplicate
            RETURN 'P' -- Primary
        END
        GO

    Read the article

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    - Create a separate repository for the Installer and related files
    - Create a subrepository for the Installer and related files
    - Use a (yet to be identified) build dependency manager

    I am concerned with using a subrepository with Mercurial based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try out any options. I thought I'd use TortoiseHg as a basis, and it does not have the TortoiseHg binaries in the repository, although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?

    Read the article

  • HttpPostedFile.SaveAs() throws UnauthorizedAccessException even though the file is saved?

    - by jrummell
    I have an aspx page with multiple FileUpload controls and one Upload button. In the click handler I save the files like this:

        string path = "...";
        for (int i = 0; i < Request.Files.Count - 1; i++)
        {
            HttpPostedFile file = Request.Files[i];
            string fileName = Path.GetFileName(file.FileName);
            string saveAsPath = Path.Combine(path, fileName);
            file.SaveAs(saveAsPath);
        }

    When file.SaveAs() is called, it throws:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown.
         ---> System.UnauthorizedAccessException: Access to the path '...' is denied.
           at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
           at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy)
           at System.IO.FileStream..ctor(String path, FileMode mode)
           at System.Web.HttpPostedFile.SaveAs(String filename)
           at Belden.Web.Intranet.Iso.Complaints.AttachmentUploader.btnUpload_Click(Object sender, EventArgs e)
           at System.Web.UI.WebControls.Button.OnClick(EventArgs e)
           at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
           at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           --- End of inner exception stack trace ---
           at System.Web.UI.Page.HandleError(Exception e)
           at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
           at System.Web.UI.Page.ProcessRequest()
           at System.Web.UI.Page.ProcessRequest(HttpContext context)
           at ASP.departments_iso_complaints_uploadfiles_aspx.ProcessRequest(HttpContext context)
           at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Now here's the fun part: the file is saved correctly! So why is it throwing this exception?

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships:

    - has many Contributions
    - has many Payments

    In my result set, I need the following aggregate values:

    - Number of unique contributors (DonorID on the Contribution table)
    - Total contributed (SUM of Amount on the Contribution table)
    - Total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions in the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY(project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE
            total_donors = VALUES(total_donors),
            total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article
