Search Results

Search found 41035 results on 1642 pages for 'object oriented design'.

  • Need alternative field names for these reserved words

    - by MattSlay
    “type” and “class” are likely reserved or problematic words in C# and/or Ruby, two languages I may use to program against my new database schema in the future. So, in order to avoid potential conflicts with those languages, I’m looking for alternative names for these fields in my tables.

    In this case, it is my Machines table, where I have a “class” field (values would be something like “manual” or “computerized”) and a “type” field (values would be “lathe” or “mill”). I could call the fields “machineclass” and “machinetype”, but that is inconsistent with the naming scheme in the rest of my schema (meaning, I do not re-use the table name in the field name… for instance, I use Machine.name, not Machine.machinename). Any thoughts on this madness?
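    For illustration, a minimal sketch (SQL; the replacement names are just suggestions, not from the question):

        -- Synonyms that dodge the reserved words without repeating the table name
        CREATE TABLE Machine (
            id       INT PRIMARY KEY,
            name     VARCHAR(50),
            category VARCHAR(20),  -- instead of "class": 'manual' / 'computerized'
            kind     VARCHAR(20)   -- instead of "type":  'lathe' / 'mill'
        );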

  • Tables as relations in ER diagrams

    - by Richard Mar.
    Assume I have the following tables (**bold** - primary key, *italics* - foreign key):

        patient(**patient_id**, name)
        disease(**disease_id**, name)
        patient_disease(**p_d_id**, *patient_id*, *disease_id*)

    I want to draw the ER diagram for this. My idea is to make two entities, one for patient and one for disease, then make an n-to-n relationship between them, with p_d_id as its attribute. Is that how it's supposed to be?
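    For illustration, a minimal sketch (SQL, with hypothetical column types) of the bridge table that such an n-to-n relationship maps to:

        CREATE TABLE patient_disease (
            p_d_id     INT PRIMARY KEY,
            patient_id INT NOT NULL REFERENCES patient(patient_id),
            disease_id INT NOT NULL REFERENCES disease(disease_id)
        );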

  • Conditional Styling In Silverlight?

    - by DeanMc
    Hi, while I'm fine with standard control styling in Silverlight, I have recently begun using more dynamic methods of fetching data to be displayed in items controls. One of the controls I am reworking is a collection of links. The issue I am having is that each link is coloured differently when moused over: one red, one blue, one green, etc. Is there a way to style these items without sacrificing the dynamism of using an items control with a data template?
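    One possible direction, sketched under assumptions not in the question (a per-item colour property such as "HoverColor", with the converter wired up in the DataTemplate):

        // Turns each item's Color into a Brush, keeping the template generic.
        using System;
        using System.Globalization;
        using System.Windows.Data;
        using System.Windows.Media;

        public class ColorToBrushConverter : IValueConverter
        {
            public object Convert(object value, Type targetType,
                                  object parameter, CultureInfo culture)
            {
                // "value" is assumed to be the item's HoverColor (a Color)
                return new SolidColorBrush((Color)value);
            }

            public object ConvertBack(object value, Type targetType,
                                      object parameter, CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }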

  • In symfony/doctrine's schema.yml, where should I put onDelete: CASCADE for a many-to-many relationship?

    - by nselikoff
    I have a many-to-many relationship defined in my Symfony (using Doctrine) project between Orders and Upgrades (an Order can be associated with zero or more Upgrades, and an Upgrade can apply to zero or more Orders).

        # schema.yml
        Order:
          columns:
            order_id: {...}
          relations:
            Upgrades:
              class: Upgrade
              local: order_id
              foreign: upgrade_id
              refClass: OrderUpgrade

        Upgrade:
          columns:
            upgrade_id: {...}
          relations:
            Orders:
              class: Order
              local: upgrade_id
              foreign: order_id
              refClass: OrderUpgrade

        OrderUpgrade:
          columns:
            order_id: {...}
            upgrade_id: {...}

    I want to set up delete cascade behavior so that if I delete an Order or an Upgrade, all of the related OrderUpgrades are deleted. Where do I put onDelete: CASCADE? Usually I would put it at the end of the relations section, but that would seem to imply in this case that deleting Orders would cascade to delete Upgrades. Is Symfony + Doctrine smart enough to know what I'm wanting if I put onDelete: CASCADE in the above relations sections of schema.yml?
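    For what it's worth, a minimal sketch of one commonly suggested placement (an assumption about Doctrine 1 behaviour, not verified here): since onDelete becomes a database-level constraint on a foreign key, it would live on the side that owns the keys, i.e. on relations declared in the OrderUpgrade refClass:

        # hypothetical: cascade declared where the foreign keys live
        OrderUpgrade:
          columns:
            order_id: {...}
            upgrade_id: {...}
          relations:
            Order:
              local: order_id
              foreign: order_id
              onDelete: CASCADE
            Upgrade:
              local: upgrade_id
              foreign: upgrade_id
              onDelete: CASCADE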

  • Fowler Analysis Patterns lately?

    - by Berryl
    As much as I've always loved this book, I've always wished there were more meaty examples of how to apply some of the concepts. Is anyone aware of anything out there worth looking at that attempts to do that? Cheers, Berryl

  • How extensible should code actually be?

    - by griegs
    I've just started a new job, and one of the things my new boss talked to me about was code longevity. I've always coded to make my code infinitely extensible and adaptable. I figured that if someone was going to change my code in the future, it should be easy to do. But I never really had a clear idea of how far into the future that should be. So my new boss told me not to bother coding for anything more than 3 years into the future, and his reasoning was that technology changes, programs expire, etc. At first I was kinda taken aback and thought he was a whack job, but the longer I think about it, the more I'm warming to the concept. Does anyone else have an opinion on how far into the future you should code for?

  • Learning Modelling

    - by me1234
    Is there a good book I can follow to learn modelling/architecture? Any good samples? What would you do if you had to learn modelling from the very basics? Thanks

  • When is it better to use a method versus a property for a class definition?

    - by ccomet
    Partially related to an earlier question of mine, I have a system in which I have to store complex data as a string. Instead of parsing these strings as all kinds of separate objects, I just created one class that contains all of those objects, and it has some parser logic that will encode all properties into strings, or decode a string to get those objects. That's all fine and good. This question is not about the parser itself, but about where I should house the logic for the parser. Is it a better choice to put it in a property, or in a method?

    In the case of a property, say public string DataAsString, the get accessor would house the logic to encode all of the data into a string, while the set accessor would decode the input value and set all of the data in the class instance. It seems convenient because the input/output is indeed a string.

    In the case of a method, one method would be Encode(), which returns the encoded string. Then, either the constructor itself would house the logic for decoding a string and require the string argument, or I write a Decode(string str) method which is called separately. In either case, it would be using a method instead of a property.

    So, is there a functional difference between these paths, in terms of the actual running of the code? Or are they basically equivalent, so that it boils down to a choice of personal preference or which one looks better? And in that kind of question... which would look cleaner anyway?
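    For illustration, a minimal sketch (C#; the class and member names are hypothetical) of the two shapes being compared:

        public class ComplexData
        {
            // Property form: get encodes, set decodes.
            public string DataAsString
            {
                get { return Encode(); }
                set { Decode(value); }
            }

            // Method form: explicit Encode/Decode calls.
            public string Encode() { /* build and return the encoded string */ return ""; }
            public void Decode(string str) { /* parse str, set the fields */ }
        }

    A guideline often cited for .NET APIs is that property getters should be cheap and side-effect-free, which would tend to favour the method form when encoding is expensive.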

  • How do I save user specific data in an asp.net site?

    - by Greg McNulty
    I just set up user profiles using ASP.NET 3.5 using wvd. For each user I would like to store data that they will be updating every day. For example, every time they go for a run they will update time and distance. I intend to allow them to also look up their history of distance and time from any past date. My question is: what does the database schema usually look like for such a setup? Currently ASP.NET set up a DB for me when I made user profiles. Do I just add an extra table for every user? Should there be one big table with all users' data? How do I relate a user ID to their specific data? Etc. I have never done this before, so any ideas on how this is usually done would be very helpful. Thank you.
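    For illustration, a minimal sketch (SQL; table and column names are hypothetical) of the usual shape: one table for all users, keyed by the membership user ID, rather than a table per user:

        CREATE TABLE Run (
            RunID    INT IDENTITY PRIMARY KEY,
            UserId   UNIQUEIDENTIFIER NOT NULL,  -- matches the membership users table
            RunDate  DATETIME NOT NULL,
            Distance DECIMAL(8,2) NOT NULL,      -- e.g. kilometres
            Duration INT NOT NULL                -- e.g. seconds
        );

        -- History for one user:
        -- SELECT RunDate, Distance, Duration FROM Run
        -- WHERE UserId = @UserId ORDER BY RunDate;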

  • In Ruby, can the coerce() method know what operator it is that requires the help to coerce?

    - by Jian Lin
    In Ruby, it seems that a lot of coerce() help can be done by:

        def coerce(something)
          [self, something]
        end

    That is, when 3 + rational is needed, the Fixnum 3 doesn't know how to handle adding a Rational, so it asks Rational#coerce for help by calling rational.coerce(3), and this coerce instance method will tell the caller:

        # I know how to handle rational + something, so I will return you the following:
        [self, something]
        # so that now you can invoke + on me, and I will deal with Fixnum to get an answer

    So what if most operators can use this method, but not in an (a - b) != (b - a) situation? Can coerce() know which operator it is, and just handle those special cases, while using the simple [self, something] for all the other cases where (a op b) == (b op a)? (op is the operator.)
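    For what it's worth, a minimal sketch (Ruby, for a numeric class you control) of the usual way around this: coerce is never told the operator, but it doesn't need to be if it converts the other operand and preserves operand order:

        def coerce(other)
          # a op b triggers b.coerce(a), and Ruby retries the result as
          # converted_a op b -- so subtraction and division still come out right
          [self.class.new(other), self]
        end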

  • Given a trace of packets, how would you group them into flows?

    - by zxcvbnm
    I've tried it these ways so far:

    1) Make a hash with the source IP/port and destination IP/port as keys. Each position in the hash is a list of packets. The hash is then saved in a file, with each flow separated by some special characters/line. Problem: not enough memory for large traces.

    2) Make a hash with the same key as above, but only keep in memory the file handles. Each packet is then put into the hash[key] that points to the right file. Problems: too many flows/files (~200k), and it might run out of memory as well.

    3) Hash the source IP/port and destination IP/port, then put the info inside a file. The difference between 2 and 3 is that here the files are opened and closed for each operation, so I don't have to worry about running out of memory because I opened too many at the same time. Problems: WAY too slow, and the same number of files as 2, so also impractical.

    4) Make a hash of the source IP/port pairs and then iterate over the whole trace for each flow. Take the packets that are part of that flow and place them into the output file. Problem: suppose I have a 60 MB trace that has 200k flows. This way, I would process, say, a 60 MB file 200k times. Maybe removing the packets as I iterate would make it not so painful, but so far I'm not sure this would be a good solution.

    5) Split them by IP source/destination and then create a single file for each one, separating the flows by special characters. Still too many files (+50k).

    Right now I'm using Ruby to do it, which might've been a bad idea, I guess. Currently I've filtered the traces with tshark so that they only have relevant info, so I can't really make them any smaller. I thought about loading everything in memory as described in 1) using C#/Java/C++, but I was wondering if there wouldn't be a better approach here, especially since I might also run out of memory later on even with a more efficient language if I have to use larger traces.

    In summary, the problem I'm facing is that I either have too many files or I run out of memory. I've also tried searching for some tool to filter the info, but I don't think there is one. The ones I've found only return some statistics and wouldn't scan for every flow as I need.
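    One middle-ground sketch (Ruby; each_packet and the pkt fields are hypothetical stand-ins for whatever the trace reader provides): hash-partition flows into a fixed number of bucket files, then group each bucket in memory, which bounds both the open-file count and the memory footprint:

        NUM_BUCKETS = 256
        buckets = Array.new(NUM_BUCKETS) { |i| File.open("bucket_#{i}.tmp", "w") }

        # Pass 1: route every packet to its flow's bucket (few open files).
        each_packet(trace) do |pkt|
          key = [pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port].join("|")
          buckets[key.hash % NUM_BUCKETS].puts("#{key}\t#{pkt.raw}")
        end
        buckets.each(&:close)

        # Pass 2: each bucket holds roughly total_flows/256 flows,
        # small enough to group in memory.
        NUM_BUCKETS.times do |i|
          flows = Hash.new { |h, k| h[k] = [] }
          File.foreach("bucket_#{i}.tmp") do |line|
            key, raw = line.chomp.split("\t", 2)
            flows[key] << raw
          end
          # write the grouped flows out, delimited as in approach 1
        end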

  • Is Form validation and Business validation too much?

    - by Robert Cabri
    I've got this question about form validation and business validation. I see a lot of frameworks that use some sort of form validation library: you submit some values and the library validates the values from the form. If not OK, it will show some errors on your screen. If all goes to plan, the values will be set into domain objects. Here the values will be, or better said should be, validated again (most likely the same validation as in the validation library). I know 2 PHP frameworks having this kind of construction: Zend/Kohana.

    When I look at programming and some principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way: as you can see, it validates twice. Why not create domain objects that do the actual validation?

    Example: a form with a username field and an email field is submitted. The values of the username field and the email field will be populated into 2 different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if not valid, throw an exception.

    Do you agree? What do you think about this approach? Is there a better way to implement validations? I'm confused about a lot of frameworks/developers handling this stuff. Are they all wrong, or am I missing a point?

    Edit: I know there should also be some kind of client-side validation. That is a different ballgame in my opinion. If you have some comments on this and a way to deal with this kind of stuff, please provide them.
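    For illustration, a minimal sketch (PHP, hypothetical) of such a self-validating value object:

        class Email
        {
            private $value;

            public function __construct($value)
            {
                // Reject invalid input at construction time, so an Email
                // object can never hold an invalid address.
                if (!filter_var($value, FILTER_VALIDATE_EMAIL)) {
                    throw new InvalidArgumentException("Invalid email: $value");
                }
                $this->value = $value;
            }

            public function __toString()
            {
                return $this->value;
            }
        }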

  • Back Orders for ERP: data model references ?

    - by Patrick Honorez
    I have built an ERP using SQL Server as a back-end. These are the different types of client documents (there are also supplier docs):

    - Order -- impact: BO
    - Delivery Note (also used for returns, with negative quantity) -- impact: BO, Stock
    - Invoice -- impact: accounting only
    - Credit Note -- impact: accounting, BO

    I use a complex system of self-joins (at detail level) to find out the quantities in each OrderDetail that still have a backorder (BO). I'd like to simplify this using a [group] field that could be used through all detail lines related to an original order. There are many difficult things to trace: a return of a product may be due to a defect and thus increase the BO, or it can be just a return, joined with a Credit Note, and then have no impact on BO. My question is: do you know of any really good reference (book, web) for this matter?
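    For context, a minimal sketch (SQL, with hypothetical table and column names) of the "group field" idea: every detail line carries the originating order line, so the backorder becomes a simple grouped difference instead of a web of self-joins:

        SELECT ol.OrderLineID,
               ol.Qty AS QtyOrdered,
               COALESCE(SUM(dl.Qty), 0) AS QtyDelivered,   -- returns carry negative qty
               ol.Qty - COALESCE(SUM(dl.Qty), 0) AS QtyBackordered
        FROM OrderLine ol
        LEFT JOIN DeliveryLine dl ON dl.OrderLineGroup = ol.OrderLineID
        GROUP BY ol.OrderLineID, ol.Qty;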

  • Finding changes in MongoDB database

    - by Jonathan Knight
    I'm designing a MongoDB database that works with a script that periodically polls a resource and gets back a response, which is stored in the database. Right now my database has one collection with four fields: id, name, timestamp and data. I need to be able to find out which names had changes in the data field between script runs, and which did not. In pseudocode:

        if (data[name][timestamp] == data[name][timestamp+1])
            // data has not changed
            store data in collection 1
        else
            // data has changed between script runs for this name
            store data in collection 2

    Is there a query that can do this without iterating and running JavaScript over each item in the collection? There are millions of documents, so this would be pretty slow. Should I create a new collection named by timestamp for every time the script runs? Would that make it faster/more organized? Is there a better schema that could be used? The script runs once a day, so I won't run into a namespace limitation any time soon.
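    One possible direction, sketched as an assumption rather than a known-good answer (mongo shell; collection and field names are hypothetical): store a hash of data with each document at insert time, so "changed?" becomes a comparison of the two latest hashes instead of per-document JavaScript over the whole collection:

        db.readings.insert({
            name: "resource-1",
            ts: new Date(),
            data: payload,
            dataHash: hex_md5(tojson(payload))  // computed once, at insert time
        });

        // Later, for a given name: compare the two most recent hashes.
        var last = db.readings.find({ name: "resource-1" })
                              .sort({ ts: -1 }).limit(2).toArray();
        var changed = (last.length == 2 && last[0].dataHash != last[1].dataHash);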

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:

    - The class hierarchy is complex and not obvious.
    - Some classes don't have a well-defined purpose (they do a number of unrelated things).
    - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
    - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
    - My memory management/pooling system is kinda clunky and less than transparent.

    These look like excellent reasons to refactor and clean code, aiding future maintenance and extension, but it could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code or work on features?

  • Updating multiple tables with LinqToSql in one unit of work

    - by zsharp
    Table 1: int ID-a (pk)
    Table 2: int ID-a (pk), int ID-b (pk)
    Table 3: int ID-b (pk), string C

    I have the data to insert into Table 1, but I do not have the ID-a, which is autogenerated. I have many string C values to insert into Table 3. I am trying to insert a row into Table 1, get the ID-a to insert into Table 2 along with the ID-b that is auto-generated in Table 3 when I submit each string C, all in one submission to the DB. Right now I am calling dc.SubmitChanges twice in the same call. Is it efficient to have to submit changes twice on the same DataContext, or can this be combined further?
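    For illustration, a minimal sketch (C#; entity names are hypothetical, and it assumes the foreign-key associations are mapped in the DataContext): when new rows are linked through their entity references, LINQ to SQL orders the inserts itself, so a single SubmitChanges can cover all three tables:

        using (var dc = new MyDataContext())
        {
            var t1 = new Table1();                      // ID-a assigned on insert
            foreach (var c in stringCs)
            {
                var t3 = new Table3 { C = c };          // ID-b assigned on insert
                // Foreign keys set via association references, not raw IDs:
                var t2 = new Table2 { Table1 = t1, Table3 = t3 };
                dc.Table2s.InsertOnSubmit(t2);          // t1 and t3 inserted implicitly
            }
            dc.SubmitChanges();                         // one unit of work
        }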

  • Using Doctrine to abstract CRUD operations

    - by TomWilsonFL
    This has bothered me for quite a while, but now it is a necessity that I find the answer. We are working on quite a large project using CodeIgniter plus Doctrine. Our application has a front end and also an admin area for the company to check/change/delete data. When we designed the front end, we simply consumed most of the Doctrine code right in the controller:

        // In semi-pseudocode
        function register()
        {
            $data = get_post_data();
            if (count($data) && isValid($data)) {
                $U = new User();
                $U->fromArray($data);
                $U->save();

                $C = new Customer();
                $C->fromArray($data);
                $C->user_id = $U->id;
                $C->save();

                redirect_to_next_step();
            }
        }

    Obviously when we went to do the admin views, code duplication began, and considering we were in a "get it DONE" mode, it now stinks with code bloat. I have moved a lot of functionality (business logic) into the model using model methods, but the basic CRUD does not fit there. I was going to attempt to place the CRUD into static methods, i.e. Customer::save($array) [would perform both insert and update, depending on whether the primary key is present in the array], Customer::delete($id), Customer::getObj($id = false) [if false, get all data]. This is going to become painful, though, for 32 model objects (and growing). Also, at times models need to interact (as in the interaction above between user data and customer data), which can't be done in a static method without breaking encapsulation. I envision adding another layer to this (exposing web services), so knowing there are going to be 3 "controllers" at some point, I need to encapsulate this CRUD somewhere (obviously). But are static methods the way to go, or is there another road? Your input is much appreciated.
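    One possible road, sketched with hypothetical names (an assumption, not the project's actual design): pull the cross-model logic into plain service objects that all three "controllers" (web, admin, web services) can call:

        class RegistrationService
        {
            // Holds the User + Customer interaction in one reusable place,
            // so no controller duplicates it and no static method is needed.
            public function register(array $data)
            {
                $user = new User();
                $user->fromArray($data);
                $user->save();

                $customer = new Customer();
                $customer->fromArray($data);
                $customer->user_id = $user->id;
                $customer->save();

                return $customer;
            }
        }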

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information on the files we received from customers. Each file has detail records, which constitute my FACT table, CUST_DETAIL. In the main process, a file goes through several stages, and each stage tags a status to it. Long story short, I have a many-to-many relationship. Any ideas around star schema dimensional modeling? A customer record only belongs to a single file, and a file can have multiple statuses.

        FACT
        ----
        CustID
        FileID
        AmountDue

        DIM_FILE
        --------
        FileID
        FileName
        DateReceived

        FILE_STATUS
        -----------
        FileID
        StatusDateTime
        StatusCode
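    For what it's worth, a minimal sketch (SQL; everything beyond the tables above is hypothetical, and this is only one of the textbook options): keep the fact grain unchanged and hang the statuses off DIM_FILE as a bridge table. The usual Kimball-style caveat is that joining the fact through such a bridge fans out and double-counts AmountDue, so a weighting factor is the common remedy:

        CREATE TABLE FILE_STATUS_BRIDGE (
            FileID         INT NOT NULL,            -- joins to DIM_FILE
            StatusCode     VARCHAR(10) NOT NULL,
            StatusDateTime DATETIME NOT NULL,
            WeightFactor   DECIMAL(5,4) NOT NULL,   -- 1 / status-count per file,
                                                    -- to avoid double-counting AmountDue
            PRIMARY KEY (FileID, StatusCode, StatusDateTime)
        );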

  • SQL efficiency argument, add a column or solvable by query?

    - by theTurk
    I am a recent college graduate and a new hire for software development. Things have been a little slow lately, so I was given a DB task. My DB skills are limited to pet projects with Rails and Django. So, I was a little surprised by my latest task. I have been asked by my manager to subclass Person with a 'Parent' table and add a reference to their custodian in the Person table. This is to facilitate going from Parent to Form when the custodian, not the Parent, is the FormContact.

    Here is a simplified, mock structure of the SQL DB I am working with. (I would have drawn the relationship tables if I had access to Visio.) We have a table 'Person' and we have a table 'Form'. There is a table, 'FormContact', that relates a Person to a Form; not all Persons are related to a Form. There is a relationship table for Person-to-Person relationships (Employer, Parent, etc.).

    I've asked, "Why couldn't this be handled by a query?" Response: inefficient. (Really!?!) So I ask, "Why not have a reference to the Form? That would be more efficient, since you wouldn't be querying the FormContacts table with the reference from child/custodian." Response: this would essentially make the Parent a FormContact. (Fair enough.)

    I went ahead and wrote a query to get from a non-FormContact Parent to a Form, and tested it on the production server. The response time was instantaneous. *SOME_VALUE* is the Parent's FK ID.

        SELECT FormID
        FROM FormContact
        WHERE FormContact.ContactID IN
            (SELECT SourceContactID
             FROM ContactRelationship
             WHERE (ContactRelationship.RelatedContactID = *SOME_VALUE*)
               AND (ContactRelationship.Relationship = 'Parent'));

    If I am right, this is an unnecessary change. What should I do: defend my position, or concede to the manager's request? If I am wrong, what is my error? Is there a better solution than the manager's?

  • Haskell: "how much" of a type should functions receive? and avoiding complete "reconstruction"

    - by L01man
    I've got these data types:

        data PointPlus = PointPlus
          { coords   :: Point
          , velocity :: Vector
          } deriving (Eq)

        data BodyGeo = BodyGeo
          { pointPlus :: PointPlus
          , size      :: Point
          } deriving (Eq)

        data Body = Body
          { geo  :: BodyGeo
          , pict :: Color
          } deriving (Eq)

    It's the base datatype for characters, enemies, objects, etc. in my game (well, I just have two rectangles as the player and the ground right now :p). When a key is pressed, the character moves right, left or jumps by changing its velocity. Moving is done by adding the velocity to the coords. Currently, it's written as follows:

        move (PointPlus (x, y) (xi, yi)) = PointPlus (x + xi, y + yi) (xi, yi)

    I'm just taking the PointPlus part of my Body and not the entire Body, otherwise it would be:

        move (Body (BodyGeo (PointPlus (x, y) (xi, yi)) wh) col) =
          Body (BodyGeo (PointPlus (x + xi, y + yi) (xi, yi)) wh) col

    Is the first version of move better? Anyway, if move only changes the PointPlus, there must be another function that calls it inside a new Body. Let me explain: there's a function update which is called to update the game state; it is passed the current game state, a single Body for now, and returns the updated Body.

        update (Body (BodyGeo (PointPlus xy (xi, yi)) wh) pict) =
          Body (BodyGeo (move (PointPlus xy (xi, yi))) wh) pict

    That tickles me. Everything is kept the same within Body except the PointPlus. Is there a way to avoid this complete "reconstruction" by hand? Like in:

        update body = backInBody $ move $ pointPlus body

    Without having to define backInBody, of course.
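    For what it's worth, a minimal sketch (Haskell, using plain record-update syntax on the types above) of avoiding the by-hand reconstruction:

        move :: PointPlus -> PointPlus
        move pp@(PointPlus (x, y) (xi, yi)) =
          pp { coords = (x + xi, y + yi) }      -- velocity carried over unchanged

        update :: Body -> Body
        update b = b { geo = g { pointPlus = move (pointPlus g) } }
          where g = geo b                        -- size and pict carried over

    Only the updated fields are mentioned; everything else is copied implicitly. (Lens libraries automate the same pattern for deeper nesting, but record updates already cover this case.)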
