Search Results

Search found 41035 results on 1642 pages for 'object oriented design'.


  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows:

        OS: Windows NT family (Server 2008 / 7)
        Primary platform: .NET with C#
        Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information
        SELECT requirements: super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME)
        INSERT requirements: the faster the better :)

    If there will be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables roughly once every 30-50 milliseconds, and I'll need to run SELECTs with 0-2 WHERE clauses at a similar rate.


  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And, if so, what is the syntax, and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros; an index is also created on this conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(
                    LTRIM(RTRIM([ACCT_NUM])),
                    PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                    LEN(LTRIM(RTRIM([ACCT_NUM])))
                )
            ),
            ''
        ) PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)),
            ''
        )

    I guess I could just make this a normal column and put the code to standardize the account numbers in all the processes which insert into the table. We used the computed column to keep the standardization code in one place.


  • Reasons to start a new project in COBOL

    - by luvieere
    Are there any feasible reasons to start a new project in COBOL? What benefits of this language would one find convincing enough to start a new project in it? I'm thinking more about viewing the language in terms of Behavior Driven Development, something related to the steps involved in using a framework such as Cucumber, only that behavior description and step definition would be integrated into one unit by using the language's features.


  • Perl vs Python, but with more style than usual

    - by user350571
    I'm learning Perl, and every time I search for Perl stuff on the internet I get some random page with people saying that Perl should die because code written in it looks like a lesson in steganography. Then they say that Python is clean, and so on. Now, I know that those comparisons are always silly and made by people who feel that languages are an extension of their boring personality, so let me ask instead: can you give me the implementation of a widely known algorithm for a data structure like red-black trees in both languages, so I can compare?


  • GORM ID generation and belongsTo association?

    - by fabien-barbier
    I have two domain classes:

        class CodeSetDetail {
            String id
            String codeSummaryId
            static hasMany = [codes: CodeSummary]
            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession
            static belongsTo = [codeSetDetail: CodeSetDetail]
            static constraints = {
                id(unique: true, blank: false)
            }
            static mapping = {
                version false
                id column: 'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

        code_set_detail:
            code_set_detail_id
            code_summary_id

        code_summary:
            code_summary_id
            code_set_detail_id (should not exist)
            code_class
            name
            accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key of the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key of code_summary, and map 'code_summary_id' in code_set_detail. How do I define a primary key in one table, and also map that key to another table?


  • Shortest Path algorithm of a different kind

    - by Ram Bhat
    Hey guys, let's say you have a grid like this (made randomly). Now let's say you have a car starting randomly from one of the white boxes. What would be the shortest path that goes through every one of the white boxes? You can visit each white box as many times as you want, and you can't jump over the black boxes; the black boxes are like walls. In simple words, you can move from white box to white box only. You can move in any direction, even diagonally.
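
    This is essentially a travelling-salesman-style problem over the white cells, with pairwise distances given by shortest grid paths. A minimal sketch of the first step in Python, one possible approach rather than anything from the original post: BFS with 8-directional moves gives all-pairs distances between white cells, over which an exact or heuristic TSP ordering (e.g. Held-Karp on small grids) can then run. The grid encoding (0 = white, 1 = black) is an assumption.

        from collections import deque

        GRID = [            # hypothetical random grid
            [0, 0, 1],
            [1, 0, 0],
            [0, 0, 0],
        ]

        # 8-directional moves, diagonals included.
        MOVES = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

        def bfs_distances(grid, start):
            """Shortest step count from `start` to every reachable white cell."""
            rows, cols = len(grid), len(grid[0])
            dist = {start: 0}
            q = deque([start])
            while q:
                r, c = q.popleft()
                for dr, dc in MOVES:
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] == 0 and (nr, nc) not in dist):
                        dist[(nr, nc)] = dist[(r, c)] + 1
                        q.append((nr, nc))
            return dist

        whites = [(r, c) for r, row in enumerate(GRID)
                  for c, v in enumerate(row) if v == 0]
        pairwise = {w: bfs_distances(GRID, w) for w in whites}
        # pairwise[a][b] now feeds the tour search over the white cells.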


  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do it is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only ways I can think of to track the progress of which groups have been processed are either a log file or a table in database B. A second way I thought could work is to dump all the fields needed for loading the classes for manipulation and insertion into a flat file, read the file to initialize the classes, and insert into database B. This also means some logging, but it should narrow any error down to the exact row of the flat file. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;  # I have a list of these already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                # log the group as processed
                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this, but I'm struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.


  • What open source database platform is most easily transferred from my personal machine into a window

    - by Tom
    I would like eventual interaction with MS Dynamics SL and/or MindTouch Core (running on VMware) for eventual intranet and/or internet display. I guess I am asking for front and back end recommendations for a database I am constructing, but since this is my first major project I would greatly appreciate any help and advice. I would also love an opportunity to learn a new language, so the code base could be in any language. I do have a few more related questions for discussion:

        What is the viability of using Google hosting to provide the service to the public for free?
        Should I implement Plone or another CMS if I have a large amount of output?
        Is there a structuring questionnaire or standards publication I could reference?
        Does UML diagramming provide additional options for portability?

    Thank you.


  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings"; they are more of a log, really. Readings are logged against sensors, because each reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using an extension to PostgreSQL. This provides a column type called "period", which allows you to store a period of time between two dates, and then allows you to add CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using separate historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current", i.e. only check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason: you'd need triggers). Does anyone have any thoughts about this?

    PS - the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked, so we could always see who changed a record. Not sure how much difference that might make. Thanks.


  • Program to find canonical cover or minimum number of functional dependencies

    - by Sev
    I would like to know if there is a program or algorithm to find the canonical cover (or minimal set) of a set of functional dependencies. For example, if you have R = (A, B, C) <-- these are the attributes, and the dependencies:

        A -> BC
        B -> C
        A -> B
        AB -> C

    the canonical cover (the minimal set of dependencies) is:

        A -> B
        B -> C

    Is there a program that can accomplish this? If not, any code/pseudocode to help me write one would be appreciated. Prefer Python or Java.
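
    A minimal sketch of one standard way to compute this in Python: split right-hand sides, drop extraneous left-hand attributes using attribute closure, then drop redundant dependencies. The input representation (a list of (lhs, rhs) attribute-set pairs) is an assumption.

        def closure(attrs, fds):
            """Closure of an attribute set under a list of FDs (lhs, rhs)."""
            result = set(attrs)
            changed = True
            while changed:
                changed = False
                for lhs, rhs in fds:
                    if lhs <= result and not rhs <= result:
                        result |= rhs
                        changed = True
            return result

        def canonical_cover(fds):
            # 1. Split right-hand sides so every FD has one attribute on the right.
            work = [(frozenset(l), frozenset([a])) for l, r in fds for a in r]
            changed = True
            while changed:
                changed = False
                # 2. Drop extraneous left-hand attributes: 'a' is extraneous in
                #    lhs -> rhs if rhs is still implied when 'a' is removed.
                for i, (lhs, rhs) in enumerate(work):
                    for a in sorted(lhs):
                        smaller = lhs - {a}
                        if smaller and rhs <= closure(smaller, work):
                            work[i] = (smaller, rhs)
                            changed = True
                            break
                # 3. Drop redundant FDs (this also removes duplicates): lhs -> rhs
                #    is redundant if rhs is derivable from lhs without it.
                for i in range(len(work) - 1, -1, -1):
                    lhs, rhs = work[i]
                    if rhs <= closure(lhs, work[:i] + work[i + 1:]):
                        del work[i]
                        changed = True
            return work

        # The example from the question: A -> BC, B -> C, A -> B, AB -> C.
        fds = [({'A'}, {'B', 'C'}), ({'B'}, {'C'}), ({'A'}, {'B'}), ({'A', 'B'}, {'C'})]
        for lhs, rhs in canonical_cover(fds):
            print(''.join(sorted(lhs)), '->', ''.join(sorted(rhs)))
        # prints: A -> B
        #         B -> C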


  • split a database web application - good idea or bad idea?

    - by Khou
    Is it a bad idea to split up an application and its database? Application1 uses database1 on ServerX; Application2 uses database2 on ServerY. Both applications communicate over a web service API, and they are part of the same overall application: one manages users' profile/personal data, while the other manages users' financial data. Or should I just put them together and use one database on the same server?


  • Algorithm to determine which points should be visible on a map based on zoom

    - by lgratian
    Hi! I'm making a Google Maps-like application for a course at my uni (nothing complex; it should load the map of a city, for example, not the whole world). The map can have many layers, including markers (restaurants, hospitals, etc.). The problem is that when you have many points and you zoom out, the map doesn't look right. At that zoom level only some points need to be visible (and at the maximum map size, all points). The question is: how can you determine which points should be visible at a specified zoom level? Because I have implemented a PR quadtree to speed up rendering, I thought that I could define some "high-priority" markers (always visible, defined in the map editor) and put them in a queue. At each step a marker is removed from the queue, and all its neighbors that are at least D units away (D depends on the zoom level) are chosen and inserted into the queue, and so on. Is there any better way than the algorithm I thought of? Thanks in advance!
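
    A minimal sketch of that greedy spacing idea in Python, not from the original post: walk the markers in priority order and keep one only if every already-kept marker is at least D away, where D shrinks as the zoom level grows. The quadtree is replaced by a brute-force distance check for brevity, and min_spacing is an invented rule.

        import math

        def min_spacing(zoom, base=512.0):
            """Hypothetical rule: required spacing halves with each zoom-in step."""
            return base / (2 ** zoom)

        def visible_markers(markers, zoom):
            """markers: (priority, x, y) tuples; lower priority value = more important."""
            d = min_spacing(zoom)
            kept = []
            for _, x, y in sorted(markers):
                if all(math.hypot(x - kx, y - ky) >= d for kx, ky in kept):
                    kept.append((x, y))
            return kept

        markers = [(0, 10, 10), (1, 12, 11), (0, 300, 40), (2, 301, 42)]
        print(visible_markers(markers, zoom=0))  # zoomed out: only [(10, 10)] survives
        print(visible_markers(markers, zoom=4))  # zoomed in: [(10, 10), (300, 40)]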


  • Providing multi-version databases for backward compatibility for production applications/databases.

    - by JavaRocky
    How can I manage multiple versions of a database easily? I have some data (exposed as views built from selects over data originating in tables in other schemas), which other databases may reference using various means, including database synonyms and links. I wish to provide a sort of interface/guarantee for future applications/databases which use this data, in case I need to update the views for correctness or applicability inside my database. How can I achieve this in a maintained, controlled and easy way? I am using Oracle 10g, if that matters.


  • Producer/consumer in Grails?

    - by Mulone
    Hi all, I'm trying to implement a consumer/producer app in Grails, after several unsuccessful attempts at implementing concurrent threads. Basically I want to store all the events coming from clients (through separate AJAX calls) in a single queue and then process that queue linearly as new events are added. This looks like the producer/consumer problem: http://en.wikipedia.org/wiki/Producer-consumer_problem How can I implement this in Grails (maybe with a timer, or even better by generating a 'process queue' event)? Basically I'd like to have a singleton service waiting for new events in the queue and processing them linearly (even if the queue is loaded by several concurrent processes). Any hints? Cheers!
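
    For illustration, here is the single-consumer queue pattern sketched in generic Python rather than Grails; a Grails service would typically wrap the same idea around a java.util.concurrent.BlockingQueue or an executor, and all names here are assumptions.

        import queue
        import threading

        events = queue.Queue()  # shared, thread-safe FIFO

        def process(event):
            print('processing', event)  # stand-in for the real linear handling

        def consumer():
            """Single consumer: drains events one at a time, in arrival order."""
            while True:
                event = events.get()  # blocks until an event is available
                try:
                    process(event)
                finally:
                    events.task_done()

        # One consumer thread plays the role of the singleton service.
        threading.Thread(target=consumer, daemon=True).start()

        # Many producers (e.g. concurrent AJAX handlers) just enqueue.
        for i in range(5):
            events.put({'id': i})

        events.join()  # wait until everything enqueued so far is processed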


  • Unit of Work Pattern in .Net

    - by Jazza
    Does anyone have any concrete examples of a simple Unit of Work pattern in C# or Visual Basic that would handle the following scenario? I'm writing a WinForms application in which a customer can have multiple addresses associated with it. The user can add, edit and delete addresses belonging to the customer before the customer is saved back to the database. Therefore, at the time of saving, all the original addresses need to be deleted from the database and the new addresses added in a single transaction.
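
    The question asks for C# or VB, but the registration-and-commit idea can be shown language-neutrally. A minimal sketch in Python, assuming a DB-API style connection whose context manager wraps a transaction (as sqlite3's does) and domain objects with hypothetical insert/update/delete methods:

        class UnitOfWork:
            """Collect pending changes in memory; flush them in one transaction."""

            def __init__(self, connection):
                self.connection = connection
                self.new = []      # objects to INSERT
                self.dirty = []    # objects to UPDATE
                self.removed = []  # objects to DELETE

            def register_new(self, obj):
                self.new.append(obj)

            def register_dirty(self, obj):
                if obj not in self.new and obj not in self.dirty:
                    self.dirty.append(obj)

            def register_removed(self, obj):
                if obj in self.new:
                    self.new.remove(obj)  # never persisted, nothing to delete
                else:
                    self.removed.append(obj)

            def commit(self):
                with self.connection:  # one transaction: all or nothing
                    for obj in self.removed:
                        obj.delete(self.connection)  # hypothetical method
                    for obj in self.new:
                        obj.insert(self.connection)  # hypothetical method
                    for obj in self.dirty:
                        obj.update(self.connection)  # hypothetical method
                self.new, self.dirty, self.removed = [], [], []

    For the address scenario, the form would register the customer's original addresses as removed and the edited list as new, then call commit() once when the customer is saved.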


  • Invoicing vs Quoting

    - by FreshCode
    If invoices can be voided, should they be used as quotations? I have an Invoices table that is created from inventory associated with a Job. I could have a Quotes table as a halfway house between inventory and invoices, but it feels like I would have duplicate data structures and logic just to handle an "Is this a quote?" bit. From a business perspective, quotes are different from invoices: a quote is sent prior to an undertaking, and an invoice is sent once it is complete and payment is due. But how should I represent this in my repository and model? What is an elegant way to store and manage quotes and invoices in a database?


  • Best Fit Scheduling Algorithm

    - by Teegijee
    I'm writing a scheduling program with a difficult programming problem. There are several events, each with multiple meeting times. I need to find an assignment of meeting times such that the schedule contains each event exactly once, using one of that event's meeting times. Obviously I could use brute force, but that's rarely the best solution. I'm guessing this is a relatively basic computer science problem, which I'll learn about once I'm able to start taking computer science classes. In the meantime, I'd appreciate any links where I could read up on this, or even just a name I could Google.
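
    This looks like a constraint-satisfaction problem: pick one meeting time per event so that no two events collide. A minimal backtracking sketch in Python, where the event names and slot labels are invented and real meeting times would need an overlap test instead of slot equality:

        EVENTS = {
            'algebra':   {'mon9', 'wed9'},
            'physics':   {'mon9', 'tue10'},
            'chemistry': {'tue10', 'wed9'},
        }

        def schedule(events, chosen=None):
            """Assign one slot per event, never reusing a slot; None if impossible."""
            chosen = chosen or {}
            if len(chosen) == len(events):
                return chosen
            used = set(chosen.values())
            # Most-constrained-first: try the event with the fewest free slots.
            name = min((e for e in events if e not in chosen),
                       key=lambda e: len(events[e] - used))
            for slot in sorted(events[name] - used):
                result = schedule(events, {**chosen, name: slot})
                if result:
                    return result
            return None  # dead end: caller backtracks to its next slot

        print(schedule(EVENTS))
        # {'algebra': 'mon9', 'physics': 'tue10', 'chemistry': 'wed9'}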


  • Bejeweled Blitz - How does it assert there is always a move?

    - by EvilTeach
    I have been playing Bejeweled Blitz for a while now. Yes, it is an addiction. In thinking about the game, I have observed that on some boards the bottom runs dry (no moves), leaving only the top part of the board playable. Frequently that part of the board dries up too, and one is left with moves only in the area cleared by the last move. The board never runs completely dry, so clearly the program is doing some sort of calculation that lets it choose what to drop to prevent that. I have noticed in this 'mode' that it is very common for the algorithm to drop jewels which open up more non-dry area horizontally. Perhaps less frequent is a drop which seems designed to open up the bottom part of the board again. So my question is: how would one go about designing an algorithm that guarantees there is always a move available?
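
    Pure speculation about Blitz's internals, but one plausible mechanism is rejection sampling: after a match clears cells, re-roll the replacement jewels until the refilled board still has at least one legal swap. A minimal sketch in Python; the square board and colour set are assumptions, and a real implementation would also resolve immediate cascades and bound the retries.

        import random

        COLORS = 'RGBYPO'  # hypothetical jewel colours

        def has_run(board):
            """True if any row or column holds three equal jewels (square board)."""
            n = len(board)
            for r in range(n):
                for c in range(n):
                    if c + 2 < n and board[r][c] == board[r][c + 1] == board[r][c + 2]:
                        return True
                    if r + 2 < n and board[r][c] == board[r + 1][c] == board[r + 2][c]:
                        return True
            return False

        def has_move(board):
            """True if swapping some pair of orthogonal neighbours creates a run."""
            n = len(board)
            for r in range(n):
                for c in range(n):
                    for dr, dc in ((0, 1), (1, 0)):
                        r2, c2 = r + dr, c + dc
                        if r2 < n and c2 < n:
                            board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
                            ok = has_run(board)
                            board[r][c], board[r2][c2] = board[r2][c2], board[r][c]
                            if ok:
                                return True
            return False

        def refill(board, empties):
            """Re-roll the cleared cells until the board is playable again."""
            while True:
                for r, c in empties:
                    board[r][c] = random.choice(COLORS)
                if has_move(board):
                    return board

        # Usage: after clearing matched cells, call refill(board, cleared_cells)
        # to drop replacements that keep the board playable.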


  • Evenly select N elems from array

    - by ninuhadida
    Hi, I need to evenly select n elements from an array. I guess the best way to explain is by example. Say I have the array [0,1,2,3,4] and I need to select 3 numbers: 0, 3, 4. Of course, if the array length <= n, I just need to return the whole array. I'm pretty sure there's a defined algorithm for this; I've been trying to search, and I took a look at Introduction to Algorithms, but couldn't find anything that met my needs (probably overlooked it). The problem I'm having is that I can't figure out a way to scale this up to any array [p..q], selecting n evenly spaced elements. Note: I can't just select the even-indexed elements from the example above. A couple of other examples: array [0,1,2,3,4,5,6], 3 elements; I need to get 0, 3, 6. Array [0,1,2,3,4,5], 3 elements; I need to get 0, either 2 or 3, and 5.
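
    A minimal sketch of the usual approach in Python: spread n indices across [0, len-1] so both endpoints are always kept, rounding interior picks to the nearest index. It reproduces the 0,3,6 and 0,2-or-3,5 examples above (the first example comes out as the maximally even 0,2,4):

        def evenly_spaced(arr, n):
            """Pick n elements spread as evenly as possible, keeping both ends."""
            if n >= len(arr):
                return list(arr)
            if n == 1:
                return [arr[0]]
            step = (len(arr) - 1) / (n - 1)
            return [arr[round(i * step)] for i in range(n)]

        print(evenly_spaced([0, 1, 2, 3, 4], 3))        # [0, 2, 4]
        print(evenly_spaced([0, 1, 2, 3, 4, 5, 6], 3))  # [0, 3, 6]
        print(evenly_spaced([0, 1, 2, 3, 4, 5], 3))     # [0, 2, 5] (2.5 rounds to 2)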


  • Algorithm for max integer in an array of integers

    - by gagneet
    Explain which algorithm you would use to implement a function that takes an array of integers and returns the maximum integer in the collection, assuming that the length of the array is less than 1000. Would you use Bubble Sort or Merge Sort, and why? Also, what happens to the above algorithm choice if the array length is greater than 1000?
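
    For reference, neither sort is needed just to find a maximum: a single linear scan is O(n) at any array length, versus O(n^2) for Bubble Sort and O(n log n) for Merge Sort, and that advantage only grows with the array length. A minimal sketch:

        def max_int(values):
            """Track the largest element seen so far: O(n) time, O(1) space."""
            if not values:
                raise ValueError('empty array has no maximum')
            best = values[0]
            for v in values[1:]:
                if v > best:
                    best = v
            return best

        print(max_int([3, -1, 42, 7]))  # 42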


  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here), I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums, to simplify things) and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue.

    I split the problem in half, so I now have two one-to-many schemas. One is solved, but the other is giving me fits. Let's assume table FOO is a standard, boring table that has a simple primary key; it's the "one" side of the one-to-many relationship. Table BAR can relate to multiple keys of FOO, and the number of related keys is not known beforehand. An example: a query on FOO returns ids 3, 4, 5. BAR needs a single key that relates to 3, 4, 5 (though any number of ids could be returned).

    The usual join table does not work:

        Table FOO_BAR
        primary_key | foo_id | bar_id |

    since FOO returns 3 unique keys, and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id:

        Table FOO_TO_BAR
        primary_key | foo_id | bar_to_foo_id |

        Table BAR_TO_FOO
        primary_key | foo_to_bar_id | bar_id |

    What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
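
    For comparison, a plain join table does let a single bar_id map to many foo_ids, one row per pair, when the pair itself is the primary key. A minimal runnable sketch using sqlite3, with all table and column names assumed:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.executescript("""
            CREATE TABLE foo (foo_id INTEGER PRIMARY KEY);
            CREATE TABLE bar (bar_id INTEGER PRIMARY KEY);
            -- One row per (bar, foo) pair: the same bar_id may appear many times.
            CREATE TABLE foo_bar (
                bar_id INTEGER NOT NULL REFERENCES bar(bar_id),
                foo_id INTEGER NOT NULL REFERENCES foo(foo_id),
                PRIMARY KEY (bar_id, foo_id)
            );
        """)
        conn.executemany('INSERT INTO foo VALUES (?)', [(3,), (4,), (5,)])
        conn.execute('INSERT INTO bar VALUES (1)')
        conn.executemany('INSERT INTO foo_bar VALUES (1, ?)', [(3,), (4,), (5,)])
        print(conn.execute('SELECT foo_id FROM foo_bar WHERE bar_id = 1').fetchall())
        # [(3,), (4,), (5,)]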

