Search Results

Search found 38569 results on 1543 pages for 'database developer'.

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, and link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no?

    Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could apply to an individual item on the reservation or to the reservation as a whole. Therefore I require either reservation_id OR reservation_item_id to be NULL. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to NULL. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
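
    One way to enforce that either/or rule in the schema itself is a CHECK constraint; a minimal sketch against the table above (note that MySQL only enforces CHECK constraints from 8.0.16 onward, so on older versions the same rule would need a trigger):

        -- Reject rows where both targets are set; either one (or both,
        -- for future payment types) may be NULL.
        ALTER TABLE `driver_tx`
          ADD CONSTRAINT `chk_driver_tx_one_target`
          CHECK (`reservation_id` IS NULL OR `reservation_item_id` IS NULL);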

  • Detecting changes between rows with same ID

    - by Noah
    I have a table containing some names and their associated IDs, along with a snapshot: snapshot, id, name. I need to identify when a name has changed for an id between snapshots. For example, in the following data:

        1, 0, 'MOUSE_SPEED'
        1, 1, 'MOUSE_POS'
        1, 2, 'KEYBOARD_STATE'
        2, 0, 'MOUSE_BUTTONS'
        2, 1, 'MOUSE_POS'
        2, 2, 'KEYBOARD_STATE'

    ...the meaning of id 0 changed with snapshot 2, but the others remained the same. I'd like to construct a query that (ideally) returns:

        1, 0, 'MOUSE_SPEED'
        2, 0, 'MOUSE_BUTTONS'

    I am using PostgreSQL v8.4.2.
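
    Since 8.4 ships both CTEs and window functions, one possible sketch (the table name "names" is an assumption; the question doesn't give one) compares each row's name with its neighbours for the same id, returning both the before and after rows of every change:

        WITH w AS (
          SELECT snapshot, id, name,
                 LAG(name)  OVER (PARTITION BY id ORDER BY snapshot) AS prev_name,
                 LEAD(name) OVER (PARTITION BY id ORDER BY snapshot) AS next_name
          FROM names
        )
        SELECT snapshot, id, name
        FROM w
        WHERE name <> prev_name   -- the name just changed, or
           OR name <> next_name;  -- the name is about to change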

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering: is it necessary to have foreign key constraints between facts and dimensions? Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder whether they are needed in this case. I'm currently working in a SQL Server 2005 environment.
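
    For what it's worth, a middle ground often used in warehouses is to declare the constraint but skip validating existing rows during bulk loads; a sketch with illustrative table names:

        -- The FK documents the relationship and still blocks bad inserts;
        -- WITH NOCHECK avoids rescanning existing fact rows on creation
        -- (at the cost of the optimizer treating the constraint as untrusted).
        ALTER TABLE dbo.FactSales WITH NOCHECK
          ADD CONSTRAINT FK_FactSales_DimDate
          FOREIGN KEY (DateKey) REFERENCES dbo.DimDate (DateKey);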

  • Reading directly from the Doctrine Searchable index table

    - by phidah
    I've got a Doctrine table with the Searchable behavior enabled. Whenever a record is created, an index entry is made in another table. I have a model called Entry, and the behavior automatically created the table entry_index. My question now is: how can I use the data from this table without going through my model's search(...) methods? I want to create a tag cloud of the words used most, and the data in the index table is exactly what I need.
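
    Since the index table is an ordinary table, plain SQL against it should do; a sketch, assuming the column names Doctrine's Searchable behavior generates by default (a keyword column plus field/position bookkeeping):

        -- Most frequently indexed words, e.g. for feeding a tag cloud.
        SELECT keyword, COUNT(*) AS occurrences
        FROM entry_index
        GROUP BY keyword
        ORDER BY occurrences DESC
        LIMIT 50;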

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News: "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications." How do you determine what an acceptable response time for a DB read request is? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

  • mysql db connection

    - by Dragster
    Hi there. I have been searching the web for a way to connect my Android emulator to a MySQL DB. I've found that you can't connect directly, only via a web server: the web server handles the request coming from the Android app. I found the following code on www.helloandroid.com, but I don't understand it. If I run this code on the emulator, nothing happens; the screen stays black. Where does Log.i land: on the Android screen, in the error log, or somewhere else? Can somebody help me with this code?

        package app.android.ticket;

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.ArrayList;

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class fetchData extends Activity {

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                // Call the method to run the data retrieval.
                getServerData();
            }

            public static final String KEY_121 = "http://www.jorisdek.nl/android/getAllPeopleBornAfter.php";

            public fetchData() {
                Log.e("fetchData", "Initialized ServerLink ");
            }

            private void getServerData() {
                InputStream is = null;
                String result = "";

                // The year parameter to send.
                ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                nameValuePairs.add(new BasicNameValuePair("year", "1980"));

                // HTTP POST to the web server.
                try {
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost(KEY_121);
                    httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    HttpResponse response = httpclient.execute(httppost);
                    HttpEntity entity = response.getEntity();
                    is = entity.getContent();
                } catch (Exception e) {
                    Log.e("log_tag", "Error in http connection " + e.toString());
                }

                // Convert the response to a string.
                try {
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    result = sb.toString();
                } catch (Exception e) {
                    Log.e("log_tag", "Error converting result " + e.toString());
                }

                // Parse the JSON data.
                try {
                    JSONArray jArray = new JSONArray(result);
                    for (int i = 0; i < jArray.length(); i++) {
                        JSONObject json_data = jArray.getJSONObject(i);
                        Log.i("log_tag", "id: " + json_data.getInt("id") +
                                ", name: " + json_data.getString("name") +
                                ", sex: " + json_data.getInt("sex") +
                                ", birthyear: " + json_data.getInt("birthyear"));
                    }
                } catch (JSONException e) {
                    Log.e("log_tag", "Error parsing data " + e.toString());
                }
            }
        }

  • What is the best design for these database tables?

    - by Mohammed Jamal
    I need to find the best solution to keep the DB normalized given the large amount of data expected. My site has a tags table (containing keyword and id) and also 4 types of data related to this tags table (articles, resources, jobs, ...). The big question is: for the relation with tags, what is the best solution for optimization and query speed? Make a table for each relation, like:

        articlesToTags(ArticleID, TagID)
        jobsToTags(jobid, tagid)
        etc.

    or put it all in one table, like:

        tagsrelation(tagid, itemid, itemtype)

    I need your help. Please provide me with articles to help me with this design, and consider that in the future the site may contain new sections related to tags. Thanks
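
    If the single-table route is taken, a rough sketch of what it might look like (names and types are illustrative), with a composite primary key and a second index covering lookups from the item side:

        CREATE TABLE tagsrelation (
          tagid    INT NOT NULL,
          itemid   INT NOT NULL,
          itemtype TINYINT NOT NULL,            -- 1 = article, 2 = job, ...
          PRIMARY KEY (tagid, itemtype, itemid),
          KEY idx_item (itemtype, itemid)       -- "tags for this item" lookups
        ) ENGINE=InnoDB;

    A new section then only needs a new itemtype value rather than a new join table.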

  • Self-relation messes up contents in fetching

    - by holographix
    Hi folks, I'm dealing with an annoying problem in Core Data. I've got an entity named Character, which is made as follows [attribute layout shown in a screenshot in the original post]. I'm filling the table in various steps: 1) fill the attributes of the entity, 2) fill the character relation (charRel). FYI, charRel is defined as follows [relationship screenshot in the original post]. I'm feeding the contents by pulling the data from an XML file; the feeding code is this:

        curStr = [[NSMutableString stringWithString:
                      [curStr stringByTrimmingCharactersInSet:
                          [NSCharacterSet whitespaceAndNewlineCharacterSet]]] retain];
        NSLog(@"Parsing relation within these keys %@, in order to get'em associated", curStr);

        NSArray *chunks = [curStr componentsSeparatedByString:@","];
        for (NSString *relId in chunks) {
            NSLog(@"Associating %@ with id %@",
                  [currentCharacter valueForKey:@"character_id"], relId);

            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSPredicate *predicate =
                [NSPredicate predicateWithFormat:@"character_id == %@", relId];
            [request setEntity:[NSEntityDescription entityForName:@"Character"
                                           inManagedObjectContext:[self managedObjectContext]]];
            [request setPredicate:predicate];

            NSError *error = nil;   // was "NSerror" in the original, which would not compile
            NSArray *results = [[self managedObjectContext] executeFetchRequest:request
                                                                          error:&error];
            // Error handling code.
            if (error != nil) {
                NSLog(@"[SYMBOL CORRELATION]: retrieving correlated symbol error: %@",
                      [error localizedDescription]);
            } else if ([results count] > 0) {
                // Grab the first result in the stack; could be done better!
                Character *relatedChar = [results objectAtIndex:0];
                [currentCharacter addCharRelObject:relatedChar];

                // VICE VERSA RELATIONS
                NSArray *charRels = [relatedChar valueForKey:@"charRel"];
                BOOL alreadyRelated = NO;
                for (Character *charRel in charRels) {
                    if ([[charRel valueForKey:@"character_id"]
                            isEqual:[currentCharacter valueForKey:@"character_id"]]) {
                        alreadyRelated = YES;
                        break;
                    }
                }
                if (!alreadyRelated) {
                    NSLog(@"\n\t\trelating %@ with %@",
                          [relatedChar valueForKey:@"character_id"],
                          [currentCharacter valueForKey:@"character_id"]);
                    [relatedChar addCharRelObject:currentCharacter];
                }
            } else {
                NSLog(@"[SYMBOL CORRELATION]: related symbol was not found! ##SKIPPING-->");
            }
            [request release];
        }

        NSLog(@"\t\t### TOTAL OF RELATIONS FOR ID %@: %d\n%@",
              [currentCharacter valueForKey:@"character_id"],
              [[currentCharacter valueForKey:@"charRel"] count], currentCharacter);

        error = nil;
        /* SAVE THE CONTEXT */
        if (![managedObjectContext save:&error]) {
            NSLog(@"Whoops, couldn't save the symbol record: %@", [error localizedDescription]);
            NSArray *detailedErrors = [[error userInfo] objectForKey:NSDetailedErrorsKey];
            if (detailedErrors != nil && [detailedErrors count] > 0) {
                for (NSError *detailedError in detailedErrors) {
                    NSLog(@"\n################\t\tDetailedError: %@\n################",
                          [detailedError userInfo]);
                }
            } else {
                NSLog(@" %@", [error userInfo]);
            }
        }

    At this point, when I print out the values of currentCharacter, everything looks perfect; every relation is in its place. For example, in this log we can clearly see that this element has got 3 items in charRel:

        <Character: 0x5593af0> (entity: Character; id: 0x55938c0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p86> ; data: {
            catRel = "<relationship fault: 0x9a29db0 'catRel'>";
            charRel = (
                "0x9a1f870 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p74>",
                "0x9a14bd0 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p109>",
                "0x558ba00 <x-coredata://67288D50-D349-4B19-B7CB-F7AC4671AD61/Character/p5>"
            );
            "character_id" = 254;
            examplesRel = "<relationship fault: 0x9a29df0 'examplesRel'>";
            meaning = "\n Left";
            pinyin = "\n zu\U01d2";
            "pronunciation_it" = "\n zu\U01d2";
            strokenumber = 5;
            text = "\n \n <p>The most ancient form of this symbol";
            unicodevalue = "\n \U5de6";
        })

    Then, when I need to retrieve this item, I perform a fetch like this:

        // At first I get the single Character record.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSError *error;
        NSPredicate *predicate =
            [NSPredicate predicateWithFormat:@"character_id == %@", self.char_id];
        [request setEntity:[NSEntityDescription entityForName:@"Character"
                                       inManagedObjectContext:_context]];
        [request setPredicate:predicate];
        NSArray *fetchedObjs = [_context executeFetchRequest:request error:&error];

    But when, for instance, I print out the contents of charRel in NSLog:

        NSArray *correlations = [singleCharacter valueForKey:@"charRel"];
        NSLog(@"CHARACTER OBJECT \n%@", correlations);

    I get this:

        Relationship fault for (<NSRelationshipDescription: 0x5568520>), name charRel, isOptional 1, isTransient 0, entity Character, renamingIdentifier charRel, validation predicates (), warnings (), versionHashModifier (null), destination entity Character, inverseRelationship (null), minCount 1, maxCount 99 on 0x6937f00

    I hope I've made myself clear. This thing is driving me insane; I've googled all over the world, but I couldn't find a solution (which makes me think it's an issue related to bad coding somehow :P). Thank you in advance, guys. k

  • Rails - Scalable calculation model

    - by H O
    I currently have a calculation structure in my Rails app with models metric, operand, and operation_type. Presently, the metric model has many operands and can perform calculations based on the operation_type (e.g. sum, multiply, etc.), and each operand is defined as being right or left (so that if the operation is division, the numerator and denominator can be identified). Presently, an operand is always an attribute of some model, e.g. @customer.sales.selling_price.sum. In order to make this scalable, I need to allow an operand to be either an attribute of some kind or the result of a previous operation, i.e. an operand can be a metric. I have included a diagram of how my models currently look [diagram in the original post]. Can anyone assist me with the most elegant way of allowing an operand to be an actual operand, or another metric? Thanks!

    EDIT: It seems, based on the only answer so far, that polymorphic associations are perhaps the way to go on this, but the answer is so brief I have no idea how they could be used in this way - can anyone elaborate?

    EDIT 2: OK, I think I'm getting somewhere - essentially I presently have a metric, which has_many operands, and an operand has_many metrics. I need a polymorphic self join, where a metric can also have many metrics - do I need to call this something else, perhaps calculated_metrics, so that the metric model can use itself? That would leave me with a situation where a metric has_many operands, and a metric has_many calculated_metrics.
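
    In schema terms, the polymorphic approach boils down to a type column plus an id column on operands, so an operand can point at either an attribute definition or another metric; a rough sketch with illustrative names:

        CREATE TABLE operands (
          id          INT PRIMARY KEY,
          metric_id   INT NOT NULL,          -- the metric this operand feeds
          side        CHAR(1) NOT NULL,      -- 'L' (left) or 'R' (right)
          source_type VARCHAR(30) NOT NULL,  -- e.g. 'Attribute' or 'Metric'
          source_id   INT NOT NULL           -- row id in the source_type's table
        );

    Evaluating a metric then recurses whenever source_type = 'Metric'.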

  • SQL Server: pad a string column value to 5 characters

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01', or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5 characters long, concatenated with leading '0's to make it 5 characters. Technique I applied:

        select right(('00000' + col1), 5) from table1;

    I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
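
    A likely culprit (an inference from the CHAR(5) declaration, not something the question confirms): fixed-length CHAR values are stored space-padded on the right, so '001' is really '001  ', and RIGHT('00000001  ', 5) hands back '001  ' unchanged. Trimming first would give:

        -- '001  ' -> '001' -> '00000001' -> '00001'
        SELECT RIGHT('00000' + RTRIM(col1), 5) FROM table1;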

  • About NULL values

    - by user329820
    Hi, I have a question: if we declare a variable and then do not explicitly set it, will its value be NULL automatically? In other words, will the code below evaluate to true or false? Thanks.

        DECLARE @val CHAR(4)
        If @val = NULL
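
    For reference, a small sketch of both behaviours: an unassigned T-SQL variable is indeed NULL, but under the default ANSI_NULLS setting a comparison using = NULL yields UNKNOWN rather than true, so only IS NULL fires:

        DECLARE @val CHAR(4);                   -- declared, never assigned: NULL
        IF @val = NULL  PRINT 'equals NULL';    -- not printed with ANSI_NULLS ON
        IF @val IS NULL PRINT 'IS NULL';        -- printed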

  • Creating Two Cascading Foreign Keys Against Same Target Table/Col

    - by alram
    I have the following tables:

        user (userid int [pk], name varchar(50))
        action (actionid int [pk], description nvarchar(50))

    being referenced by another table that captures the relationship "<user1> <action>'s <user2>". I did this with the following table:

        userAction (userActionId int [pk],
                    actionid int [fk: action.actionid],
                    userId1 int [fk references user.userid; on delete/update cascade],
                    userId2 int [fk references user.userid; on delete/update cascade])

    However, when I try to save the userAction table I get an error, because I have two cascading FKs against user.userid. Is there any way to remedy this, or must I use a trigger?
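
    SQL Server rejects multiple cascade paths into the same table, so the usual workaround is indeed a trigger; one sketch is to make both FKs plain (non-cascading) and emulate the cascade with an INSTEAD OF DELETE trigger on user:

        CREATE TRIGGER trg_user_delete ON [user]
        INSTEAD OF DELETE
        AS
        BEGIN
            -- Remove rows referencing the user through either column first...
            DELETE ua
            FROM userAction AS ua
            JOIN deleted AS d ON d.userid IN (ua.userId1, ua.userId2);
            -- ...then remove the user rows themselves.
            DELETE u
            FROM [user] AS u
            JOIN deleted AS d ON u.userid = d.userid;
        END;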

  • Cursor returns zero rows from query to table

    - by brockoli
    I've created an SQLiteDatabase in my app and populated it with some data. I can connect to my AVD with a terminal, and when I issue select * from articles; I get a list of all the rows in my table and everything looks fine. However, in my code, when I query my table I get back a cursor that has my table's columns but zero rows of data. Here is my code:

        mDbHelper.open();
        Cursor articles = mDbHelper.fetchAllArticles();
        startManagingCursor(articles);
        Cursor feeds = mDbHelper.fetchAllFeeds();
        startManagingCursor(feeds);
        mDbHelper.close();

        int titleColumn = articles.getColumnIndex("title");
        int feedIdColumn = articles.getColumnIndex("feed_id");
        int feedTitleColumn = feeds.getColumnIndex("title");

        /* Check if our result was valid. */
        if (articles != null) {
            int count = articles.getCount();
            /* Check if at least one result was returned. */
            if (articles.moveToFirst()) {

    In the above code, my Cursor articles comes back with my 4 columns, but when I call getCount() it returns zero, even though I can see hundreds of rows of data in that table from the command line. Any idea what I might be doing wrong here? Also, here is my code for fetchAllArticles:

        public Cursor fetchAllArticles() {
            return mDb.query(ARTICLES_TABLE, new String[] {
                    ARTICLE_KEY_ROWID, ARTICLE_KEY_FEED_ID,
                    ARTICLE_KEY_TITLE, ARTICLE_KEY_URL },
                    null, null, null, null, null);
        }

    Rob W.

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in this format:

        {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    Each entry is one line. First, I would like to easily find entries for a given IP. This part is easy because I can use grep, for example, but even here I would like a better solution, because I want the response to be as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to an SQL DB (probably SQLite, because it will be a desktop app), but I'd like to ask whether better solutions exist.
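
    If the SQLite route wins out, a minimal schema sketch (names are illustrative) with one index covering both lookup patterns described:

        CREATE TABLE actions (
            user_ip     TEXT NOT NULL,
            action_type TEXT NOT NULL,
            action_data TEXT             -- raw JSON payload, parsed by the app
        );
        -- Serves both "by IP" and "by IP and type" queries.
        CREATE INDEX idx_actions_ip_type ON actions (user_ip, action_type);

    Filtering on some_key inside action_data would still happen in application code, after the indexed columns narrow the candidate rows.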

  • Importing a CSV file into pgsql

    - by running4surival
    OK, I'm trying to upload a CSV file into my table in pgsql, but I'm getting this error:

        ERROR:  invalid input syntax for integer: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"
        CONTEXT:  COPY members2, line 1, column id: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"

    I really don't understand why I'm getting this error; both my table and my CSV file have the same column names.
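
    The error text shows COPY trying to parse the file's header row as data (line 1 lands in the integer id column). A sketch of a COPY that names the columns and skips that header (the file path is illustrative):

        COPY members2 (mlname, mfname, slname, sfname, address, postalcode,
                       membershiptype, hphone, email)
        FROM '/path/to/members.csv'
        WITH CSV HEADER;   -- HEADER tells COPY to skip the first line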

  • Hierarchical Hibernate, how many queries are executed?

    - by ghost1
    So I've been dealing with a home-brew DB framework that has some serious flaws, the justification for its use being that avoiding an ORM saves on the number of queries executed. If I'm selecting all possible records from the top level of a joinable object hierarchy, how many separate calls to the DB will be made when using an ORM (such as Hibernate)? I feel like calling bullshit on this, as joinable entities should be brought down in one query, right? Am I missing something here? Note: lazy initialization doesn't matter in this scenario, as all records will be used.

  • How can I create this complicated query?

    - by mTuran
    Hi, I have 3 tables: projects, skills, and project_skills. The projects table holds a project's general data, the skills table holds skill ids and names, and the project_skills table holds the projects' skill relationships. Here is the schema:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;

    I want to select projects along with their related skill names. I think I have to use JOIN, but I don't know how. Thanks
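
    One sketch against the schema above: join through project_skills, and let MySQL's GROUP_CONCAT collapse each project's skill names into a single column so there is one row per project:

        SELECT p.id, p.project_title,
               GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        LEFT JOIN project_skills ps ON ps.project_id = p.id
        LEFT JOIN skills s ON s.id = ps.skill_id
        GROUP BY p.id, p.project_title;

    Dropping the GROUP_CONCAT and GROUP BY instead returns one row per (project, skill) pair.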

  • Two radically different queries against 4 million records execute in the same time; one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

        [execution plan image: brute-force query]
        [execution plan image: index-based exception query]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

  • Join Where Rows Don't Exist or Where Criteria Match...?

    - by Greg
    I'm trying to write a query to tell me which orders have valid promo codes. Promo codes are only valid between certain dates and, optionally, for certain packages. I'm having trouble even explaining how this works (see pseudo-ish code below), but basically: if there are packages associated with a promo code, then the order has to have one of those packages and be within a valid date range; otherwise it just has to be in a valid date range. The whole "if PromoPackage rows exist" thing is really throwing me off, and I feel like I should be able to do this without a whole bunch of UNIONs. (I'm not even sure that would make it easier at this point...) Anybody have any ideas for the query?

        if OrderPromoCode = PromoCode then
          if OrderTimestamp is between PromoStartTimestamp and PromoEndTimestamp then
            if PromoCode has packages associated with it
              // yes
              then if PackageID is one of the specified packages
                // yes: code is valid
                // no: invalid
              // no
              code is valid

    Order:

        OrderID* | OrderTimestamp | PackageID | OrderPromoCode
        1        | 1/2/11         | 1         | ABC
        2        | 1/3/11         | 2         | ABC
        3        | 3/2/11         | 2         | DEF
        4        | 4/2/11         | 3         | GHI

    Promo:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp*
        ABC        | 1/1/11               | 2/1/11
        ABC        | 3/1/11               | 4/1/11
        DEF        | 1/1/11               | 1/11/13
        GHI        | 1/1/11               | 1/11/13

    PromoPackage:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp* | PackageID*
        ABC        | 1/1/11               | 2/1/11             | 1
        ABC        | 1/1/11               | 2/1/11             | 3
        GHI        | 1/1/11               | 1/11/13            | 1

    Desired result:

        OrderID | IsPromoCodeValid
        1       | 1
        2       | 0
        3       | 1
        4       | 0
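
    A sketch of one way to express that branching with EXISTS; it reproduces the desired result on the sample data above, assuming the table and column names shown (Order is bracketed because it is a reserved word):

        SELECT o.OrderID,
               CASE WHEN EXISTS (
                   SELECT 1
                   FROM Promo p
                   WHERE p.PromoCode = o.OrderPromoCode
                     AND o.OrderTimestamp BETWEEN p.PromoStartTimestamp
                                              AND p.PromoEndTimestamp
                     AND (   -- no packages restrict this promo row...
                             NOT EXISTS (SELECT 1 FROM PromoPackage pp
                                         WHERE pp.PromoCode = p.PromoCode
                                           AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                           AND pp.PromoEndTimestamp   = p.PromoEndTimestamp)
                             -- ...or the order's package is one of them
                          OR EXISTS (SELECT 1 FROM PromoPackage pp
                                     WHERE pp.PromoCode = p.PromoCode
                                       AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                       AND pp.PromoEndTimestamp   = p.PromoEndTimestamp
                                       AND pp.PackageID = o.PackageID)
                         )
               ) THEN 1 ELSE 0 END AS IsPromoCodeValid
        FROM [Order] o;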
