Search Results

Search found 56342 results on 2254 pages for 'object database'.


  • Rails - Scalable calculation model

    - by H O
    I currently have a calculation structure in my Rails app with the models metric, operand and operation_type. The metric model has many operands and can perform calculations based on the operation_type (e.g. sum, multiply, etc.), and each operand is flagged as right or left (so that if the operation is division, the numerator and denominator can be identified). Presently, an operand is always an attribute of some model, e.g. @customer.sales.selling_price.sum. To make this scalable, I need to allow an operand to be either an attribute of some kind or the result of a previous operation, i.e. an operand can itself be a metric. I have included a diagram of how my models currently look. Can anyone assist me with the most elegant way of allowing an operand to be an actual operand, or another metric? Thanks!

    EDIT: Based on the only answer so far, it seems polymorphic associations are the way to go, but the answer is so brief I have no idea how they could be used in this way - can anyone elaborate?

    EDIT 2: OK, I think I'm getting somewhere - essentially I presently have a metric, which has_many operands, and an operand has_many metrics. I need a polymorphic self join, where a metric can also have many metrics - do I need to call this something else, perhaps calculated_metrics, so that the metric model can use itself? That would leave me with a situation where a metric has_many operands, and a metric has_many calculated_metrics.
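
    A minimal sketch of the polymorphic-association idea (model and column names here are assumptions, not taken from the actual app): operands get source_id and source_type columns, so a source can be either an attribute-style record or another metric.

        class Operand < ActiveRecord::Base
          belongs_to :metric                          # the metric consuming this operand
          belongs_to :source, :polymorphic => true    # an attribute source OR a metric
        end

        class Metric < ActiveRecord::Base
          has_many :operands                          # inputs to this metric
          has_many :uses, :as => :source,             # operands that consume this metric
                   :class_name => 'Operand'
        end

        class ModelAttribute < ActiveRecord::Base     # hypothetical "attribute" source
          has_many :uses, :as => :source, :class_name => 'Operand'
        end

    With this, operand.source returns either a Metric or a ModelAttribute, so evaluating a metric can recurse whenever an operand's source is itself a metric.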

    Read the article

  • What is the best design for these database tables?

    - by Mohammed Jamal
    I need to find the best way to keep the DB normalized given the large amount of data expected. My site has a tags table (containing a keyword and an id) and four types of data related to it (articles, resources, jobs, ...). The big question: for the tag relations, what is the best solution for optimization and query speed? Make a table for each relation, like articlesToTags(ArticleID, TagID), jobsToTags(jobid, tagid), etc., or put it all in one table, like tagsrelation(tagid, itemid, itemtype)? I need your help. Please point me to articles that can help with this design, and consider that in the future the site may contain new sections related to tags. Thanks
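
    A minimal sketch of the single-relation option (table name and the type encoding are assumptions):

        CREATE TABLE tag_relations (
          tag_id    INT NOT NULL,
          item_id   INT NOT NULL,
          item_type TINYINT NOT NULL,   -- 1 = article, 2 = resource, 3 = job, ...
          PRIMARY KEY (tag_id, item_type, item_id),
          KEY idx_item (item_type, item_id)
        ) ENGINE=InnoDB;

    The composite primary key serves tag-to-items lookups and the secondary index serves item-to-tags lookups; adding a new section then means a new item_type value rather than a new table.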

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables: tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no? Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could apply either to an individual item on the reservation or to the reservation overall. Therefore I require that either reservation_id or reservation_item_id be null. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to null. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom
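
    If the "at most one of reservation_id / reservation_item_id is set" rule should be enforced in the database itself, one sketch is a BEFORE INSERT trigger - MySQL of this era parses but ignores CHECK constraints, and note SIGNAL needs MySQL 5.5+:

        DELIMITER //
        CREATE TRIGGER driver_tx_xor BEFORE INSERT ON driver_tx
        FOR EACH ROW
        BEGIN
          -- both columns set at once -> reject the row
          IF NEW.reservation_id IS NOT NULL AND NEW.reservation_item_id IS NOT NULL THEN
            SIGNAL SQLSTATE '45000'
              SET MESSAGE_TEXT = 'Only one of reservation_id / reservation_item_id may be set';
          END IF;
        END//
        DELIMITER ;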

    Read the article

  • Reading directly from the Doctrine Searchable index table

    - by phidah
    I've got a Doctrine table with the Searchable behavior enabled. Whenever a record is created, an index entry is made in another table. I have a model called Entry, and the behavior automatically created the table entry_index. My question: how can I use the data from this table directly, without going through the search(...) methods of my model? I want to create a tag cloud of the words used most, and the data in the index table is exactly what I need.
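
    A sketch of reading the index table directly (assuming the column names follow Doctrine Searchable's defaults, where each row is one keyword occurrence, so grouping approximates word frequency; the 50 is an arbitrary cutoff):

        SELECT keyword, COUNT(*) AS occurrences
        FROM entry_index
        WHERE keyword IS NOT NULL
        GROUP BY keyword
        ORDER BY occurrences DESC
        LIMIT 50;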

    Read the article

  • MySQL db connection

    - by Dragster
    Hi there. I have been searching the web for a way to connect my Android emulator to a MySQL db. I've found that you can't connect directly, only via a webserver that handles the requests from Android. I found the following code on www.helloandroid.com, but I don't understand it. If I run this code on the emulator nothing happens - the screen stays black. Where does Log.i land: in the Android screen, in the error log, or somewhere else? Can somebody help me with this code?

        package app.android.ticket;

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.util.ArrayList;

        import org.apache.http.HttpEntity;
        import org.apache.http.HttpResponse;
        import org.apache.http.NameValuePair;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.entity.UrlEncodedFormEntity;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.message.BasicNameValuePair;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;

        import android.app.Activity;
        import android.os.Bundle;
        import android.util.Log;

        public class fetchData extends Activity {

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                // call the method to run the data retrieval
                getServerData();
            }

            public static final String KEY_121 = "http://www.jorisdek.nl/android/getAllPeopleBornAfter.php";

            public fetchData() {
                Log.e("fetchData", "Initialized ServerLink ");
            }

            private void getServerData() {
                InputStream is = null;
                String result = "";

                // the year data to send
                ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                nameValuePairs.add(new BasicNameValuePair("year", "1980"));

                // http post
                try {
                    HttpClient httpclient = new DefaultHttpClient();
                    HttpPost httppost = new HttpPost(KEY_121);
                    httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                    HttpResponse response = httpclient.execute(httppost);
                    HttpEntity entity = response.getEntity();
                    is = entity.getContent();
                } catch (Exception e) {
                    Log.e("log_tag", "Error in http connection " + e.toString());
                }

                // convert response to string
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                    StringBuilder sb = new StringBuilder();
                    String line = null;
                    while ((line = reader.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    is.close();
                    result = sb.toString();
                } catch (Exception e) {
                    Log.e("log_tag", "Error converting result " + e.toString());
                }

                // parse json data
                try {
                    JSONArray jArray = new JSONArray(result);
                    for (int i = 0; i < jArray.length(); i++) {
                        JSONObject json_data = jArray.getJSONObject(i);
                        Log.i("log_tag", "id: " + json_data.getInt("id")
                                + ", name: " + json_data.getString("name")
                                + ", sex: " + json_data.getInt("sex")
                                + ", birthyear: " + json_data.getInt("birthyear"));
                    }
                } catch (JSONException e) {
                    Log.e("log_tag", "Error parsing data " + e.toString());
                }
            }
        }

    Read the article

  • Higher-speed options for executing a very large (20 GB) .sql file in MySQL

    - by Jonogan
    My firm was delivered a 20+ GB .sql file in response to a request for data from the gov't. I don't have many options for getting the data in a different format, so I need options for importing it in a reasonable amount of time. I'm running it on a high-end server (Win 2008 64-bit, MySQL 5.1) using Navicat's batch execution tool. It's been running for 14 hours and shows no signs of being near completion. Does anyone know of any higher-speed options for such a transaction, or is this what I should expect given the file size? Thanks
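
    A sketch of the session settings commonly used to speed up a large InnoDB dump import from the mysql command-line client (the filename is a placeholder; the checks are restored afterwards):

        SET autocommit = 0;
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SOURCE dump.sql;
        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;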

    Read the article

  • Creating Two Cascading Foreign Keys Against Same Target Table/Col

    - by alram
    I have the following tables:

        user (userid int [pk], name varchar(50))
        action (actionid int [pk], description nvarchar(50))

    They are referenced by another table that captures the relationship <user1> <action>'s <user2>. I did this with the following table:

        userAction (userActionId int [pk],
                    actionid int [fk: action.actionid],
                    userId1 int [fk ref's user.userid; on del/update cascade],
                    userId2 int [fk ref's user.userid; on del/update cascade])

    However, when I try to save the userAction table I get an error because I have two cascading FKs against user.userid. Is there any way to remedy this, or must I use a trigger?
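
    If this is SQL Server's "may cause cycles or multiple cascade paths" error, one common sketch is to drop the cascade actions from both FKs and do the cleanup in a trigger instead (T-SQL, assuming the table names above):

        CREATE TRIGGER trg_user_delete ON [user]
        INSTEAD OF DELETE
        AS
        BEGIN
            -- remove dependent rows for either role, then the users themselves
            DELETE FROM userAction
            WHERE userId1 IN (SELECT userid FROM deleted)
               OR userId2 IN (SELECT userid FROM deleted);
            DELETE FROM [user]
            WHERE userid IN (SELECT userid FROM deleted);
        END;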

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News:

        A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications.

    How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to automatically roll back a transaction on the first error/warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and starting another transaction commits the incomplete changes from the failed one. (I'm executing queries from PHP, but I don't want to check for failure in PHP, as that would add more round trips between the MySQL server and the webserver.) Thank you
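
    One server-side sketch is to wrap the statements in a stored procedure whose exit handler rolls back on any SQL error (procedure name and body are placeholders):

        DELIMITER //
        CREATE PROCEDURE do_work()
        BEGIN
          DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
          START TRANSACTION;
          -- ... the statements that must succeed or fail together ...
          COMMIT;
        END//
        DELIMITER ;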

    Read the article

  • Detecting changes between rows with same ID

    - by Noah
    I have a table containing some names and their associated IDs, along with a snapshot number: snapshot, id, name. I need to identify when a name has changed for an id between snapshots. For example, in the following data:

        1, 0, 'MOUSE_SPEED'
        1, 1, 'MOUSE_POS'
        1, 2, 'KEYBOARD_STATE'
        2, 0, 'MOUSE_BUTTONS'
        2, 1, 'MOUSE_POS'
        2, 2, 'KEYBOARD_STATE'

    ...the meaning of id 0 changed with snapshot 2, but the others remained the same. I'd like to construct a query that (ideally) returns:

        1, 0, 'MOUSE_SPEED'
        2, 0, 'MOUSE_BUTTONS'

    I am using PostgreSQL v8.4.2.
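
    A sketch using a self-join across adjacent snapshots, returning both the before and after rows (the table name is assumed, and snapshot numbers are assumed consecutive; 8.4's window functions, e.g. lag(), are an alternative):

        SELECT a.snapshot, a.id, a.name
        FROM names a
        JOIN names b
          ON b.id = a.id
         AND b.snapshot IN (a.snapshot - 1, a.snapshot + 1)
        WHERE b.name <> a.name
        ORDER BY a.id, a.snapshot;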

    Read the article

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema in a SQL Server 2005 environment. In traditional applications I'm used to having the constraints, but I started to wonder whether they are needed in this case.

    Read the article

  • SQL Server: concatenate string column value to 5 chars long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01', or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5-character length, with leading '0's concatenated to make it 5 characters long. Technique I applied:

        select right(('00000' + col1), 5) from table1;

    I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
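
    A likely cause: col1 is CHAR(5), so short values are stored blank-padded ('01' is really '01   '), and RIGHT('00000' + col1, 5) just returns the padded original. Trimming first behaves as intended (sketch):

        SELECT RIGHT('00000' + RTRIM(col1), 5) FROM table1;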

    Read the article

  • Querying a large text file containing JSON objects

    - by Maciek Sawicki
    Hi, I have a text file of a few gigabytes in the format:

        {"user_ip":"x.x.x.x", "action_type":"xxx", "action_data":{"some_key":"some_value"...},...}

    Each entry is one line. First I would like to easily find entries for a given IP. This part is easy because I can use grep, for example, though even here I would like a better solution, because I want the response as fast as possible. The next part is more complicated: I would like to find entries from a selected IP, of a selected type, and with a particular value of some_key in action_data. I would probably have to convert this file to a SQL db (probably SQLite, because it will be a desktop app), but I'd like to ask whether better solutions exist.
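
    A sketch of the SQLite route in Python (file names are placeholders): load each JSON line once, index user_ip and action_type, and the lookups become indexed queries instead of full scans.

        import json
        import sqlite3

        conn = sqlite3.connect("actions.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS actions
                        (user_ip TEXT, action_type TEXT, action_data TEXT)""")
        conn.execute("CREATE INDEX IF NOT EXISTS idx_ip_type ON actions (user_ip, action_type)")

        with open("log.txt") as f:
            for line in f:
                entry = json.loads(line)
                conn.execute("INSERT INTO actions VALUES (?, ?, ?)",
                             (entry["user_ip"], entry["action_type"],
                              json.dumps(entry.get("action_data", {}))))
        conn.commit()

        # example lookup: all entries for one IP and action type
        rows = conn.execute("SELECT action_data FROM actions "
                            "WHERE user_ip = ? AND action_type = ?",
                            ("1.2.3.4", "click")).fetchall()

    Filtering on some_key inside action_data would still mean parsing the stored JSON per matching row, so a frequently queried key could be promoted to its own indexed column at load time.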

    Read the article

  • About null values

    - by user329820
    Hi, I have a question: if we declare a variable and then do not explicitly set it, is it null automatically? I.e., will the comparison in the code below evaluate to true or false? Thanks

        DECLARE @val CHAR(4)
        If @val = NULL
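
    A short sketch of the distinction: the unset variable is indeed NULL, but "= NULL" evaluates to UNKNOWN (with the default ANSI_NULLS ON), so only IS NULL tests it reliably:

        DECLARE @val CHAR(4);
        IF @val IS NULL PRINT 'is null';   -- prints
        IF @val = NULL  PRINT 'equals';    -- never prints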

    Read the article

  • Cursor returns zero rows from query to table

    - by brockoli
    I've created an SQLiteDatabase in my app and populated it with some data. I can connect to my AVD with a terminal, and when I issue select * from articles; I get a list of all the rows in my table and everything looks fine. However, in my code when I query my table, I get a cursor back that has my table's columns, but zero rows of data. Here is my code:

        mDbHelper.open();
        Cursor articles = mDbHelper.fetchAllArticles();
        startManagingCursor(articles);
        Cursor feeds = mDbHelper.fetchAllFeeds();
        startManagingCursor(feeds);
        mDbHelper.close();

        int titleColumn = articles.getColumnIndex("title");
        int feedIdColumn = articles.getColumnIndex("feed_id");
        int feedTitleColumn = feeds.getColumnIndex("title");

        /* Check if our result was valid. */
        if (articles != null) {
            int count = articles.getCount();
            /* Check if at least one Result was returned. */
            if (articles.moveToFirst()) {

    In the above code, my Cursor articles returns with my 4 columns, but when I call getCount() it returns zero, even though I can see hundreds of rows of data in that table from the command line. Any idea what I might be doing wrong here? Also, here is my code for fetchAllArticles:

        public Cursor fetchAllArticles() {
            return mDb.query(ARTICLES_TABLE, new String[] {ARTICLE_KEY_ROWID,
                    ARTICLE_KEY_FEED_ID, ARTICLE_KEY_TITLE, ARTICLE_KEY_URL},
                    null, null, null, null, null);
        }

    Rob W.

    Read the article

  • Join Where Rows Don't Exist or Where Criteria Match...?

    - by Greg
    I'm trying to write a query to tell me which orders have valid promo codes. Promo codes are only valid between certain dates and, optionally, for certain packages. I'm having trouble even explaining how this works (see the pseudo-code below), but basically: if there are packages associated with a promo code, then the order has to have one of those packages and be within a valid date range; otherwise it just has to be within a valid date range. The whole "if PromoPackage rows exist" part is really throwing me off, and I feel like I should be able to do this without a whole bunch of UNIONs. (I'm not even sure that would make it easier at this point...) Anybody have any ideas for the query?

        if OrderPromoCode = PromoCode then
          if OrderTimestamp is between PromoStartTimestamp and PromoEndTimestamp then
            if PromoCode has packages associated with it
              // yes: if PackageID is one of the specified packages
              //   yes: code is valid
              //   no:  invalid
              // no: code is valid

    Order:

        OrderID* | OrderTimestamp | PackageID | OrderPromoCode
        1        | 1/2/11         | 1         | ABC
        2        | 1/3/11         | 2         | ABC
        3        | 3/2/11         | 2         | DEF
        4        | 4/2/11         | 3         | GHI

    Promo:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp*
        ABC        | 1/1/11               | 2/1/11
        ABC        | 3/1/11               | 4/1/11
        DEF        | 1/1/11               | 1/11/13
        GHI        | 1/1/11               | 1/11/13

    PromoPackage:

        PromoCode* | PromoStartTimestamp* | PromoEndTimestamp* | PackageID*
        ABC        | 1/1/11               | 2/1/11              | 1
        ABC        | 1/1/11               | 2/1/11              | 3
        GHI        | 1/1/11               | 1/11/13             | 1

    Desired Result:

        OrderID | IsPromoCodeValid
        1       | 1
        2       | 0
        3       | 1
        4       | 0
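
    One sketch that reproduces the desired result (T-SQL-style; nested EXISTS expresses the "packages, if any" rule):

        SELECT o.OrderID,
               CASE WHEN EXISTS (
                   SELECT 1
                   FROM Promo p
                   WHERE p.PromoCode = o.OrderPromoCode
                     AND o.OrderTimestamp BETWEEN p.PromoStartTimestamp AND p.PromoEndTimestamp
                     AND (NOT EXISTS (SELECT 1 FROM PromoPackage pp         -- no packages: dates suffice
                                      WHERE pp.PromoCode = p.PromoCode
                                        AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                        AND pp.PromoEndTimestamp = p.PromoEndTimestamp)
                          OR EXISTS (SELECT 1 FROM PromoPackage pp          -- packages: one must match
                                     WHERE pp.PromoCode = p.PromoCode
                                       AND pp.PromoStartTimestamp = p.PromoStartTimestamp
                                       AND pp.PromoEndTimestamp = p.PromoEndTimestamp
                                       AND pp.PackageID = o.PackageID))
               ) THEN 1 ELSE 0 END AS IsPromoCodeValid
        FROM [Order] o;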

    Read the article

  • How to model a follower stream in App Engine?

    - by molicule
    I am trying to design tables for a follower relationship. Say I have a stream of 140-char records that have a user, a hashtag, and other text. Users follow other users and can also follow hashtags. I am outlining my design below, but it has two limitations, and I was wondering if others have smarter ways to accomplish the same goal. The issues with this are:

    1. The list of followers is copied into each record.
    2. If a new follower is added or one removed, all the records have to be updated.

    The code:

        class HashtagFollowers(db.Model):
            """This table contains the followers for each hashtag"""
            hashtag = db.StringProperty()
            followers = db.StringListProperty()

        class UserFollowers(db.Model):
            """This table contains the followers for each user"""
            username = db.StringProperty()
            followers = db.StringListProperty()

        class stream(db.Model):
            """This table contains the data stream"""
            username = db.StringProperty()
            hashtag = db.StringProperty()
            text = db.TextProperty()

            def save(self):
                """On each save, all the followers for each hashtag and user
                are added into another table with this record as the parent"""
                super(stream, self).save()
                hfs = HashtagFollowers.all().filter("hashtag =", self.hashtag).fetch(10)
                for hf in hfs:
                    sh = streamHashtags(parent=self, followers=hf.followers)
                    sh.save()
                ufs = UserFollowers.all().filter("username =", self.username).fetch(10)
                for uf in ufs:
                    uh = streamUsers(parent=self, followers=uf.followers)
                    uh.save()

        class streamHashtags(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

        class streamUsers(db.Model):
            """The stream record is the parent of this record"""
            followers = db.StringListProperty()

    Now, to get the stream of followed hashtags:

        indexes = db.GqlQuery("""SELECT __key__ FROM streamHashtags
                                 WHERE followers = 'myusername'""")
        keys = [k.parent() for k in indexes[offset:numresults]]
        return db.get(keys)

    Is there a smarter way to do this?

    Read the article

  • Importing a CSV file into pgsql

    - by running4surival
    OK, I'm trying to upload a CSV file into my table in pgsql, but I'm getting this error:

        ERROR:  invalid input syntax for integer: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"
        CONTEXT:  COPY members2, line 1, column id: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"

    I really don't understand why I'm getting this error: both my table and my CSV file have the same column names.
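
    The line being rejected is the header row itself, so one sketch of a fix is to tell COPY to skip it (the path is a placeholder; \copy works the same way from psql):

        COPY members2 FROM '/path/to/members.csv' WITH CSV HEADER;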

    Read the article

  • How Can I Create This Complicated Query?

    - by mTuran
    Hi, I have 3 tables: projects, skills, and project_skills. The projects table holds a project's general data, the skills table holds a skill id and name, and the project_skills table holds the project-skill relationships. Here is the schema:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;

    I want to select projects together with their related skill names. I think I need a JOIN, but I don't know how to write it. Thanks
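
    A sketch of the join, using GROUP_CONCAT to collapse the skill names into one column per project:

        SELECT p.*, GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        LEFT JOIN project_skills ps ON ps.project_id = p.id
        LEFT JOIN skills s ON s.id = ps.skill_id
        GROUP BY p.id;

    The LEFT JOINs keep projects that have no skills yet; an INNER JOIN would drop them.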

    Read the article

  • Select where and where not

    - by Simon
    I have a table containing lessons that I called "cours" (French), and I have linked the lessons to students through a join table that records whether a student attends a given lesson. I would like to return both the rows that ARE matched by the SELECT and the rows that are NOT. So, if a student follows 3 courses out of 5, I would like to return the 3 courses he follows and the 2 courses he doesn't. Is there a way to do it?
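
    A sketch with a LEFT JOIN (the join-table and column names are assumptions): unmatched courses come back with NULLs, which the CASE turns into a 0/1 flag.

        SELECT c.id, c.name,
               CASE WHEN e.student_id IS NULL THEN 0 ELSE 1 END AS followed
        FROM cours c
        LEFT JOIN student_cours e
               ON e.cours_id = c.id
              AND e.student_id = 42   -- the student being checked
        ORDER BY followed DESC, c.name;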

    Read the article

  • Two radically different queries against 4 million records execute in the same time - one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute-force query to be slow and the index-based one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

    Brute-force query: [execution plan image]

    Index-based exception query: [execution plan image]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article
