Search Results

Search found 41135 results on 1646 pages for 'non relational database'.

  • Foreign Key Relationships and "belongs to many"

    - by jan
    I have the following model:

    - S belongs to T; T has many S
    - A, B, C, D, E (etc.) each have one T, so that T should belong to each of A, B, C, D, E (etc.)

    At first I set up my foreign keys so that in A, fk_a_t would be the foreign key on A.t to T(id); in B it'd be fk_b_t, etc. Everything looks fine in my UML (using MySQL Workbench), but generating the Yii models results in it thinking that T has many A, B, C, D (etc.), which to me is the reverse. It sounds to me like I would need A_T, B_T, C_T (etc.) join tables, but this would be a pain as there are a lot of tables with this relationship. I've also read that the better way to do this might be some sort of behavior, such that A, B, C, D (etc.) can behave as a T, but I'm not clear on exactly how to do this (I will keep researching it). What do you think is the better solution?

    Here's the DDL (auto-generated). Just pretend that there are more than 3 tables referencing T.

    -- -----------------------------------------------------
    -- Table `mydb`.`T`
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`T` (
      `id` INT NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (`id`))
    ENGINE = InnoDB;

    -- -----------------------------------------------------
    -- Table `mydb`.`S`
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`S` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `thing` VARCHAR(45) NULL,
      `t` INT NOT NULL,
      PRIMARY KEY (`id`),
      INDEX `fk_S_T` (`id` ASC),
      CONSTRAINT `fk_S_T`
        FOREIGN KEY (`id`)
        REFERENCES `mydb`.`T` (`id`)
        ON DELETE NO ACTION
        ON UPDATE NO ACTION)
    ENGINE = InnoDB;

    -- -----------------------------------------------------
    -- Table `mydb`.`A`
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`A` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `T` INT NOT NULL,
      `stuff` VARCHAR(45) NULL,
      `bar` VARCHAR(45) NULL,
      `foo` VARCHAR(45) NULL,
      PRIMARY KEY (`id`),
      INDEX `fk_A_T` (`T` ASC),
      CONSTRAINT `fk_A_T`
        FOREIGN KEY (`T`)
        REFERENCES `mydb`.`T` (`id`)
        ON DELETE NO ACTION
        ON UPDATE NO ACTION)
    ENGINE = InnoDB;

    -- -----------------------------------------------------
    -- Table `mydb`.`B`
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`B` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `T` INT NOT NULL,
      `stuff2` VARCHAR(45) NULL,
      `foobar` VARCHAR(45) NULL,
      `other` VARCHAR(45) NULL,
      PRIMARY KEY (`id`),
      INDEX `fk_A_T` (`T` ASC),
      CONSTRAINT `fk_A_T`
        FOREIGN KEY (`T`)
        REFERENCES `mydb`.`T` (`id`)
        ON DELETE NO ACTION
        ON UPDATE NO ACTION)
    ENGINE = InnoDB;

    -- -----------------------------------------------------
    -- Table `mydb`.`C`
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`C` (
      `id` INT NOT NULL AUTO_INCREMENT,
      `T` INT NOT NULL,
      `stuff3` VARCHAR(45) NULL,
      `foobar2` VARCHAR(45) NULL,
      `other4` VARCHAR(45) NULL,
      PRIMARY KEY (`id`),
      INDEX `fk_A_T` (`T` ASC),
      CONSTRAINT `fk_A_T`
        FOREIGN KEY (`T`)
        REFERENCES `mydb`.`T` (`id`)
        ON DELETE NO ACTION
        ON UPDATE NO ACTION)
    ENGINE = InnoDB;
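
    I'm guessing the relations Yii should end up with look roughly like the sketch below (class and column names are taken from the DDL above; I haven't checked what yiic actually generates), which makes me wonder whether the generated "T has many A" is just the other side of "A belongs to T" rather than the reverse:

    // Sketch only; the rest of each generated CActiveRecord class is omitted.
    class A extends CActiveRecord
    {
        public function relations()
        {
            return array(
                // A.T holds the id of the owning T row
                't' => array(self::BELONGS_TO, 'T', 'T'),
            );
        }
    }

    class T extends CActiveRecord
    {
        public function relations()
        {
            return array(
                // the complementary side: one T row is referenced by many A (and B, C, ...) rows
                'as' => array(self::HAS_MANY, 'A', 'T'),
                'bs' => array(self::HAS_MANY, 'B', 'T'),
            );
        }
    }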

    Read the article

  • Firebird multiple statements

    - by Aldo
    Hello, is there any way to execute multiple statements (none of which need to return anything) against Firebird in one go, for example by importing a SQL file and executing it? I've been looking for a while and couldn't find anything on this.
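
    The closest I've come up with myself is something like the sketch below, run through the command-line isql tool (the table and file names are made up), but I don't know if this is the intended way:

    -- script.sql
    SET TERM ^ ;

    EXECUTE BLOCK AS
    BEGIN
      INSERT INTO customers (id, name) VALUES (1, 'Alice');
      INSERT INTO customers (id, name) VALUES (2, 'Bob');
      UPDATE customers SET name = 'Bob Jr.' WHERE id = 2;
    END^

    SET TERM ; ^

    -- run from the shell (credentials and path are placeholders):
    --   isql -user SYSDBA -password masterkey /path/to/mydb.fdb -i script.sql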

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data. Here are the two approaches we have thought of right now:

    - Using memcache on all major queries and leaving it alone to do what it does best.
    - Using memcache for the most commonly retrieved data, and combining it with a standard harddrive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Or should we focus on memcache alone and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
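
    To make the combined idea concrete, something like the rough PHP sketch below is what we're picturing (the class name, cache directory and TTLs are made up, and it assumes the standard Memcache extension); we're not sure it's the right shape:

    <?php
    // Two-tier cache sketch: memcache first, disk as fallback.
    class TwoTierCache
    {
        private $memcache;
        private $dir;

        public function __construct($host = 'localhost', $port = 11211, $dir = '/tmp/cache')
        {
            $this->memcache = new Memcache();
            $this->memcache->connect($host, $port);
            $this->dir = $dir;
        }

        public function get($key)
        {
            $value = $this->memcache->get($key);
            if ($value !== false) {
                return $value;                              // hot data served from memory
            }
            $file = $this->dir . '/' . md5($key);
            if (is_file($file)) {
                $value = unserialize(file_get_contents($file));
                $this->memcache->set($key, $value, 0, 300); // promote back into memory
                return $value;
            }
            return false;
        }

        public function set($key, $value, $ttl = 300)
        {
            $this->memcache->set($key, $value, 0, $ttl);
            file_put_contents($this->dir . '/' . md5($key), serialize($value));
        }
    }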

    Read the article

  • Solr; "rookie" question

    - by Camran
    I use SolrPhpClient on my classifieds website, and whenever a user adds or removes a classified, the Solr index gets updated via PHP code. So I wonder: does this mean that my Solr index is open for anybody to tamper with? The same question applies to the Solr admin page. If I set a password for the admin page, does this mean that my classifieds website won't have access to update or remove documents in the Solr index? Thanks

    Read the article

  • Need some serious help with self join issue.

    - by kralco626
    As you may know, you cannot index a view that contains a self join; in fact, not even two joins of the same table, even if it's not technically a self join. A couple of guys from Microsoft came up with a workaround, but it's so complicated I don't understand it. The solution to the problem is here: http://jmkehayias.blogspot.com/2008/12/creating-indexed-view-with-self-join.html

    The view I want to apply this workaround to is:

    create VIEW vw_lookup_test
    WITH SCHEMABINDING
    AS
    select count_big(*) as [count_all],
           awc_txt, city_nm, str_nm, stru_no,
           o.circt_cstdn_nm [owner],
           t.circt_cstdn_nm [tech],
           dvc.circt_nm, data_orgtn_yr
    from ((dbo.dvc
        join dbo.circt on dvc.circt_nm = circt.circt_nm)
        join dbo.circt_cstdn o on circt.circt_cstdn_user_id = o.circt_cstdn_user_id)
        join dbo.circt_cstdn t on dvc.circt_cstdn_user_id = t.circt_cstdn_user_id
    group by awc_txt, city_nm, str_nm, stru_no, o.circt_cstdn_nm, t.circt_cstdn_nm, dvc.circt_nm, data_orgtn_yr
    go

    Any help would be greatly appreciated! Thanks so much in advance!
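
    From what I can gather, the general idea is to keep a trigger-maintained copy of the table that gets joined twice, so the indexed view only ever references each table once; a very rough sketch of what I think that would look like here (I may well be misreading the linked post, so please correct me):

    -- Sketch only: a duplicate custodian table kept in sync by triggers on
    -- dbo.circt_cstdn (the triggers themselves are omitted).
    CREATE TABLE dbo.circt_cstdn_copy (
        circt_cstdn_user_id INT           NOT NULL PRIMARY KEY,
        circt_cstdn_nm      VARCHAR(100)  NOT NULL
    );

    -- The view would then join the original table for one role and the copy
    -- for the other, so it no longer joins the same table twice:
    --   join dbo.circt_cstdn      o on circt.circt_cstdn_user_id = o.circt_cstdn_user_id
    --   join dbo.circt_cstdn_copy t on dvc.circt_cstdn_user_id   = t.circt_cstdn_user_id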

    Read the article

  • Migrating a Large amount of data from old publishing site to new site

    - by tommizzle
    Hi, I am currently in the process of creating a new news/publishing site on the Movable Type platform. There are around 20 or so sites with 20,000+ rows of data to be moved/aggregated into ~8 sites (we have a number of location-specific sites and are going to aggregate the content from these into one single site for each niche). We have discussed how to do this and came to the conclusion that it would probably be better to hire somebody to do it (I could probably do it, but I'm limited on time and am sure that a specialist would be more efficient). So my questions to you guys are:

    1) What kind of skill set should we look for in an applicant?
    2) There will be a large amount of input from our side... is getting somebody to work remotely out of the question?
    3) How long would a task like this traditionally take (I know this question is very subjective, but an estimation would be awesome)?
    4) Do you have any recommendations for firms who would be able to take on a large task like this?

    Thanks in advance, Tom

    Read the article

  • Many-To-Many dimensional model

    - by Mevdiven
    Folks, I have a dimension table called DIM_FILE which holds information about the files we receive from customers. Each file has detail records, which constitute my fact table, CUST_DETAIL. In the main process a file goes through several stages, and each stage tags a status onto it. Long story short, I have a many-to-many relationship. Any ideas around star-schema dimensional modeling? A customer record belongs to only a single file, and a file can have multiple statuses.

    FACT
    ----
    CustID
    FileID
    AmountDue

    DIM_FILE
    --------
    FileID
    FileName
    DateReceived

    FILE_STATUS
    -----------
    FileID
    StatusDateTime
    StatusCode
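
    To make it concrete, this is roughly how I'd expect to resolve a file's current status at query time, assuming "latest StatusDateTime wins" (table and column names as listed above), but I'm not sure it's the right dimensional approach:

    -- Current status per file: pick the latest FILE_STATUS row for each FileID.
    SELECT f.FileID,
           f.FileName,
           fs.StatusCode,
           fs.StatusDateTime
    FROM DIM_FILE f
    JOIN FILE_STATUS fs
      ON fs.FileID = f.FileID
     AND fs.StatusDateTime = (SELECT MAX(fs2.StatusDateTime)
                              FROM FILE_STATUS fs2
                              WHERE fs2.FileID = f.FileID);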

    Read the article

  • Natural vs surrogate keys on support tables

    - by Bugeo
    I have read many articles about the battle between natural and surrogate primary keys. I agree with using surrogate keys to identify records of tables whose contents are created by the user. But in the case of supporting tables, what should I use? For example, in a hypothetical table "orderStates", if I use a natural key I would have the following data:

    TABLE ORDERSTATES
    {ID: "NEW", NAME: "New"}
    {ID: "MANAGEMENT", NAME: "Management"}
    {ID: "SHIPPED", NAME: "Shipped"}

    If I use a surrogate key I would have the following data:

    TABLE ORDERSTATES
    {ID: 1, CODE: "NEW", NAME: "New"}
    {ID: 2, CODE: "MANAGEMENT", NAME: "Management"}
    {ID: 3, CODE: "SHIPPED", NAME: "Shipped"}

    Now let's take an example: a user enters a new order. In the case where I use natural keys, in the code I can write this:

    newOrder.StateOrderId = "NEW";

    With surrogate keys, instead, every time I have an additional step:

    stateOrderId_NEW = ...; // retrieve the id corresponding to the record with code "NEW"
    newOrder.StateOrderId = stateOrderId_NEW;

    The same will happen every time I have to move the order to a new status. So, in this case, what are the reasons to choose one key type over the other?
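
    One middle ground I've been considering is a surrogate ID plus a unique constraint on the natural code, so the readable code can still be used in statements; a sketch (the ORDERS table and its columns are just placeholders):

    CREATE TABLE ORDERSTATES (
        ID   INT         NOT NULL PRIMARY KEY,
        CODE VARCHAR(20) NOT NULL UNIQUE,  -- natural code kept unique
        NAME VARCHAR(50) NOT NULL
    );

    -- Set an order's state by the readable code without hard-coding the surrogate id:
    UPDATE ORDERS
    SET STATEORDERID = (SELECT ID FROM ORDERSTATES WHERE CODE = 'NEW')
    WHERE ORDERID = 12345;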

    Read the article

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

    Tag:
        content = CharField
        creator = ForeignKey
        used = IntegerField

    It is a many-to-many relationship between tags and whatever gets tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one in case of deletion. Tag.used is maintained because I want to speed up answering the question "How many times is this tag used?". However, this obviously seems to slow insertion down. Please tell me how to improve this design. Thanks in advance.
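
    One alternative I've considered is dropping the denormalized counter and computing the count from the association table, with an index to keep it cheap; a sketch in plain SQL (the association table name item_tag is made up, since the model above doesn't name it):

    -- Index the association table on the tag side so counting is an index scan.
    CREATE INDEX idx_item_tag_tag ON item_tag (tag_id);

    -- How many times is one tag used?
    SELECT COUNT(*) FROM item_tag WHERE tag_id = 42;

    -- Usage counts for all tags at once:
    SELECT tag_id, COUNT(*) AS used
    FROM item_tag
    GROUP BY tag_id;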

    Read the article

  • Alternative design for a synonyms table?

    - by Majid
    I am working on an app which suggests alternative words/phrases for input text. I have doubts about what might be a good design for the synonyms table. Design considerations:

    - The number of synonyms is variable, e.g. "football" has one synonym (soccer), but "particular" has two (particularly, specifically).
    - If football is a synonym of soccer, the relation exists in the opposite direction as well.
    - Our goal is to query a word and find its synonyms.
    - We want to keep the table small and make adding new words easy.

    What comes to my mind is a two-column design with column A = word and column B = delimited list of synonyms. Is there any better alternative? What about using two tables, one for words and the other for relations?
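
    For the two-table idea, this is the kind of layout I have in mind, with a synonym-group table so the relation stays symmetric on its own (all names are placeholders):

    CREATE TABLE word (
        word_id INT          NOT NULL PRIMARY KEY,
        text    VARCHAR(100) NOT NULL UNIQUE
    );

    -- Every word in the same group is a synonym of every other word in that group.
    CREATE TABLE word_group (
        word_id  INT NOT NULL REFERENCES word (word_id),
        group_id INT NOT NULL,
        PRIMARY KEY (word_id, group_id)
    );

    -- Synonyms of 'football': other members of any group that 'football' belongs to.
    SELECT DISTINCT w2.text
    FROM word w1
    JOIN word_group g1 ON g1.word_id  = w1.word_id
    JOIN word_group g2 ON g2.group_id = g1.group_id
    JOIN word w2       ON w2.word_id  = g2.word_id
    WHERE w1.text = 'football'
      AND w2.word_id <> w1.word_id;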

    Read the article

  • How do these user/userParam references relate to the Customer and Account lookups?

    - by marmalade
    In the following code example, how do the user/userParam references relate to the Customer and Account lookups, and what is the relationship between Customer and Account?

    // PersistenceManager pm = ...;
    Transaction tx = pm.currentTransaction();
    User user = userService.currentUser();
    List<Account> accounts = new ArrayList<Account>();
    try {
        tx.begin();
        Query query = pm.newQuery("select from Customer " +
                                  "where user == userParam " +
                                  "parameters User userParam");
        List<Customer> customers = (List<Customer>) query.execute(user);
        query = pm.newQuery("select from Account " +
                            "where parent-pk == keyParam " +
                            "parameters Key keyParam");
        for (Customer customer : customers) {
            accounts.addAll((List<Account>) query.execute(customer.key));
        }
    } finally {
        if (tx.isActive()) {
            tx.rollback();
        }
    }

    Read the article

  • How can I remove rows with unique values? As in only keeping rows with duplicate values?

    - by user1456405
    Here's the conundrum: I'm a complete and utter noob when it comes to programming. I understand the basics, but am still learning JavaScript. I have a spreadsheet of surveys, in which I need to see how particular users have varied over time. As such, I need to disregard all rows with unique values in a particular column. The data looks like this:

    Response Date            Response_ID  Account_ID  Q.1
    10/20/2011 12:03:43 PM   23655956     1168161     8
    10/20/2011 03:52:57 PM   23660161     1168152     0
    10/21/2011 10:55:54 AM   23672903     1166121     7
    10/23/2011 04:28:16 PM   23694471     1144756     9
    10/25/2011 06:30:52 AM   23732674     1167449     7
    10/25/2011 07:52:28 AM   23734597     1087618     5

    I've found a way to do so in VBA, which sucks as I have to use Excel, per below:

    Sub Del_Unique()
        Application.ScreenUpdating = False
        Columns("B:B").Insert Shift:=xlToRight
        Columns("A:A").Copy Destination:=Columns("B:B")
        i = Application.CountIf(Range("A:A"), "<>") + 50
        If i > 65536 Then i = 65536
        Do
            If Application.CountIf(Range("B:B"), Range("A" & i)) = 1 Then
                Rows(i).Delete
            End If
            i = i - 1
        Loop Until i = 0
        Columns("B:B").Delete
        Application.ScreenUpdating = True
    End Sub

    But that requires mucking about. I'd really like to do it in Google Spreadsheets with a script that won't have to be changed. The closest I can get is retrieving all duplicate user ids from the range, but I can't associate that with the row. That code follows:

    function findDuplicatesInSelection() {
      var activeRange = SpreadsheetApp.getActiveRange();
      var values = activeRange.getValues();
      // values that appear at least once
      var once = {};
      // values that appear at least twice
      var twice = {};
      // values that appear at least twice, stored in a pretty fashion!
      var final = [];
      for (var i = 0; i < values.length; i++) {
        var inner = values[i];
        for (var j = 0; j < inner.length; j++) {
          var cell = inner[j];
          if (cell == "") continue;
          if (once.hasOwnProperty(cell)) {
            if (!twice.hasOwnProperty(cell)) {
              final.push(cell);
            }
            twice[cell] = 1;
          } else {
            once[cell] = 1;
          }
        }
      }
      if (final.length == 0) {
        Browser.msgBox("No duplicates found");
      } else {
        Browser.msgBox("Duplicates are: " + final);
      }
    }

    Anyhow, sorry if this is the wrong place or format, but half of what I've found so far has been from Stack; I thought it was a good place to start. Thanks!
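
    What I think I need is something like the following Apps Script sketch, which counts Account_IDs in one pass and then keeps only rows whose ID appears more than once (it assumes Account_ID is the third column and row 1 is a header), but I'm not sure it's right:

    function keepOnlyRepeatedAccounts() {
      var sheet = SpreadsheetApp.getActiveSheet();
      var data = sheet.getDataRange().getValues();
      var header = data[0];
      var rows = data.slice(1);
      var accountCol = 2;                      // zero-based index of Account_ID (assumed)

      // Pass 1: count how often each Account_ID occurs.
      var counts = {};
      rows.forEach(function (row) {
        var id = row[accountCol];
        counts[id] = (counts[id] || 0) + 1;
      });

      // Pass 2: keep only rows whose Account_ID occurs more than once.
      var kept = rows.filter(function (row) {
        return counts[row[accountCol]] > 1;
      });

      // Rewrite the sheet with the header plus the surviving rows.
      sheet.clearContents();
      var output = [header].concat(kept);
      sheet.getRange(1, 1, output.length, header.length).setValues(output);
    }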

    Read the article

  • How can I execute a SQL query in emacs lisp?

    - by Chris R
    I want to execute an SQL query and get its result in elisp:

    (let ((results (do-sql-query "SELECT * FROM a_table")))
      (do-something-with results))

    I'm using Postgres, and I already know all of my connection information (host, username, password, db, et al). I just want to execute the query and get the result back, synchronously.
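
    The closest I've come up with is shelling out to psql and parsing its unaligned output, roughly like the sketch below (connection details are placeholders, the password would have to come from PGPASSWORD or ~/.pgpass, and the function name is made up), but I'd prefer something cleaner:

    (defun my-psql-query (sql)
      "Run SQL against Postgres via psql and return a list of rows,
    each row being a list of column strings.  Sketch only."
      (let ((output (shell-command-to-string
                     (format "psql -h localhost -U myuser -d mydb -t -A -F '|' -c %s"
                             (shell-quote-argument sql)))))
        (mapcar (lambda (line) (split-string line "|"))
                (split-string output "\n" t))))

    ;; Usage:
    ;; (let ((results (my-psql-query "SELECT * FROM a_table")))
    ;;   (dolist (row results)
    ;;     (message "%S" row)))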

    Read the article

  • Question About DateCreated and DateModified Columns - MS SQL Server

    - by user311509
    CREATE TABLE Customer (
        customerID int identity (500,20) CONSTRAINT .
        .
        dateCreated datetime DEFAULT GetDate() NOT NULL,
        dateModified datetime DEFAULT GetDate() NOT NULL
    );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update/modify the record, dateModified and dateCreated remain as they are. What should I do? Obviously, I need the dateCreated value to remain as it was when the record was first inserted, while dateModified keeps changing whenever a change/modification occurs in the record's fields. In other words, can you please write a quick sample trigger? I don't know much yet... Any help will be appreciated.
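
    To be clear, something like the trigger sketched below is what I'm imagining (it assumes customerID is the primary key, per the CREATE TABLE above), though I don't know if it's correct:

    CREATE TRIGGER trg_Customer_Update
    ON Customer
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Re-stamp dateModified on every update; dateCreated is never touched.
        UPDATE c
        SET dateModified = GETDATE()
        FROM Customer AS c
        INNER JOIN inserted AS i ON i.customerID = c.customerID;
    END;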

    Read the article

  • How do you determine an acceptable response time for DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News, A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications. How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • Why won't this sort in Solr work?

    - by Camran
    I need to sort on a date-type field whose name is "mod_date". It works like this in the browser address bar:

    http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+desc

    But I am using a phpSolr client which sends a URL to Solr, and the URL sent is this:

    fq=+category%3A%22Bilar%22+%2B+car_action%3AS%C3%A4ljes&version=1.2&wt=json&json.nl=map&q=%2A%3A%2A&start=0&rows=5&sort=mod_date+desc // This won't work

    and it is echoed after this in PHP:

    $queryString = http_build_query($params, null, $this->_queryStringDelimiter);
    $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString);

    This won't work, and I don't know why! Everything else works fine, and all the right fields are returned; only the sort doesn't work. Any ideas? Thanks.

    BTW: The field "mod_date" contains something like: 2010-03-04T19:37:22.5Z

    EDIT: First I use PHP to send this to a SolrPhpClient, which is another PHP file called service.php:

    require_once('../SolrPhpClient/Apache/Solr/Service.php');
    $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');
    $results = $solr->search($querystring, $p, $limit, $solr_params);

    $solr_params is an array which contains the Solr parameters (q, fq, etc). Now, in service.php:

    $params['version'] = self::SOLR_VERSION;
    // common parameters in this interface
    $params['wt'] = self::SOLR_WRITER;
    $params['json.nl'] = $this->_namedListTreatment;
    $params['q'] = $query;
    $params['sort'] = 'mod_date desc'; // HERE IS THE SORT I HAVE PROBLEMS WITH
    $params['start'] = $offset;
    $params['rows'] = $limit;
    $queryString = http_build_query($params, null, $this->_queryStringDelimiter);
    $queryString = preg_replace('/%5B(?:[0-9]|[1-9][0-9]+)%5D=/', '=', $queryString);
    if ($method == self::METHOD_GET) {
        return $this->_sendRawGet($this->_searchUrl . $this->_queryDelimiter . $queryString);
    } else if ($method == self::METHOD_POST) {
        return $this->_sendRawPost($this->_searchUrl, $queryString, FALSE, 'application/x-www-form-urlencoded');
    }

    The $results contain the results from Solr... So this is the way I need to get working (via PHP). The URL below (also at the top of this question) works, but that's because I paste it into the address bar manually, not via the PHP client. That's just for debugging; I need to get it to work via the PHP client:

    http://localhost:8983/solr/select/?&q=bmw&sort=mod_date+des // Not via phpclient, but works

    UPDATE (2010-03-08): I have tried Donovan's code (the URLs) and they work fine. Now, I have noticed that it is one of the parameters causing the sort not to work: the "wt" parameter. If we take the URL from the top of this question and simply remove the "wt" parameter, then the sort works. BUT the results appear differently, which I believe stops my PHP code from recognizing them. Donovan would know this, I think. I am guessing that for the PHP client to work, the results must be in a specific structure, which gets messed up as soon as I remove the wt parameter. Donovan, help me please... Here is some background on what I use your SolrPhpClient for: I have a classifieds website which uses MySQL, but for searching I use Solr to search some indexed fields. Solr returns an array of ID numbers for all matches of the search criteria, and I then use those ID numbers to find everything in the MySQL db and fetch all the other (non-searchable) information.
    So, simplified: Search - Solr returns all matches as an array of ID numbers - the ID numbers from Solr are the same as the ID numbers in the MySQL db, so I can simply match every record whose ID appears in the Solr results array. I don't use faceting, boosting, relevancy or other fancy stuff. I only sort by the latest classified posted, and give users the option to also sort on the cheapest price. Nothing more. Then I use the "fq" parameter to query different fields in Solr depending on the category chosen by users (for example "cars", which in my language is "Bilar"). I am really stuck with this problem... Thanks for all the help.
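
    One thing I'm wondering about trying: Apache_Solr_Service::search() seems to take an array of extra parameters as its fourth argument, so maybe the sort could be passed in from the calling side instead of being hard-coded in service.php (I'm not sure whether this differs between SolrPhpClient versions):

    <?php
    require_once('../SolrPhpClient/Apache/Solr/Service.php');

    $solr = new Apache_Solr_Service('localhost', 8983, '/solr/');

    // Extra parameters for the request; 'sort' rides along with fq etc.
    $solr_params = array(
        'fq'   => '+category:"Bilar" +car_action:Säljes',
        'sort' => 'mod_date desc',
    );

    // $querystring, $offset and $limit as in the original code.
    $results = $solr->search($querystring, $offset, $limit, $solr_params);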

    Read the article

  • SQL Design: representing a default value with overrides?

    - by Mark Harrison
    I need a sparse table which contains a set of "override" values for another table. I also need to specify the default value for the items overridden. For example, if the default value is 17, then foo, bar, baz will have the values 17, 21, 17:

    table "things"       table "xvalue"
    name  stuff          name  xval
    ----  -----          ----  ----
    foo   ...            bar   21
    bar   ...
    baz   ...

    If I don't care about a FK from xvalue.name -> things.name, I could simply put a "DEFAULT" name:

    table "xvalue"
    name     xval
    ----     ----
    DEFAULT  17
    bar      21

    But I like having a FK. I could have a separate default table, but it seems odd to have 2x the number of tables:

    table "xvalue_default"
    xval
    ----
    17

    table "xvalue"
    name  xval
    ----  ----
    bar   21

    I could have a "defaults table":

    tablename  attributename  defaultvalue
    xvalue     xval           17

    but then I run into type issues on defaultvalue. My operations guys prefer as compact a representation as possible, so they can most easily see the "diff", or deviations from the default. What's the best way to represent this, including the default value? This will be for Oracle 10.2, if that makes a difference.
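
    To illustrate the separate-default-table option, this is roughly the query-time fallback I have in mind (NVL picks the override where one exists, the default otherwise), though I'm not sure it's the most compact representation:

    -- One row per overridden thing; everything else falls back to the default.
    SELECT t.name,
           NVL(x.xval, d.xval) AS xval
    FROM things t
    CROSS JOIN xvalue_default d
    LEFT JOIN xvalue x ON x.name = t.name;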

    Read the article

  • MySQL - query to return CSV in a field?

    - by StackOverflowNewbie
    Assume I have the following tables:

    TABLE: foo
    - foo_id (PK)

    TABLE: tag
    - tag_id (PK)
    - name

    TABLE: foo_tag
    - foo_tag_id (PK)
    - foo_id (FK)
    - tag_id (FK)

    How do I query this so that I get a result like this:

    ==========================
    | foo_id |     tags      |
    ==========================
    |   1    |   foo, bar    |
    |   2    |   foo         |
    |   3    |   bar         |
    --------------------------

    Basically, I need all of foo's tags in one column, comma separated. Possible in MySQL?
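
    I came across GROUP_CONCAT; would a sketch like this be the right approach? (The LEFT JOINs are my own assumption, so that foo rows without tags still show up.)

    SELECT f.foo_id,
           GROUP_CONCAT(t.name ORDER BY t.name SEPARATOR ', ') AS tags
    FROM foo f
    LEFT JOIN foo_tag ft ON ft.foo_id = f.foo_id
    LEFT JOIN tag t      ON t.tag_id = ft.tag_id
    GROUP BY f.foo_id;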

    Read the article

  • How do I modify an attribute across all rows in a table?

    - by prgmatic
    Hi folks, my apologies for asking such a novice question, but I need help building a script, either in PHP or directly in MySQL, that can do the following:

    1. Take the values of a column in a table (text).
    2. Change them into capitalized words (from "this is a title" to "This Is A Title").
    3. Replace the old values (uncapitalized) with the new values (capitalized).

    Thanks for the help and support.
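
    In PHP, I'm imagining something like the sketch below with ucwords() (the table and column names articles/id/title are just placeholders), but I'd welcome corrections:

    <?php
    // Sketch: read every row, title-case the text column, write it back.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'password');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $rows = $pdo->query('SELECT id, title FROM articles')->fetchAll(PDO::FETCH_ASSOC);
    $update = $pdo->prepare('UPDATE articles SET title = :title WHERE id = :id');

    foreach ($rows as $row) {
        $update->execute(array(
            ':title' => ucwords(strtolower($row['title'])),  // "this is a title" -> "This Is A Title"
            ':id'    => $row['id'],
        ));
    }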

    Read the article
