Search Results

Search found 31357 results on 1255 pages for 'database indexes'.


  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    In Ruby on Rails, say the Actor model object is Tom Hanks and its "has_many" association holds 20,000 Fan objects; actor.fans then gives an Array with 20,000 elements. Presumably the elements are not pre-populated with values, since otherwise fetching an Actor object from the DB would be extremely time consuming. So is it on a "need to know" basis: does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it cannot optimize performance with sequential reads, which are faster on a hard disk because the records may sit in nearby sectors. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read every record sequentially and then process them in the random order. How would RoR know whether I am touching only a few random elements or all of them?
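
    For what it's worth, ActiveRecord associations are lazy but coarse-grained: nothing is fetched until the collection is first touched, and that first access loads the whole association with a single query rather than record by record. A sketch of that query, assuming conventional table and key names (fans, actor_id) and Tom Hanks having id 1:

        -- Issued once, on first access to actor.fans;
        -- later element accesses like actor.fans[500] hit the
        -- already-loaded in-memory array, not the database.
        SELECT * FROM fans WHERE fans.actor_id = 1;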

    Read the article

  • Any disadvantages to this tagging approach

    - by donpal
    I have a table of tags:

        ID   tag
        ---  ------
        1    tagt
        2    tagb
        3    tagz
        4    tagn

    In my items table, I'm storing those tags in serialized, comma-delimited format:

        ID   field1   tags
        ---  ------   ---------
        1    value1   tagt,tagb
        2    value2   tagb
        3    value3   tagb,tagn
        4    value4

    When I need to find items that have a given tag, I plan to deserialize the tags column. But I'm not actually sure how to do that, or whether it would be better to have a third table for the associations between tags and items instead.
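
    For comparison, a sketch of the third-table (junction) approach, using a hypothetical item_tags table; it turns tag lookups into a plain indexed join instead of string parsing:

        -- One row per (item, tag) pair.
        CREATE TABLE item_tags (
            item_id INT NOT NULL,
            tag_id  INT NOT NULL,
            PRIMARY KEY (item_id, tag_id)
        );

        -- All items carrying the tag 'tagb':
        SELECT i.*
        FROM items i
        JOIN item_tags it ON it.item_id = i.ID
        JOIN tags t ON t.ID = it.tag_id
        WHERE t.tag = 'tagb';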

    Read the article

  • Non-Access or -Base QBE tool?

    - by idbin.cath0br0
    Hello. I'm currently looking for a QBE tool that can execute queries on PostgreSQL or MySQL. The OS doesn't really matter. The reason is that we have to do QBE at school, but I don't want to use either Microsoft Access or OpenOffice.org Base (both lack features). Any help would be appreciated.

    Read the article

  • Need help with SQL Query

    - by StackOverflowNewbie
    Say I have 2 tables:

        Person
        - Id
        - Name

        PersonAttribute
        - Id
        - PersonId
        - Name
        - Value

    Further, let's say that each person has 2 attributes (say, gender and age). A sample record would look like this:

        Person->Id = 1
        Person->Name = 'John Doe'

        PersonAttribute->Id = 1
        PersonAttribute->PersonId = 1
        PersonAttribute->Name = 'Gender'
        PersonAttribute->Value = 'Male'

        PersonAttribute->Id = 2
        PersonAttribute->PersonId = 1
        PersonAttribute->Name = 'Age'
        PersonAttribute->Value = '30'

    Question: how do I query this such that I get a result like 'John Doe', 'Male', '30'?
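
    One possible shape for such a query, sketched with a conditional aggregate per attribute (standard SQL; each MAX(CASE ...) picks out one attribute row per person):

        SELECT p.Name,
               MAX(CASE WHEN a.Name = 'Gender' THEN a.Value END) AS Gender,
               MAX(CASE WHEN a.Name = 'Age'    THEN a.Value END) AS Age
        FROM Person p
        LEFT JOIN PersonAttribute a ON a.PersonId = p.Id
        GROUP BY p.Id, p.Name;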

    Read the article

  • Cached database

    - by radi
    Hi, in my project I need two tables, each with about 2000 rows. I want my application to be fast, so the DB should be loaded into memory (cached) when the app starts, and saved back to disk before it closes. I am using Java and I want to use SQL.

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products, and the product model would fit an RDBMS pretty badly because there are many different kinds of products with unique attributes. For example, there are books with ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways to do this in an RDBMS would be:

    1. An EAV model, which would have pretty bad performance characteristics and would make it impractical (or perform even worse) to display, say, the author of a book in a list of mixed products (think start page, recommended products etc.).
    2. Ignore the column count and put everything in the product table: although I deal with somewhat bigger databases (row-wise), I have no performance experience with tables of more than 20 columns, and I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour wrappers around a product DTO that expose only the relevant properties. My questions now:

    - Are my performance concerns with the many-columns approach valid?
    - Did I overlook something, and is there a much better way to do this in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with the different behaviour wrappers make sense?
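
    For context on option 1, a minimal EAV sketch with hypothetical product and product_attribute tables; every attribute shown in a mixed product list becomes another join or conditional aggregate, which is where the performance and ergonomics concerns come from:

        CREATE TABLE product (
            id    INT PRIMARY KEY,
            title VARCHAR(255) NOT NULL        -- common columns live here
        );

        CREATE TABLE product_attribute (
            product_id INT NOT NULL,           -- FK to product(id)
            name       VARCHAR(64) NOT NULL,   -- e.g. 'isbn', 'author', 'play_time'
            value      VARCHAR(255),
            PRIMARY KEY (product_id, name)
        );

        -- Showing the author next to each title in a mixed list
        -- already needs a conditional aggregate per attribute:
        SELECT p.title,
               MAX(CASE WHEN a.name = 'author' THEN a.value END) AS author
        FROM product p
        LEFT JOIN product_attribute a ON a.product_id = p.id
        GROUP BY p.id, p.title;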

    Read the article

  • Are there any examples/tutorials of using Spring 3.0 with Cassandra as a backend?

    - by zeroDivisible
    Hello. As the title says, I am trying to learn Spring 3.0 (I already know Django, Pylons and a few simpler MVC frameworks) and want to use Cassandra as the backend for my web application. Are there any real-world examples of doing this? Or maybe some tutorials? I know documentation exists for both technologies, yet I am looking for something "faster" to read that will get me rolling.

    Read the article

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve our performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching the most-used data. Here are the two approaches we have thought of so far:

    1. Use memcached for all major queries and leave it alone to do what it does best.
    2. Use memcached for the most commonly retrieved data, combined with a standard disk-based cache for everything else.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though it is a theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Or should we focus on utilizing memcached alone and upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • What is a good DBMS for archiving?

    - by Thomas.Winsnes
    I've been stuck in an MS SQL/MySQL world for a few years now, and I've decided to spread my wings a little further. At the moment I'm researching which DBMS is good at the things needed when archiving data, e.g. lots of writes and few reads. I've seen the NoSQL crusade, but I have a very RDBMS mindset, so I'm a bit skeptical. Does anyone have any suggestions? Or even any pointers to benchmarks etc. for this kind of thing. Thank you :) Thomas

    Read the article

  • MySQL Query order by number of rows?

    - by Clemens
    Hi, I have a MySQL table for votes. There's an id, a project_id and a vote field (which is 1 if a specific project has been voted for). Now I want to generate a ranking from those entries. Is there a way to get the number of votes for each project_id and automatically sort the entries by the number of TRUE votes per project with a single MySQL query? Or do you know a PHP way? E.g.:

        ID   Project ID   Vote
        1    2            1
        2    2            1
        3    1            1

        => Project no. 2 has 2 votes,
           project no. 1 has 1 vote.

    Thanks in advance!
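
    A sketch of the single-query approach, assuming the table is called votes; grouping by project and ordering by the count descending yields the ranking directly:

        SELECT project_id, COUNT(*) AS vote_count
        FROM votes
        WHERE vote = 1               -- count only actual votes
        GROUP BY project_id
        ORDER BY vote_count DESC;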

    Read the article

  • BI with Django?

    - by Helmut
    Is there a way to develop BI (Business Intelligence) solutions with Django? For that, it would need to be possible to define models with more than one data source. Is anybody out there who has experience with BI in Django? How could it work?

    Read the article

  • Transaction Isolation on select, insert, delete

    - by Bradford
    What could possibly go wrong with the following transaction if executed by concurrent users in the default isolation level of READ COMMITTED?

        BEGIN TRANSACTION

        SELECT * FROM t WHERE pid = 10 AND r BETWEEN 40 AND 60
        -- ... this returns tid = 1, 3, 5
        -- ... process returned data ...

        DELETE FROM t WHERE tid IN (1, 3, 5)

        INSERT INTO t (tid, pid, r) VALUES (77, 10, 35)
        INSERT INTO t (tid, pid, r) VALUES (78, 10, 37)
        INSERT INTO t (tid, pid, r) VALUES (79, 11, 39)

        COMMIT
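
    For illustration, one hedged sketch of the classic failure mode and a conventional remedy: under READ COMMITTED, a second user can run the same SELECT before either DELETE executes, so both sessions process tid = 1, 3, 5 and both try to insert tid 77-79 (duplicate keys), and rows matching the WHERE clause can also appear or vanish between statements. Locking the rows at read time, assuming a DBMS that supports FOR UPDATE, closes the first race; phantoms generally need SERIALIZABLE isolation or equivalent range locks:

        BEGIN TRANSACTION;

        -- A concurrent run blocks here instead of
        -- processing the same tids:
        SELECT * FROM t
        WHERE pid = 10 AND r BETWEEN 40 AND 60
        FOR UPDATE;

        -- ... process, DELETE, INSERT as before ...

        COMMIT;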

    Read the article

  • What's wrong with this SQL query?

    - by ThinkingInBits
    I have two tables: photographs, and photograph_tags. Photograph_tags contains a column called photograph_id (id in photographs). You can have many tags for one photograph. I have a photograph row related to three tags: boy, stream, and water. However, running the following query returns 0 rows:

        SELECT p.*
        FROM photographs p, photograph_tags c
        WHERE c.photograph_id = p.id
          AND c.value IN ('dog', 'water', 'stream')
        GROUP BY p.id
        HAVING COUNT(p.id) = 3

    Is something wrong with this query?
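
    One observation, for what it's worth: the photograph is tagged boy, stream and water, while the IN list asks for dog, water and stream, so only two tag rows match and HAVING COUNT(p.id) = 3 filters the group out. A sketch of the "has all three tags" query with a matching list (COUNT(DISTINCT ...) also guards against duplicate tag rows; assuming MySQL's relaxed GROUP BY for the p.* select list):

        SELECT p.*
        FROM photographs p
        JOIN photograph_tags c ON c.photograph_id = p.id
        WHERE c.value IN ('boy', 'water', 'stream')
        GROUP BY p.id
        HAVING COUNT(DISTINCT c.value) = 3;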

    Read the article

  • Tool to compare the tables in two different databases

    - by user191124
    I am using Toad. Frequently I need to compare tables in two different test environments. The tables in them are the same, but the data differs. I just need to know the differences between the same tables in the two different databases. Are there any tools that can be installed on Windows and used for the comparison? Much appreciate your help :)
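
    For a quick in-database check, a sketch using set difference, assuming an Oracle-style MINUS (Toad suggests Oracle) and a hypothetical database link named remote_db pointing at the second environment:

        -- Rows present locally but not in the remote table:
        SELECT * FROM my_table
        MINUS
        SELECT * FROM my_table@remote_db;

        -- ... and the reverse direction:
        SELECT * FROM my_table@remote_db
        MINUS
        SELECT * FROM my_table;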

    Read the article

  • More efficient method for grabbing all child units

    - by Hazior
    I have a table in SQL that links to itself through a parentID column. I want to find the children, and their children, and so forth until I have found all descendants. I have a recursive function that does this, but it seems very inefficient. Is there a way to get SQL to find all the child objects? If so, how?
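
    A sketch of pushing the recursion into the database with a recursive common table expression, assuming a hypothetical units(id, parentID) table and an engine that supports recursive CTEs (PostgreSQL 8.4+, SQL Server 2005+, which omits the RECURSIVE keyword, and others):

        WITH RECURSIVE descendants AS (
            -- anchor: direct children of the starting unit
            SELECT id, parentID
            FROM units
            WHERE parentID = 42        -- hypothetical starting id
            UNION ALL
            -- step: children of anything found so far
            SELECT u.id, u.parentID
            FROM units u
            JOIN descendants d ON u.parentID = d.id
        )
        SELECT * FROM descendants;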

    Read the article

  • Best practices for handling unique constraint violation

    - by umesh awasthi
    Hi all. While working on my application I came across a situation in which there is a likely chance of a unique constraint violation. I have the following options:

    1. Catch the exception and throw it back to the UI; at the UI, check for the exception and show an appropriate error message.
    2. A somewhat different idea: check in advance for the existence of the given unique value before starting the whole operation.

    My question is: what might be the best practice to handle such a situation? Currently we are using a combination of Struts2 + Spring 3.x + Hibernate 3.x. Thanks in advance.
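
    One caveat worth noting, with a hedged sketch assuming MySQL and a hypothetical users table: the check-then-insert approach is itself racy, since another transaction can insert the value between the check and the insert, so the constraint violation must still be handled. Letting the database resolve the conflict atomically sidesteps both the race and the exception:

        -- Insert, or update the existing row when the unique
        -- key on email already has this value:
        INSERT INTO users (email, name)
        VALUES ('a@example.com', 'Alice')
        ON DUPLICATE KEY UPDATE name = VALUES(name);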

    Read the article

  • Single large vs. multiple small MySQL tables for storing options

    - by Prasad
    Hi there. I'm aware of several questions on this forum related to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status and many more domain-specific groups with the same structure, which I plan to capture in an OPTIONS table. Another simple option is to declare the field as an ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/

    OPTIONS table:

        option_id   <will be referred to instead of the name>
        name
        value
        group

    Query:

        SELECT ... FROM options WHERE group = '15'

    Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables, instead of filtering by the group, would be easier to write and faster to execute. Or should I perhaps partition by the group or the tenant? Please suggest. Thanks.
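
    For comparison, a sketch of the single-table layout with a composite index, assuming hypothetical tenant_id and group_id columns (group itself is a reserved word in SQL); with the index leading on the filter columns, lookups stay index-bound as rows accumulate, which is the usual alternative weighed against per-group tables or partitioning:

        CREATE TABLE options (
            option_id INT PRIMARY KEY,
            tenant_id INT NOT NULL,
            group_id  INT NOT NULL,
            name      VARCHAR(64) NOT NULL,
            value     VARCHAR(255),
            INDEX idx_tenant_group (tenant_id, group_id)
        );

        -- A typical lookup is resolved on the composite index:
        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 7 AND group_id = 15;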

    Read the article

  • using dummy row with NOT NULL to solve DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULL is not good practice, but I have many optional lookup values which are FKs in the system, so here is how I am solving it: I use NOT NULL for every FK / lookup column, and I make the first row in every lookup table (PK id = 1) a dummy row with just "none" in all the columns. This way I can use NOT NULL throughout my schema and, where needed, point FKs that have no real lookup value at the "none" row (PK = 1). Is this a good design, or are there other workarounds?

    EDIT: I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood; some do and some don't, depending on the country. So if I use NOT NULL for the FK between postal and neighborhood, I'm stuck, because some value has to be entered. What I am doing in essence is keeping one row in every lookup table as a dummy row just to link such FKs. Row one in the neighborhood table would be:

        n_id = 1, name = none, etc.

    In the postal table I can then have:

        postal_code = 3456A3
        FK (city) = Moscow
        FK (neighborhood_id) = 1   (as NOT NULL)

    If I didn't have a dummy row in the neighborhood lookup table, I would have to declare the FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is one example, but a huge number of values would then be blank across many tables.
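
    For reference, a sketch of the two variants with hypothetical minimal columns; a nullable FK still enforces integrity on every non-NULL value, so the trade-off is mostly whether queries prefer a "none" sentinel (inner joins stay simple, but fake data exists) or NULL semantics (no dummy rows, but lookups need LEFT JOIN / IS NULL handling):

        -- Variant A: sentinel row, FK stays NOT NULL
        INSERT INTO neighborhood (n_id, name) VALUES (1, 'none');

        CREATE TABLE postal_a (
            postal_code     VARCHAR(10) PRIMARY KEY,
            neighborhood_id INT NOT NULL REFERENCES neighborhood(n_id)
        );

        -- Variant B: nullable FK, no dummy row
        CREATE TABLE postal_b (
            postal_code     VARCHAR(10) PRIMARY KEY,
            neighborhood_id INT NULL REFERENCES neighborhood(n_id)
        );

        -- With variant B, "postal codes without a neighborhood"
        -- is explicit rather than encoded as id 1:
        SELECT postal_code FROM postal_b WHERE neighborhood_id IS NULL;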

    Read the article
