Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

Page 336/1300

  • Is it possible to combine these 3 MySQL queries?

    - by Greenie
    I know the $downloadfile and I want the $user_id. By trial and error I found that this does what I want, but it's 3 separate queries and 3 while loops. I have a feeling there is a better way. And yes, I only have a very little idea about what I'm doing :)

        $result = pod_query("SELECT ID FROM wp_posts WHERE guid LIKE '%/$downloadfile'");
        while ($row = mysql_fetch_assoc($result)) {
            $attachment = $row['ID'];
        }

        $result = pod_query("SELECT pod_id FROM wp_pods_rel WHERE tbl_row_id = '$attachment'");
        while ($row = mysql_fetch_assoc($result)) {
            $pod_id = $row['pod_id'];
        }

        $result = pod_query("SELECT pod_id FROM wp_pods_rel WHERE tbl_row_id = '$pod_id' AND field_id = '28'");
        while ($row = mysql_fetch_assoc($result)) {
            $user_id = $row['pod_id'];
        }
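
    The three lookups could likely collapse into a single statement by self-joining wp_pods_rel; a sketch, assuming the table and column names quoted above are accurate (the second join mirrors the question's third query):

        SELECT rel2.pod_id AS user_id
        FROM wp_posts p
        JOIN wp_pods_rel rel1 ON rel1.tbl_row_id = p.ID
        JOIN wp_pods_rel rel2 ON rel2.tbl_row_id = rel1.pod_id
                             AND rel2.field_id = '28'
        WHERE p.guid LIKE '%/$downloadfile';  -- $downloadfile interpolated as in the question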

    Read the article

  • ASP.NET MVC small data storage

    - by Trimack
    Hi there, I am writing some learning tests (i.e. what's the answer for...; choose the correct options...). Now my question is, how should I store them? A SQL db seems like quite an overkill, but I really don't know what the best choice would be if I wanted to select a random subset of questions etc. Perhaps some simple XML files? Thanks for the advice.
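
    For what it's worth, picking a random subset is a one-liner once the questions live in any SQL store; a hedged sketch for SQL Server (the table name questions is an assumption):

        SELECT TOP 10 *      -- 10 random questions
        FROM questions
        ORDER BY NEWID();    -- NEWID() assigns each row a random sort key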

    Read the article

  • Syntax for "RETURNING" clause in MySQL PDO

    - by dmontain
    I'm trying to add a record and, at the same time, return the id of the record added. I read that it's possible to do with a RETURNING clause:

        $stmt->prepare("INSERT INTO tablename (field1, field2) VALUES (:value1, :value2) RETURNING id");

    but the insertion fails when I add RETURNING. There is an auto-incremented field called id in the table being inserted into. Can someone see anything wrong with my syntax? Or maybe PDO does not support RETURNING?
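
    Worth noting: RETURNING is PostgreSQL/Oracle syntax and MySQL does not support it, which would explain the failure regardless of PDO. The usual MySQL route is LAST_INSERT_ID(), exposed in PDO as $pdo->lastInsertId(); roughly:

        INSERT INTO tablename (field1, field2) VALUES ('a', 'b');
        SELECT LAST_INSERT_ID();  -- auto-increment id of the row just inserted on this connection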

    Read the article

  • Tool to compare the tables in two different databases

    - by user191124
    I am using Toad. Frequently I need to compare tables in two different test environments. The tables present in them are the same, but the data differs. I just need to know the differences between the same tables in the two different databases. Are there any tools that can be installed on Windows to do the comparison? I'd much appreciate your help. :)
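
    If the environments are Oracle (Toad's usual home), a quick data diff can also be done in plain SQL with MINUS, run in both directions; the schema names below are placeholders:

        SELECT * FROM test_env_1.my_table
        MINUS
        SELECT * FROM test_env_2.my_table;  -- rows present in env 1 but missing from env 2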

    Read the article

  • Rails advanced queries with join and sum calculation

    - by Dustin Brewer
    I have two models: companies and expenses. Companies have many expenses and expenses belong to companies. My expense model has an 'amount' column. I was wondering if there is a way to perform a find based on a date range and the amount column of the expenses - something like "top 3 companies by total expense amount over a 7-day period". I've tried for the better part of the day to get this to work; I've attempted joins, chaining named scopes, raw SQL, etc., and I'm not having any luck. Thanks for the help.
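
    For reference, the underlying SQL for "top 3 companies by total expenses over the last 7 days" would look roughly like the sketch below (MySQL-flavored; the created_at column follows Rails conventions but is an assumption here). In Rails 2.x it could be fed to Company.find_by_sql:

        SELECT c.id, c.name, SUM(e.amount) AS total_expenses
        FROM companies c
        JOIN expenses e ON e.company_id = c.id
        WHERE e.created_at >= CURDATE() - INTERVAL 7 DAY
        GROUP BY c.id, c.name
        ORDER BY total_expenses DESC
        LIMIT 3;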

    Read the article

  • Rails - Scalable calculation model

    - by H O
    I currently have a calculation structure in my Rails app with models metric, operand and operation_type. Presently, the metric model has many operands and can perform calculations based on the operation_type (e.g. sum, multiply, etc.), and each operand is defined as being right or left (so that if the operation is division, the numerator and denominator can be identified). Presently, an operand is always an attribute of some model, e.g. @customer.sales.selling_price.sum. In order to make this scalable, I need to allow an operand to be either an attribute of some kind, or the result of a previous operation, i.e. an operand can be a metric. I have included a diagram of how my models currently look. Can anyone assist me with the most elegant way of allowing an operand to be an actual operand, or another metric? Thanks!

    EDIT: It seems based on the only answer so far that perhaps polymorphic associations are the way to go on this, but the answer is so brief I have no idea how they could be used in this way - can anyone elaborate?

    EDIT 2: OK, I think I'm getting somewhere - essentially I presently have a metric, which has_many operands, and an operand has_many metrics. I need a polymorphic self-join, where a metric can also have many metrics - do I need to call this something else, perhaps calculated_metrics, so that the metric model can use itself? That would leave me with a situation where a metric has_many operands, and a metric has_many calculated_metrics.
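
    At the table level, the polymorphic association being discussed might look something like the sketch below; every name here is illustrative rather than taken from the app. Each operand row points at its source through a (source_type, source_id) pair, so the source can be a plain attribute definition or another metric:

        CREATE TABLE operands (
          id          INT PRIMARY KEY,
          metric_id   INT NOT NULL,          -- the metric this operand feeds
          side        CHAR(1) NOT NULL,      -- 'L' or 'R' (left/right operand)
          source_type VARCHAR(30) NOT NULL,  -- e.g. 'Attribute' or 'Metric'
          source_id   INT NOT NULL           -- id in the table named by source_type
        );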

    Read the article

  • Automatically create table on MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like:

        SELECT * FROM data_2010_1

    What I have been doing until now is: every time the script executes, it queries for the table; if the table exists, it does its work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.

    Update: Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions:

    1. If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten?
    2. What are the potential problems with the home-grown method I originally thought up?
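
    Two notes that may help: MySQL 5.1+ does ship a cron equivalent (the Event Scheduler, via CREATE EVENT), and the same release added native partitioning, which keeps everything in one logical table instead of one table per month. A sketch of monthly RANGE partitioning, with table and column names assumed:

        CREATE TABLE data (
          id      INT NOT NULL,
          created DATE NOT NULL
        )
        PARTITION BY RANGE (TO_DAYS(created)) (
          PARTITION p201001 VALUES LESS THAN (TO_DAYS('2010-02-01')),  -- January 2010
          PARTITION p201002 VALUES LESS THAN (TO_DAYS('2010-03-01')),  -- February 2010
          PARTITION pmax    VALUES LESS THAN MAXVALUE                  -- catch-all
        );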

    Read the article

  • MySQL subquery strangely slow

    - by aviv
    I have a query that selects from a sub-query. While the two queries look almost the same, the second query (in this sample) runs much slower. This query takes 1.2s:

        SELECT user.id, user.first_name  -- not user.*
        FROM user
        WHERE user.id IN
          (SELECT ref_id FROM education
           WHERE ref_type = 'user'
             AND education.institute_id = '58'
             AND education.institute_type = '1');

    EXPLAIN on this query gives:

        id=1  select_type=PRIMARY             table=user       type=index           key=first_name  key_len=152  rows=141192  Extra=Using where; Using index
        id=2  select_type=DEPENDENT SUBQUERY  table=education  type=index_subquery  possible_keys=ref_type,ref_id,institute_id,institute_type,ref_type_2  key=ref_id  key_len=4  ref=func  rows=1  Extra=Using where

    The second query takes 45s to run:

        SELECT user.*  -- instead of user.id, user.first_name
        FROM user
        WHERE user.id IN
          (SELECT ref_id FROM education
           WHERE ref_type = 'user'
             AND education.institute_id = '58'
             AND education.institute_type = '1');

    with EXPLAIN:

        id=1  select_type=PRIMARY             table=user       type=ALL             rows=141192  Extra=Using where
        id=2  select_type=DEPENDENT SUBQUERY  table=education  type=index_subquery  possible_keys=ref_type,ref_id,institute_id,institute_type,ref_type_2  key=ref_id  key_len=4  ref=func  rows=1  Extra=Using where

    Why is it slower if I query only indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
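
    One avenue worth trying: MySQL (before 5.6) executes IN (subquery) as a dependent subquery probed once per row of user, so rewriting it as a join often avoids the full scan; a sketch (DISTINCT guards against a user matching several education rows):

        SELECT DISTINCT user.*
        FROM user
        JOIN education e ON e.ref_id = user.id
        WHERE e.ref_type = 'user'
          AND e.institute_id = '58'
          AND e.institute_type = '1';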

    Read the article

  • Using a dummy row with NOT NULL to avoid DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULLs is not a good practice, but I have many optional lookup values which are FKs in the system, so to solve this issue here is what I am doing: I use NOT NULL for every FK / lookup column, and I make the first row in every lookup table (PK id = 1) a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, where needed, point FKs that have no real lookup value at the "none" row (PK = 1). Is this a good design, or are there other workarounds?

    EDIT: I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood; some do, some don't, depending on the country. So if I use NOT NULL for the FK between postal and neighborhood, then I'm stuck, as some value has to be entered. So what I am doing, in essence, is having a row in every table be a dummy row just to link the FKs. This way, row one in the neighborhood table will be:

        n_id = 1, name = 'none', etc...

    In the postal table I can then have:

        postal_code = '3456A3', FK (city) = 'Moscow', FK (neighborhood_id) = 1  -- as NOT NULL

    If I don't have a dummy row in the neighborhood lookup table, then I have to declare FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is one example, but a huge number of values would then be blank in many tables.
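
    For concreteness, the sentinel-row scheme being described might look like this sketch (types and names are illustrative only):

        CREATE TABLE neighborhood (
          n_id INT PRIMARY KEY,
          name VARCHAR(50) NOT NULL
        );
        INSERT INTO neighborhood (n_id, name) VALUES (1, 'none');  -- the dummy row

        CREATE TABLE postal (
          postal_code     VARCHAR(10) PRIMARY KEY,
          city            VARCHAR(50) NOT NULL,
          neighborhood_id INT NOT NULL REFERENCES neighborhood (n_id)  -- 1 = 'none'
        );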

    Read the article

  • How to display SUM fields from a detail table in a master table

    - by max
    What is the best approach to displaying the summary of DETAIL fields in the master table? E.g. I have a master table called 'BILL' with all the bill-related data and a detail table ('BILL_DETAIL') with the bill-detail-related data, like NAME, PRICE, TAX, ... Now I want to list all BILLs, without the details, but with the sums of the PRICE and TAX stored in the detail table. Here is a simplified schema of the tables:

        TABLE BILL
        ----------
        - ID
        - NAME
        - ADDRESS
        - ...

        TABLE BILL_DETAIL
        -----------------
        - ID
        - BILLID
        - PRODUCT_NAME
        - PRICE
        - TAX
        - ...

    The retrieved table row should look like this:

        BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, SUM(BILL_DETAIL.PRICE), SUM(BILL_DETAIL.TAX), ...

    Any suggestions?
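
    A sketch of the usual GROUP BY answer against the schema above (the LEFT JOIN keeps bills that have no detail rows yet):

        SELECT b.NAME, b.ADDRESS,
               SUM(d.PRICE) AS TOTAL_PRICE,
               SUM(d.TAX)   AS TOTAL_TAX
        FROM BILL b
        LEFT JOIN BILL_DETAIL d ON d.BILLID = b.ID
        GROUP BY b.ID, b.NAME, b.ADDRESS;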

    Read the article

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries against the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(), in which case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL. The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable, but some rarely-used code is prone to result-set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what the best alternative for us could be. As far as I know, ORMs have advanced in the last 5 years, so there might now be one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the necessity of manually decomposing their results (i.e. avoid manual rs.getInt(42), as such calls are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?

    Read the article

  • More efficient method for grabbing all child units

    - by Hazior
    I have a table in SQL that links to itself through a parentID column. I want to find the children, and their children, and so forth, until I find all the child objects. I have a recursive function that does this, but it seems very inefficient. Is there a way to get SQL to find all child objects? If so, how?
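
    If the server supports recursive CTEs (SQL Server 2005+, for instance), the whole subtree can be fetched in one statement instead of one round-trip per level; a sketch with assumed table and column names:

        WITH subtree AS (
            SELECT id, parentID FROM units WHERE id = 42   -- 42 = the starting unit's id
            UNION ALL
            SELECT u.id, u.parentID
            FROM units u
            JOIN subtree s ON u.parentID = s.id            -- walk down one level at a time
        )
        SELECT id FROM subtree;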

    Read the article

  • Detecting changes between rows with same ID

    - by Noah
    I have a table containing some names and their associated IDs, along with a snapshot: snapshot, id, name. I need to identify when a name has changed for an id between snapshots. For example, in the following data:

        1, 0, 'MOUSE_SPEED'
        1, 1, 'MOUSE_POS'
        1, 2, 'KEYBOARD_STATE'
        2, 0, 'MOUSE_BUTTONS'
        2, 1, 'MOUSE_POS'
        2, 2, 'KEYBOARD_STATE'

    ...the meaning of id 0 changed with snapshot 2, but the others remained the same. I'd like to construct a query that (ideally) returns:

        1, 0, 'MOUSE_SPEED'
        2, 0, 'MOUSE_BUTTONS'

    I am using PostgreSQL v8.4.2.
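
    One way to get exactly that output is to return every row belonging to an id whose name is not constant across snapshots; a sketch, with the table name snapshots assumed:

        SELECT snapshot, id, name
        FROM snapshots
        WHERE id IN (
            SELECT id
            FROM snapshots
            GROUP BY id
            HAVING COUNT(DISTINCT name) > 1  -- ids whose name changed at some point
        )
        ORDER BY id, snapshot;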

    Read the article

  • Static DB Provider in ASP.NET MVC Causing Memory Leak

    - by user364685
    Hi, I have an app I'm going to write in ASP.NET MVC, and I want to create a DatabaseFactory object, something like this:

        public class DatabaseFactory
        {
            private string dbConn
            {
                get { return <gets from config file>; }
            }

            public IDatabaseTableObject GetDatabaseTable()
            {
                IDatabaseTableObject databaseTableObject = new SQLDatabaseObject(dbConn);
                return databaseTableObject;
            }
        }

    This works fine, but I obviously have to instantiate the DatabaseFactory in every controller that needs it. If I made this static, so I could in theory just call DatabaseFactory.GetDatabaseTable(), it would cause a memory leak, wouldn't it?

    Read the article

  • PHP MySQL - replace a substring within a string

    - by apis17
    I want to replace ALL commas , with ,<space> in the Address column of my MySQL table. For example,

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1,Street Name |
        +----------------+----------------+

    into

        +----------------+-----------------+
        | Name           | Address         |
        +----------------+-----------------+
        | Someone name   | A1, Street Name |
        +----------------+-----------------+

    Thanks in advance.
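
    A sketch of an in-place fix using MySQL's REPLACE(); the inner REPLACE first collapses any existing ", " back to "," so rows that are already correct don't end up with double spaces (the table name is a placeholder):

        UPDATE address_table
        SET Address = REPLACE(REPLACE(Address, ', ', ','), ',', ', ');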

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

    - perUserClicks
    - perBrowserClicks

    However, within this dimension I have groups of dimension rows based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has additional facts specific to that group:

    - AboveFoldItems: eyeTime, loadTime
    - LeftNavItems: mouseOverTime
    - OnTheFlyItems: doesn't have any extra facts, but may in the future

    Is the following fact table schema OK?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But this seems like it would be a common problem, so there should be a common solution for it, right?
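
    One common answer, sketched below with assumed names, is to keep a core fact table for the shared measures and add a narrower fact table per item group, joined on the same dimension keys; each table then carries no always-NULL columns:

        -- Shared measures recorded for every site item
        CREATE TABLE FactSiteItem (
            DateKey          INT NOT NULL,
            SessionKey       INT NOT NULL,
            SiteItemKey      INT NOT NULL,
            perUserClicks    INT,
            perBrowserClicks INT
        );

        -- Extra measures that only above-the-fold items have
        CREATE TABLE FactAboveFoldItem (
            DateKey     INT NOT NULL,
            SessionKey  INT NOT NULL,
            SiteItemKey INT NOT NULL,
            eyeTime     INT,
            loadTime    INT
        );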

    Read the article
