Search Results

Search found 31232 results on 1250 pages for 'database partitioning'.

Page 333/1250 | < Previous Page | 329 330 331 332 333 334 335 336 337 338 339 340  | Next Page >

  • Dummies' guide to locking in InnoDB

    - by ming yeow
    The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies' guide to InnoDB locking". I will start, and I will gather all responses into a wiki: a column needs to be indexed before row-level locking applies. EXAMPLE: DELETE FROM some_table WHERE column1 = 10 will lock up the whole table unless column1 is indexed.
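
    A minimal sketch of the point above, using a hypothetical table some_table (the table and column names are illustrative only): without an index on column1, InnoDB has to scan and lock every row it examines, while an indexed column lets it lock only the matching rows.

      -- hypothetical table used only for illustration
      CREATE TABLE some_table (
          id      INT PRIMARY KEY,
          column1 INT,
          payload VARCHAR(100)
      ) ENGINE=InnoDB;

      -- without an index on column1, this DELETE scans the table and, under the
      -- default isolation level, locks every row it examines
      DELETE FROM some_table WHERE column1 = 10;

      -- with an index, InnoDB can lock only the matching rows
      ALTER TABLE some_table ADD INDEX idx_column1 (column1);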

    Read the article

  • In this example, would Customer or AccountInfo properly be the entity group parent?

    - by Badhu Seral
    In this example, the Google App Engine documentation makes the Customer the entity group parent of the AccountInfo entity. Wouldn't AccountInfo encapsulate Customer rather than the other way around? Normally I would think of an AccountInfo class as including all of the information about the Customer.

      import javax.jdo.annotations.IdGeneratorStrategy;
      import javax.jdo.annotations.PersistenceCapable;
      import javax.jdo.annotations.Persistent;
      import javax.jdo.annotations.PrimaryKey;
      import com.google.appengine.api.datastore.Key;
      import com.google.appengine.api.datastore.KeyFactory;

      @PersistenceCapable
      public class AccountInfo {
          @PrimaryKey
          @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
          private Key key;

          public void setKey(Key key) {
              this.key = key;
          }
      }

      // ...
      KeyFactory.Builder keyBuilder =
          new KeyFactory.Builder(Customer.class.getSimpleName(), "custid985135");
      keyBuilder.addChild(AccountInfo.class.getSimpleName(), "acctidX142516");
      Key key = keyBuilder.getKey();
      AccountInfo acct = new AccountInfo();
      acct.setKey(key);
      pm.makePersistent(acct);

    Read the article

  • Maximum rows in a DBMS

    - by Am1rr3zA
    Is there any limit to the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a log file, and its row count increases very fast. I want to know what I should do to prevent any problems.
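
    One approach that often comes up for fast-growing log tables is range partitioning by date, which keeps individual partitions small and makes purging old data cheap. A minimal sketch, assuming a hypothetical MySQL log table and a server version with partitioning support (5.1+); names and partition boundaries are illustrative only:

      -- hypothetical log table, partitioned by month of the log entry
      CREATE TABLE log_entries (
          id        BIGINT NOT NULL AUTO_INCREMENT,
          logged_at DATETIME NOT NULL,
          message   TEXT,
          PRIMARY KEY (id, logged_at)          -- partitioning column must be in every unique key
      ) ENGINE=InnoDB
      PARTITION BY RANGE (TO_DAYS(logged_at)) (
          PARTITION p2010_05 VALUES LESS THAN (TO_DAYS('2010-06-01')),
          PARTITION p2010_06 VALUES LESS THAN (TO_DAYS('2010-07-01')),
          PARTITION pmax     VALUES LESS THAN MAXVALUE
      );

      -- dropping a whole partition is far cheaper than DELETEing millions of old rows
      ALTER TABLE log_entries DROP PARTITION p2010_05;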

    Read the article

  • Can somebody suggest a good learning source for IMS?

    - by Raja Reddy
    I would like to learn working with IMS; can somebody suggest a good source? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC, DB2 and QMF. So anything that depicts and explains the advantages and disadvantages compared with IMS would be really helpful. Thanks in advance for your help.

    Read the article

  • About NULL values

    - by user329820
    Hi, I have a question: if we declare a variable and then do not explicitly set it to a value, will it be NULL automatically? I mean, will the code below evaluate to true or false? Thanks.
      DECLARE @val CHAR(4)
      IF @val = NULL
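
    A minimal sketch of the behaviour, assuming SQL Server's default ANSI_NULLS setting: the variable is indeed NULL until assigned, but a comparison such as @val = NULL evaluates to UNKNOWN rather than true, so only the IS NULL test actually fires.

      DECLARE @val CHAR(4);

      -- @val is NULL here, but "= NULL" yields UNKNOWN, so this branch is skipped
      IF @val = NULL
          PRINT 'equals NULL';      -- not printed

      -- IS NULL is the correct test for an unassigned variable
      IF @val IS NULL
          PRINT 'is NULL';          -- printed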

    Read the article

  • How to display SUM of fields from a detail table in a master table

    - by max
    What is the best approach to display the summary of DETAIL fields in the master table? E.g. I have a master table called 'BILL' with all the bill-related data and a detail table ('BILL_DETAIL') with the bill-detail data, like NAME, PRICE, TAX, ... Now I want to list all BILLS, without the details, but with the sum of the PRICE and TAX stored in the detail table. Here is a simplified schema of those tables:
      TABLE BILL
      ----------
      - ID
      - NAME
      - ADDRESS
      - ...

      TABLE BILL_DETAIL
      -----------------
      - ID
      - BILLID
      - PRODUCT_NAME
      - PRICE
      - TAX
      - ...
    The retrieved table row should look like this: BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, SUM(BILL_DETAIL.PRICE), SUM(BILL_DETAIL.TAX), ... Any suggestions?
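
    A minimal sketch of the usual approach, assuming the simplified schema above (NAME and ADDRESS stand in for the customer columns, since CUSTOMER_NAME/CUSTOMER_ADDRESS do not appear in the schema as written): join the detail table, aggregate it, and group by the master columns.

      SELECT b.ID,
             b.NAME,
             b.ADDRESS,
             COALESCE(SUM(d.PRICE), 0) AS TOTAL_PRICE,   -- 0 for bills without details
             COALESCE(SUM(d.TAX), 0)   AS TOTAL_TAX
      FROM   BILL b
      LEFT JOIN BILL_DETAIL d ON d.BILLID = b.ID
      GROUP BY b.ID, b.NAME, b.ADDRESS;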

    Read the article

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back any transaction automatically on the first error or warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and on the next start of a transaction it commits the incomplete changes from the failed transaction. (I'm executing the queries from PHP, but I don't want to check for failure in PHP, as it would mean more round trips between the MySQL server and the web server.) Thank you
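
    There doesn't appear to be a global "abort on first error" switch, but one hedged sketch of a server-side workaround is to wrap the statements in a stored procedure with an exit handler, so the rollback happens without extra round trips from PHP. The procedure and table names here are illustrative only, and RESIGNAL needs MySQL 5.5+:

      DELIMITER //
      CREATE PROCEDURE do_transfer()
      BEGIN
          -- roll back and re-raise the error if any statement inside fails
          DECLARE EXIT HANDLER FOR SQLEXCEPTION
          BEGIN
              ROLLBACK;
              RESIGNAL;           -- MySQL 5.5+; omit on older servers
          END;

          START TRANSACTION;
          UPDATE accounts SET balance = balance - 10 WHERE id = 1;
          UPDATE accounts SET balance = balance + 10 WHERE id = 2;
          COMMIT;
      END //
      DELIMITER ;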

    Read the article

  • How can I create this complicated query?

    - by mTuran
    Hi, I have 3 tables: projects, skills and project_skills. In the projects table I hold a project's general data. In the second table, skills, I hold a skill id and skill name. I also have the project_skills table, which holds the project's skill relationships. Here is the schema of the tables:
      CREATE TABLE IF NOT EXISTS `project_skills` (
        `project_id` int(11) NOT NULL,
        `skill_id` int(11) NOT NULL,
        KEY `project_id` (`project_id`),
        KEY `skill_id` (`skill_id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

      CREATE TABLE IF NOT EXISTS `projects` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `employer_id` int(11) NOT NULL,
        `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
        `project_description` text COLLATE utf8_turkish_ci NOT NULL,
        `project_budget` int(11) NOT NULL,
        `project_allowedtime` int(11) NOT NULL,
        `project_deadline` datetime NOT NULL,
        `total_bids` int(11) NOT NULL,
        `average_bid` int(11) NOT NULL,
        `created` datetime NOT NULL,
        `active` tinyint(1) NOT NULL,
        PRIMARY KEY (`id`),
        KEY `created` (`created`),
        KEY `employer_id` (`employer_id`),
        KEY `active` (`active`),
        FULLTEXT KEY `project_title` (`project_title`,`project_description`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

      CREATE TABLE IF NOT EXISTS `skills` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `category` int(11) NOT NULL,
        `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
        `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
        `total_projects` int(11) NOT NULL,
        PRIMARY KEY (`id`),
        KEY `seo_name` (`seo_name`)
      ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;
    I want to select projects with their related skill names. I think I have to use JOIN but I don't know how. Thanks
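
    A minimal sketch of one way to do it, assuming the schema above: join through project_skills and collapse the skill names per project with GROUP_CONCAT, giving one row per project. Leaving the GROUP BY off instead gives one row per project/skill pair.

      SELECT p.id,
             p.project_title,
             GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
      FROM   projects p
      JOIN   project_skills ps ON ps.project_id = p.id
      JOIN   skills s          ON s.id = ps.skill_id
      WHERE  p.active = 1                       -- optional filter, illustrative only
      GROUP BY p.id, p.project_title;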

    Read the article

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now: using memcache for all major queries and leaving it alone to do what it does best, or using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, despite the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache and on upgrading the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News, "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications." How do you determine what an acceptable response time for a DB read request is? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • Are there any free online movie information APIs?

    - by Gary Willoughby
    For music there is the Gracenote CDDB SDK, etc., but does an online service exist for getting information about movies? The only solution I can see at the minute is querying IMDB and scraping the page. The problem I have is that I have a list of film titles and I want to retrieve stuff like the plot, director, cast, release date, DVD cover art, etc.

    Read the article

  • MySQL: filtering results using LEFT OUTER JOIN

    - by user288178
    My query:
      SELECT content.*, activity_log.content_id
      FROM content
      LEFT JOIN activity_log
             ON content.id = activity_log.content_id
            AND sess_id = '$sess_id'
      WHERE activity_log.content_id IS NULL
        AND visibility = $visibility
        AND content.reported < ".REPORTED_LIMIT."
        AND content.file_ready = 1
      LIMIT 1
    The purpose of that query is to get 1 row from the content table that has not been viewed by the user (identified by session id), but it still returns content that has been viewed. What is wrong? (I have checked the table, making sure that the content_ids are there.) Note: I think this is more efficient than using subqueries; thoughts?
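
    For comparison, a hedged sketch of the same filter written with NOT EXISTS, assuming activity_log has (content_id, sess_id) columns and that the PHP values are bound as parameters rather than interpolated. If this version returns the expected rows while the LEFT JOIN version does not, the join condition (for example an unqualified sess_id column) is the usual suspect.

      SELECT c.*
      FROM   content c
      WHERE  c.visibility = ?            -- $visibility
        AND  c.reported < ?              -- REPORTED_LIMIT
        AND  c.file_ready = 1
        AND  NOT EXISTS (
               SELECT 1
               FROM   activity_log a
               WHERE  a.content_id = c.id
                 AND  a.sess_id    = ?   -- $sess_id
             )
      LIMIT 1;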

    Read the article

  • PostgreSQL is incrementing an update by 2?

    - by John Tyler
    I'm migrating our model to PostgreSQL for the FTS and data integrity.
      UPDATE myschema.counters
      SET counter_count = (counter_count + 1)
      WHERE counter_id = ?
    This works as expected in MySQL, however in Postgres it increments by 2 each time. It is a simple int field, I believe; I don't have anything special going on.

    Read the article

  • ORA-01000 - maximum open cursors exceeded error

    - by PeteDaMeat
    I am receiving the following error message within my Delphi/Oracle application: "ORA-01000 - maximum open cursors exceeded". The code is as follows:
      begin
        for i := 0 to 150 do
        begin
          with myADOQuery do
          begin
            SQL.Text := 'DELETE FROM SOMETABLE';
            ExecSQL;  // from looking at V$OPEN_CURSOR, a new cursor is added on each iteration for the session
            Close;    // thought this would close the cursor, but it doesn't
          end;
        end;
      end;
    I'm aware I can resolve the problem by simply increasing the OPEN_CURSORS parameter, however I would rather find a solution whereby the cursor is closed after the query is executed. Any ideas? Delphi 2006 BDS, Oracle 10g

    Read the article

  • Django User "per project" group assignment

    - by Ben G
    Hi, here's my problem: my site has users, which can create projects and access other users' projects. Each project can assign different rights to users. So I could have project A where user "John" is in group "manager", and project "B" where user "John" is in group "worker". How could I use the Django User authentication model to do that? From a SQL point of view, what I would like is to be able to add "project_id" to the primary key for the "auth_user_groups" table. I don't think the profile is of any help here. Any advice? UPDATE: "worker" and "manager" are just two examples of the permission groups (or "roles") that my application defines. There will be more in the future. E.g. I will probably also have "admin", "reporting", etc.

    Read the article

  • MySQL query does not return any data

    - by Alex L
    Hi, I need to retrieve data from a specific time period. The query works fine until I specify the time period. Is there something wrong with the way I specify the time period? I know there are many entries within that time frame. This query returns an empty result:
      SELECT stop_times.stop_id,
             STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
             routes.route_short_name,
             routes.route_long_name,
             trips.trip_headsign
      FROM trips
      JOIN stop_times ON trips.trip_id = stop_times.trip_id
      JOIN routes     ON routes.route_id = trips.route_id
      WHERE stop_times.stop_id = 5508
      HAVING stopTime BETWEEN DATE_SUB(stopTime, INTERVAL 1 MINUTE)
                          AND DATE_ADD(stopTime, INTERVAL 20 MINUTE);
    Here is its EXPLAIN:
      +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
      | id | select_type | table      | type   | possible_keys    | key     | key_len | ref                           | rows | Extra       |
      +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
      |  1 | SIMPLE      | stop_times | ref    | trip_id,stop_id  | stop_id | 5       | const                         |  605 | Using where |
      |  1 | SIMPLE      | trips      | eq_ref | PRIMARY,route_id | PRIMARY | 4       | wmata_gtfs.stop_times.trip_id |    1 |             |
      |  1 | SIMPLE      | routes     | eq_ref | PRIMARY          | PRIMARY | 4       | wmata_gtfs.trips.route_id     |    1 |             |
      +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
      3 rows in set (0.00 sec)
    The query works if I remove the HAVING clause (i.e. don't specify a time range). It returns:
      +---------+----------+------------------+-----------------+---------------+
      | stop_id | stopTime | route_short_name | route_long_name | trip_headsign |
      +---------+----------+------------------+-----------------+---------------+
      |    5508 | 06:31:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 06:57:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 07:23:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 07:49:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 08:15:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 08:41:00 | "80"             | ""              | "FORT TOTTEN" |
      |    5508 | 09:08:00 | "80"             | ""              | "FORT TOTTEN" |
      +---------+----------+------------------+-----------------+---------------+
    I am using Google Transit format data loaded into MySQL. The query is supposed to provide stop times and bus routes for a given bus stop. For a bus stop, I am trying to get the route name, bus name, bus direction (headsign) and stop time. The results should be limited to bus times from 1 minute ago to 20 minutes from now. Please let me know if you can help.
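
    One thing that stands out in the query above is that the HAVING clause compares stopTime against a window built from stopTime itself rather than from the current time. A hedged sketch of what was probably intended, assuming arrival_time and the server clock are in the same local timezone (untested against the actual GTFS data):

      SELECT stop_times.stop_id,
             STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
             routes.route_short_name,
             routes.route_long_name,
             trips.trip_headsign
      FROM trips
      JOIN stop_times ON trips.trip_id = stop_times.trip_id
      JOIN routes     ON routes.route_id = trips.route_id
      WHERE stop_times.stop_id = 5508
      -- window around the current time of day, not around stopTime itself
      HAVING stopTime BETWEEN SUBTIME(CURTIME(), '00:01:00')
                          AND ADDTIME(CURTIME(), '00:20:00');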

    Read the article

  • SELECT in PL/SQL errors: INTO expected after SELECT

    - by levi
    I have the following query in a test script window:
      declare
        -- Local variables here
        p_StartDate date    := to_date('10/15/2012');
        p_EndDate   date    := to_date('10/16/2012');
        p_ClientID  integer := 000192;
      begin
        -- Test statements here
        select d.r            "R",
               e.amount       "Amount",
               e.inv_da       "InvoiceData",
               e.product      "ProductId",
               d.system_time  "Date",
               d.action_code  "Status",
               e.term_rrn     "IRRN",
               d.commiount    "Commission",
               0              "CardStatus"
        from docs d
        inner join ext_inv e on d.id = e.or_document
        inner join term t on t.id = d.term_id
        where d.system_time >= p_StartDate
          and d.system_time <= p_EndDate
          and e.need_r = 1
          and t.term_gr_id = p_ClientID;
      end;
    Here is the error:
      ORA-06550: line 9, column 3:
      PLS-00428: an INTO clause is expected in this SELECT statement
    I've been using T-SQL for a long time and I'm new to PL/SQL. What's wrong here?
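
    In PL/SQL, a SELECT inside a block has to put its result somewhere: into variables or a record with SELECT ... INTO (for exactly one row), or through a cursor loop for multiple rows. A minimal sketch of the cursor-loop form, reusing a trimmed version of the query above (the explicit date format mask and the output line are illustrative additions):

      declare
        p_StartDate date    := to_date('10/15/2012', 'MM/DD/YYYY');
        p_EndDate   date    := to_date('10/16/2012', 'MM/DD/YYYY');
        p_ClientID  integer := 000192;
      begin
        for rec in (
          select d.r, e.amount, d.system_time
          from   docs d
          inner join ext_inv e on d.id = e.or_document
          inner join term t    on t.id = d.term_id
          where  d.system_time >= p_StartDate
            and  d.system_time <= p_EndDate
            and  e.need_r = 1
            and  t.term_gr_id = p_ClientID
        ) loop
          -- each fetched row is available as rec.<column>
          dbms_output.put_line(rec.r || ' ' || rec.amount);
        end loop;
      end;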

    Read the article

  • SQL Server: pad a string column value to 5 characters long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01' or '1'. Requirement: whatever value is in col1, I need to retrieve it as 5 characters long, concatenated with leading '0's to make it 5 characters long. Technique I applied: select right(('00000' + col1), 5) from table1; I didn't see any reason why it wouldn't work, but it didn't. Can anyone help me achieve the desired result?
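
    A likely explanation, given that col1 is CHAR(5): CHAR values are blank-padded on the right, so '01' is stored as '01   ', and RIGHT('00000' + '01   ', 5) just returns '01   ' again. A sketch of the usual fix, trimming the trailing blanks before padding:

      SELECT RIGHT('00000' + RTRIM(col1), 5) AS padded_col1
      FROM   table1;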

    Read the article

  • Sqlite and Python -- return a dictionary using fetchone()?

    - by AndrewO
    I'm using sqlite3 in Python 2.5. I've created a table that looks like this:
      create table votes (
        bill text,
        senator_id text,
        vote text
      )
    I'm accessing it with something like this:
      v_cur.execute("select * from votes")
      row = v_cur.fetchone()
      bill = row[0]
      senator_id = row[1]
      vote = row[2]
    What I'd like to be able to do is have fetchone (or some other method) return a dictionary, rather than a list, so that I can refer to the field by name rather than position. For example:
      bill = row['bill']
      senator_id = row['senator_id']
      vote = row['vote']
    I know you can do this with MySQL, but does anyone know how to do it with SQLite? Thanks!!!

    Read the article

  • Importing a CSV file into pgsql

    - by running4surival
    OK, I'm trying to upload this CSV file into my table in pgsql, but I'm getting this error:
      ERROR:  invalid input syntax for integer: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"
      CONTEXT:  COPY members2, line 1, column id: "mlname,mfname,slname,sfname,address,postalcode,membershiptype,hphone,email"
    I don't really understand why I'm getting this error; both my table and my CSV file have the same column names.
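
    The error text suggests the first line of the CSV (the header with the column names) is being read as data and fed into the integer id column. A hedged sketch of a COPY that skips the header and lists the target columns explicitly, assuming a hypothetical file path and PostgreSQL 9.0+ option syntax (the column list comes from the error message; id is left out so it can be generated):

      COPY members2 (mlname, mfname, slname, sfname, address,
                     postalcode, membershiptype, hphone, email)
      FROM '/path/to/members.csv'
      WITH (FORMAT csv, HEADER true);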

    Read the article

  • Nested MySQL SELECT statements

    - by Jimmy Kamau
    I have a query as below:
      $sult = mysql_query("select * from stories where `categ` = 'businessnews' and `stryid`='"
          . mysql_query("SELECT * FROM comments WHERE `comto`='"
              . mysql_query("select * from stories where `categ` ='businessnews'")
              . " ORDER BY COUNT(comto) DESC")
          . "' LIMIT 3") or die(mysql_error());
      while ($ow = mysql_fetch_array($sult)) {
    The code above should return the top 3 'stories' with the most comments (COUNT(comto)). The comments are stored in a different table from the stories. The code above does not return any values and doesn't show any errors. Could someone please help?
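
    Nesting mysql_query() calls inside string concatenation can't work as written, since mysql_query() returns a result resource rather than a value. A hedged sketch of a single query that returns the top 3 business-news stories by comment count, assuming comments.comto holds the story id (table and column names are taken from the snippet above):

      SELECT s.*, COUNT(c.comto) AS comment_count
      FROM   stories s
      LEFT JOIN comments c ON c.comto = s.stryid
      WHERE  s.categ = 'businessnews'
      GROUP BY s.stryid
      ORDER BY comment_count DESC
      LIMIT 3;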

    Read the article
