Search Results

Search found 56342 results on 2254 pages for 'object database'.


  • About NULL values

    - by user329820
    Hi, I have a question: if we declare a variable and do not explicitly set it to a value, is it NULL automatically? In other words, will the code below evaluate to true or false? Thanks.

        DECLARE @val CHAR(4)
        IF @val = NULL
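
    A minimal T-SQL sketch (not part of the original question) of what happens here: an unassigned variable is indeed NULL, but under the default ANSI_NULLS setting a comparison with = NULL evaluates to UNKNOWN, so the IF branch is never taken; IS NULL is the test that actually fires.

        DECLARE @val CHAR(4);      -- declared but never assigned, so it is NULL

        IF @val = NULL             -- UNKNOWN under ANSI_NULLS ON: this branch is skipped
            PRINT 'equals NULL';

        IF @val IS NULL            -- the reliable test: this branch runs
            PRINT 'IS NULL';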

    Read the article

  • Optimize a MySQL count each duplicate Query

    - by Onema
    I have the following query that gets the city name, city ID, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity,
               Country_CA.CityID,
               globe_region.region_name,
               (SELECT count(Country_CA.City)
                  FROM Country_CA
                 WHERE City LIKE currentCity) as counter
          FROM Country_CA
          LEFT JOIN globe_region
                 ON globe_region.region_id = Country_CA.RegionID
                AND globe_region.country_code = Country_CA.CountryCode
         ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that have the same name, so when there is more than one town with the same name the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the sub-query in order to obtain the same results faster. The results should look like this:

        City        CityID    region_name       counter
        sheraton    2349269   British Columbia  1
        sherbrooke  2349270   Quebec            2
        sherbrooke  2349271   Nova Scotia       2
        shere       2349273   British Columbia  1
        sherridon   2349274   Manitoba          1
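
    A hedged sketch of one way to avoid the per-row correlated sub-query (untested, and it assumes the join columns shown in the question's query): compute the per-name counts once in a derived table and join them in. This counts exact name matches, which is effectively what LIKE without wildcards does.

        SELECT c.City AS currentCity,
               c.CityID,
               g.region_name,
               d.counter
          FROM Country_CA c
          LEFT JOIN globe_region g
                 ON g.region_id = c.RegionID
                AND g.country_code = c.CountryCode
          JOIN (SELECT City, COUNT(*) AS counter
                  FROM Country_CA
                 GROUP BY City) d
                 ON d.City = c.City
         ORDER BY c.City;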

    Read the article

  • Dummies guide to locking in InnoDB

    - by ming yeow
    The typical documentation on locking in InnoDB is way too confusing. I think it would be of great value to have a "dummies guide to InnoDB locking". I will start, and I will gather all responses into a wiki: the column needs to be indexed before row-level locking applies. EXAMPLE: a DELETE ... WHERE column1 = 10 will lock up the table unless column1 is indexed.
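
    A small illustrative sketch (the table and index names are made up), assuming InnoDB under its default REPEATABLE READ isolation: without an index on column1 the DELETE has to scan the table and ends up locking every row it examines; with a secondary index it locks only the matching entries.

        CREATE TABLE t (
            id      INT PRIMARY KEY,
            column1 INT
        ) ENGINE=InnoDB;

        -- No index on column1: this DELETE scans the whole table and
        -- takes locks on every row it looks at.
        DELETE FROM t WHERE column1 = 10;

        -- With a secondary index, only the matching rows are locked.
        CREATE INDEX idx_column1 ON t (column1);
        DELETE FROM t WHERE column1 = 10;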

    Read the article

  • Ruby on Rails Mongrel web server stuck when MySQL service is running

    - by Marcos Buarque
    Hi, I am a Ruby on Rails newbie and already have a problem. I have started the Mongrel web server and it works fine when the MySQL service isn't running. But when MySQL is on, Mongrel gets stuck and stops serving pages. So far I have tested the localhost:3000 URL. When MySQL is off, it serves the page. When I click "about application's environment", I get the message (of course) "Can't connect to MySQL server on 'localhost' (10061)". After starting the MySQL service and refreshing, I get no more answers and Mongrel does not serve the web page; it gets stuck with no answer to the browser. Then I have to stop the web server and restart it. I have installed the mysql2 gem with the command gem install mysql2. I was able to create the _test and _development databases with the command rake db:create. I have tested with the MySQL root user and a blank password, and also with a superuser I created. No success. Here is the server log:

        Started GET "/rails/info/properties" for 127.0.0.1 at Fri Dec 24 17:41:25 -0200 2010

        Mysql2::Error (Can't connect to MySQL server on 'localhost' (10061)):

        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (5.0ms)
        Rendered C:/Ruby187/lib/ruby/gems/1.8/gems/actionpack-3.0.3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (35.0ms)

    I am running on a Windows 7 environment with the firewall down.

    Read the article

  • Maximum Rows in a DBMS

    - by Am1rr3zA
    Is there any limit to the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a log file, and its row count increases very fast. What should I do to prevent any problems?

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts: perUserClicks and perBrowserClicks. However, within this dimension I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has more facts that are specific to that group:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems:   mouseOverTime
        OnTheFlyItems:  no extra facts yet, but there may be some in the future

    Is the following fact table schema OK?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But this seems like it would be a common problem, so there should be a common solution for it, right?
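
    An illustrative DDL sketch of one common alternative (names follow the question; this is a sketch of the trade-off, not a recommendation): keep the shared measures in one fact table at the date/session/item grain and move each group's specific measures into its own narrower fact table at the same grain, so no row carries NULL facts.

        -- Shared measures: one row per date / session / site item
        CREATE TABLE FactSiteItemActivity (
            DateKey          INT NOT NULL,
            SessionKey       INT NOT NULL,
            SiteItemKey      INT NOT NULL,
            perUserClicks    INT NOT NULL,
            perBrowserClicks INT NOT NULL
        );

        -- AboveFold-specific measures, same grain
        CREATE TABLE FactAboveFoldActivity (
            DateKey     INT NOT NULL,
            SessionKey  INT NOT NULL,
            SiteItemKey INT NOT NULL,
            eyeTime     INT NOT NULL,
            loadTime    INT NOT NULL
        );

        -- LeftNav-specific measures, same grain
        CREATE TABLE FactLeftNavActivity (
            DateKey       INT NOT NULL,
            SessionKey    INT NOT NULL,
            SiteItemKey   INT NOT NULL,
            mouseOverTime INT NOT NULL
        );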

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News: "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications." How do you determine what an acceptable response time for a DB read request is? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • In this example, would Customer or AccountInfo properly be the entity group parent?

    - by Badhu Seral
    In this example, the Google App Engine documentation makes the Customer the entity group parent of the AccountInfo entity. Wouldn't AccountInfo encapsulate Customer rather than the other way around? Normally I would think of an AccountInfo class as including all of the information about the Customer.

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;

        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;

        @PersistenceCapable
        public class AccountInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            public void setKey(Key key) {
                this.key = key;
            }
        }

        // ...

        KeyFactory.Builder keyBuilder =
            new KeyFactory.Builder(Customer.class.getSimpleName(), "custid985135");
        keyBuilder.addChild(AccountInfo.class.getSimpleName(), "acctidX142516");
        Key key = keyBuilder.getKey();

        AccountInfo acct = new AccountInfo();
        acct.setKey(key);
        pm.makePersistent(acct);

    Read the article

  • MySQL: automatic rollback on transaction failure

    - by praksant
    Is there any way to set MySQL to roll back any transaction automatically on the first error or warning? Right now, if everything goes well it commits, but on failure it leaves the transaction open, and on the next start of a transaction it commits the incomplete changes from the failed one. (I'm executing the queries from PHP, but I don't want to check for failure in PHP, as that would mean more round trips between the MySQL server and the web server.) Thank you.
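
    A hedged sketch of one way to get this behavior (MySQL 5.5+ for RESIGNAL; the procedure, table, and column names are made up): if the statements can live in a stored procedure, an EXIT HANDLER rolls everything back on the first error and re-raises it, so the client never commits a half-applied transaction.

        DELIMITER //
        CREATE PROCEDURE transfer_funds(IN p_from INT, IN p_to INT, IN p_amount DECIMAL(10,2))
        BEGIN
            -- On any SQL error: undo everything, then propagate the error to the caller.
            DECLARE EXIT HANDLER FOR SQLEXCEPTION
            BEGIN
                ROLLBACK;
                RESIGNAL;
            END;

            START TRANSACTION;
            UPDATE accounts SET balance = balance - p_amount WHERE id = p_from;
            UPDATE accounts SET balance = balance + p_amount WHERE id = p_to;
            COMMIT;
        END //
        DELIMITER ;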

    Read the article

  • How can I create this complicated query?

    - by mTuran
    Hi, I have 3 tables: projects, skills, and project_skills. In the projects table I hold a project's general data. In the second table, skills, I hold the skill id and skill name. I also have a project_skills table, which holds the projects' skill relationships. Here is the schema of the tables:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;

    I want to select projects with the related skill names. I think I have to use JOIN, but I don't know how. Thanks.
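
    A hedged sketch of the kind of JOIN this calls for, using only the tables above: GROUP_CONCAT collapses each project's skill names into one comma-separated column, and the LEFT JOINs keep projects that have no skills yet.

        SELECT p.id,
               p.project_title,
               GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
          FROM projects p
          LEFT JOIN project_skills ps ON ps.project_id = p.id
          LEFT JOIN skills s          ON s.id = ps.skill_id
         GROUP BY p.id, p.project_title;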

    Read the article

  • How to display SUM fields from a detail table in a master table

    - by max
    What is the best approach to display the summary of detail fields in the master table? E.g. I have a master table called 'BILL' with all the bill-related data and a detail table ('BILL_DETAIL') with the bill's detail data, like NAME, PRICE, TAX, ... Now I want to list all BILLs, without the details, but with the sum of the PRICE and TAX stored in the detail table. Here is a simplified schema of those tables:

        TABLE BILL
        ----------
        - ID
        - NAME
        - ADDRESS
        - ...

        TABLE BILL_DETAIL
        -----------------
        - ID
        - BILLID
        - PRODUCT_NAME
        - PRICE
        - TAX
        - ...

    The retrieved table row should look like this:

        BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, sum(BILL_DETAIL.PRICE), sum(BILL_DETAIL.TAX), ...

    Any suggestions?
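
    A hedged sketch of the usual approach against the simplified schema above (the CUSTOMER_* aliases follow the example row; adjust to the real column names): join the detail table and aggregate per bill.

        SELECT b.ID,
               b.NAME       AS CUSTOMER_NAME,
               b.ADDRESS    AS CUSTOMER_ADDRESS,
               SUM(d.PRICE) AS TOTAL_PRICE,
               SUM(d.TAX)   AS TOTAL_TAX
          FROM BILL b
          LEFT JOIN BILL_DETAIL d ON d.BILLID = b.ID
         GROUP BY b.ID, b.NAME, b.ADDRESS;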

    Read the article

  • MySQL query does not return any data

    - by Alex L
    Hi, I need to retrieve data from a specific time period. The query works fine until I specify the time period. Is there something wrong with the way I specify the time period? I know there are many entries within that time frame. This query returns empty:

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') as stopTime,
               routes.route_short_name,
               routes.route_long_name,
               trips.trip_headsign
          FROM trips
          JOIN stop_times ON trips.trip_id = stop_times.trip_id
          JOIN routes     ON routes.route_id = trips.route_id
         WHERE stop_times.stop_id = 5508
        HAVING stopTime BETWEEN DATE_SUB(stopTime, INTERVAL 1 MINUTE)
                            AND DATE_ADD(stopTime, INTERVAL 20 MINUTE);

    Here is its EXPLAIN:

        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        | id | select_type | table      | type   | possible_keys    | key     | key_len | ref                           | rows | Extra       |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        |  1 | SIMPLE      | stop_times | ref    | trip_id,stop_id  | stop_id | 5       | const                         |  605 | Using where |
        |  1 | SIMPLE      | trips      | eq_ref | PRIMARY,route_id | PRIMARY | 4       | wmata_gtfs.stop_times.trip_id |    1 |             |
        |  1 | SIMPLE      | routes     | eq_ref | PRIMARY          | PRIMARY | 4       | wmata_gtfs.trips.route_id     |    1 |             |
        +----+-------------+------------+--------+------------------+---------+---------+-------------------------------+------+-------------+
        3 rows in set (0.00 sec)

    The query works if I remove the HAVING clause (i.e. don't specify a time range). It then returns:

        +---------+----------+------------------+-----------------+---------------+
        | stop_id | stopTime | route_short_name | route_long_name | trip_headsign |
        +---------+----------+------------------+-----------------+---------------+
        |    5508 | 06:31:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 06:57:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 07:23:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 07:49:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 08:15:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 08:41:00 | "80"             | ""              | "FORT TOTTEN" |
        |    5508 | 09:08:00 | "80"             | ""              | "FORT TOTTEN" |

    I am using Google Transit format data loaded into MySQL. The query is supposed to provide stop times and bus routes for a given bus stop. For a bus stop, I am trying to get:

        Route Name
        Bus Name
        Bus Direction (headsign)
        Stop time

    The results should be limited only to bus times from 1 minute ago to 20 minutes from now. Please let me know if you can help.
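
    A hedged guess at the cause, with a sketch of the intended filter (untested; it assumes arrival_time is a plain time of day and ignores the wrap-around past midnight): the HAVING clause compares stopTime to a window built around stopTime itself rather than around the current time, and mixing bare TIME values with DATE_SUB/DATE_ADD can yield NULLs, which would explain the empty result. Comparing against the current time instead looks like this:

        SELECT stop_times.stop_id,
               STR_TO_DATE(stop_times.arrival_time, '%H:%i:%s') AS stopTime,
               routes.route_short_name,
               routes.route_long_name,
               trips.trip_headsign
          FROM trips
          JOIN stop_times ON trips.trip_id = stop_times.trip_id
          JOIN routes     ON routes.route_id = trips.route_id
         WHERE stop_times.stop_id = 5508
        HAVING stopTime BETWEEN SUBTIME(CURTIME(), '00:01:00')
                            AND ADDTIME(CURTIME(), '00:20:00');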

    Read the article

  • PostgreSQL is incrementing an update by 2?

    - by John Tyler
    I'm migrating our model to PostgreSQL for the FTS and data integrity.

        update myschema.counters set counter_count = (counter_count + 1) where counter_id = ?

    This works as expected in MySQL; however, in Postgres it is incrementing by 2 each time. It is a simple int field, I believe; I don't have anything special going on.
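
    A small diagnostic sketch (an assumption about the usual cause, not a confirmed answer): when a single increment moves the value by 2, the statement is typically being executed twice or a trigger on the table adds its own increment. The catalog query below lists any user-defined triggers on myschema.counters, and raising log_statement (needs sufficient privileges) shows every statement the server actually receives.

        -- Any user-defined triggers on the table?
        SELECT tgname, pg_get_triggerdef(oid)
          FROM pg_trigger
         WHERE tgrelid = 'myschema.counters'::regclass
           AND NOT tgisinternal;

        -- Log every statement for this session to see whether the UPDATE arrives twice.
        SET log_statement = 'all';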

    Read the article

  • In a star schema, are foreign key constraints between facts and dimensions necessary?

    - by Garett
    I'm getting my first exposure to data warehousing, and I'm wondering whether it is necessary to have foreign key constraints between facts and dimensions. Are there any major downsides to not having them? I'm currently working with a relational star schema. In traditional applications I'm used to having them, but I started to wonder if they are needed in this case. I'm currently working in a SQL Server 2005 environment.

    Read the article

  • Are there any free online movie information APIs?

    - by Gary Willoughby
    For music there is the Gracenote CDDB SDK, etc., but does an online service exist for getting information about movies? The only solution I can see at the minute is querying IMDB and scraping the page. The problem I have is that I have a list of film titles, and I want to retrieve things like the plot, director, cast, release date, DVD cover art, etc.

    Read the article

  • SELECT in PL/SQL errors: INTO expected after SELECT

    - by levi
    I have the following query in a test script window:

        declare
          -- Local variables here
          p_StartDate date    := to_date('10/15/2012');
          p_EndDate   date    := to_date('10/16/2012');
          p_ClientID  integer := 000192;
        begin
          -- Test statements here
          select d.r           "R",
                 e.amount      "Amount",
                 e.inv_da      "InvoiceData",
                 e.product     "ProductId",
                 d.system_time "Date",
                 d.action_code "Status",
                 e.term_rrn    "IRRN",
                 d.commiount   "Commission",
                 0             "CardStatus"
            from docs d
            inner join ext_inv e on d.id = e.or_document
            inner join term t on t.id = d.term_id
           where d.system_time >= p_StartDate
             and d.system_time <= p_EndDate
             and e.need_r = 1
             and t.term_gr_id = p_ClientID;
        end

    Here is the error:

        ORA-06550: line 9, column 3:
        PLS-00428: an INTO clause is expected in this SELECT statement

    I've been using T-SQL for a long time and I'm new to PL/SQL. What's wrong here?
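
    A hedged sketch of the usual fix (the column list is abbreviated and the loop body is made up): inside a PL/SQL block a SELECT has to deliver its rows somewhere, e.g. into variables with SELECT ... INTO for a single row, or through a cursor FOR loop when many rows come back.

        begin
          for rec in (select d.r           as r,
                             e.amount      as amount,
                             d.system_time as system_time
                        from docs d
                        inner join ext_inv e on d.id = e.or_document)
          loop
            -- do something with each row
            dbms_output.put_line(rec.r || ' ' || rec.amount);
          end loop;
        end;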

    Read the article

  • Can somebody suggest a good learning source for IMS?

    - by Raja Reddy
    I would like to learn to work with IMS; can somebody suggest a good source? I'm not sure if it matters, but I have quite good exposure to and experience with INSYNC DB2 and QMF. So anything that explains the advantages and disadvantages compared to IMS would be really helpful. Thanks in advance for your help.

    Read the article

  • ORA-01000 - maximum open cursors exceeded error

    - by PeteDaMeat
    I am receiving the following error message in my Delphi/Oracle application: "ORA-01000: maximum open cursors exceeded". The code is as follows:

        begin
          for i := 0 to 150 do
          begin
            with myADOQuery do
            begin
              SQL.Text := 'DELETE FROM SOMETABLE';
              ExecSQL;  // from looking at V$OPEN_CURSOR, a new cursor is added on each iteration for the session
              Close;    // thought this would close the cursor, but it doesn't
            end;
          end;
        end;

    I'm aware I can resolve the problem by simply increasing the OPEN_CURSORS parameter; however, I would rather find a solution whereby the cursor is closed after the query is executed. Any ideas? Delphi 2006 BDS, Oracle 10g.

    Read the article

  • SQL Server Concatenate string column value to 5 char long

    - by mrp
    Scenario: I have a table1 (col1 char(5)). A value in table1 may be '001', '01', or '1'. Requirement: whatever the value in col1, I need to retrieve it at 5 characters, with leading '0's concatenated to make it 5 characters long. Technique I applied:

        select right(('00000' + col1), 5) from table1;

    I don't see any reason why it shouldn't work, but it didn't. Can anyone help me achieve the desired result?
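
    A hedged sketch of the likely fix: because col1 is char(5), shorter values are stored blank-padded ('001  '), so the trailing spaces end up on the right of the concatenation and RIGHT(..., 5) just returns the original value. Trimming first makes the trick work.

        -- '001  '  ->  RTRIM  ->  '001'  ->  '00000001'  ->  RIGHT(..., 5)  ->  '00001'
        SELECT RIGHT('00000' + RTRIM(col1), 5) AS padded_col1
          FROM table1;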

    Read the article

  • Creating Two Cascading Foreign Keys Against Same Target Table/Col

    - by alram
    I have the following tables:

        user   (userid int [pk], name varchar(50))
        action (actionid int [pk], description nvarchar(50))

    being referenced by another table that captures the relationship <user1> <action>'s <user2>. I did this with the following table:

        userAction (userActionId int [pk],
                    actionid int [fk: action.actionid],
                    userId1 int [fk references user.userid; on delete/update cascade],
                    userId2 int [fk references user.userid; on delete/update cascade])

    However, when I try to save the userAction table I get an error because I have two cascading FKs against user.userid. Is there any way to remedy this, or must I use a trigger?
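
    A hedged sketch of one common workaround, assuming SQL Server (which the nvarchar column and the multiple-cascade-path error suggest): cascade on only one of the two foreign keys, declare the other as NO ACTION, and clean up the second reference in a trigger or in application code.

        CREATE TABLE userAction (
            userActionId INT IDENTITY(1,1) PRIMARY KEY,
            actionid     INT NOT NULL REFERENCES action(actionid),
            userId1      INT NOT NULL,
            userId2      INT NOT NULL,
            CONSTRAINT FK_userAction_user1 FOREIGN KEY (userId1)
                REFERENCES [user](userid) ON DELETE CASCADE ON UPDATE CASCADE,
            -- The second reference to the same table must not cascade; handle its
            -- cleanup in a trigger or in the application instead.
            CONSTRAINT FK_userAction_user2 FOREIGN KEY (userId2)
                REFERENCES [user](userid) ON DELETE NO ACTION ON UPDATE NO ACTION
        );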

    Read the article
