Search Results

Search found 28985 results on 1160 pages for 'sql training'.

Page 691/1160 | < Previous Page | 687 688 689 690 691 692 693 694 695 696 697 698  | Next Page >

  • Getting records from a table based on a filter field and BETWEEN, but also having the OR logic for multiple rows

    - by Pentium10
    I have this table, where I store multiple ids and an age range (def1, def2):

        CREATE TABLE "template_requirements" (
            "_id" INTEGER NOT NULL,
            "templateid" INTEGER,
            "def1" VARCHAR(255),
            "def2" VARCHAR(255),
            PRIMARY KEY("_id")
        )

    with values such as:

        templateid | def1 | def2
        100        | 7    | 25
        200        | 40   | 90
        300        | 7    | 25
        300        | 40   | 60

    As you can see, for templateid 300 we have OR logic: age between 7 and 25, or age between 40 and 60. I want to get all the template ids that are not for a certain age, such as 25. What's the problem? If I run a query like this one:

        SELECT group_concat(templateid)
        FROM template_requirements
        WHERE '25' NOT BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)

    it returns 200, 300, which is wrong: 300 matched on its 40-to-60 row, but it shouldn't be included in the result because the row with the same templateid (7 to 25) fails the NOT BETWEEN condition. What would the correct query be in SQLite? I would like to keep the group_concat.
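
    One possible approach, sketched under the assumptions of the question (SQLite, the table above, and the literal age 25): exclude any templateid that has at least one row covering the age, then concatenate what is left.

        -- Keep only templateids for which NO row covers the given age.
        SELECT group_concat(DISTINCT templateid)
        FROM template_requirements
        WHERE templateid NOT IN (
            SELECT templateid
            FROM template_requirements
            WHERE 25 BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)
        );

    With the sample data this returns only 200, since both 100 and 300 have a 7-to-25 row that covers 25.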

    Read the article

  • MySQL: COUNT with GROUP BY, LEFT JOIN and WHERE clause doesn't return zero values

    - by Paul Norman
    Hi guys, thanks in advance for any help on this topic! I'm sure this has a very simple answer, but I can't seem to find it (not sure what to search on!). A standard count / group by query may look like this:

        SELECT COUNT(`t2`.`name`)
        FROM `table_1` `t1`
        LEFT JOIN `table_2` `t2` ON `t1`.`key_id` = `t2`.`key_id`
        GROUP BY `t1`.`any_col`

    and this works as expected, returning 0 if no rows are found. So does:

        SELECT COUNT(`t2`.`name`)
        FROM `table_1` `t1`
        LEFT JOIN `table_2` `t2` ON `t1`.`key_id` = `t2`.`key_id`
        WHERE `t1`.`another_column` = 123

    However:

        SELECT COUNT(`t2`.`name`)
        FROM `table_1` `t1`
        LEFT JOIN `table_2` `t2` ON `t1`.`key_id` = `t2`.`key_id`
        WHERE `t1`.`another_column` = 123
        GROUP BY `t1`.`any_col`

    only works if there is at least one matching row in table_1, and fails miserably, returning an empty result set, if there are zero rows. I would really like this to return 0! Can anyone enlighten me on this? Beer can be provided in exchange if you are in London ;-)
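
    One common workaround, sketched with the table and column names from the question: when the WHERE clause matches no rows, GROUP BY produces no groups at all, so wrapping the grouped query in an outer aggregate turns "no rows" back into a single 0.

        -- SUM over zero rows is NULL; COALESCE converts that to 0.
        SELECT COALESCE(SUM(cnt), 0) AS total
        FROM (
            SELECT COUNT(`t2`.`name`) AS cnt
            FROM `table_1` `t1`
            LEFT JOIN `table_2` `t2` ON `t1`.`key_id` = `t2`.`key_id`
            WHERE `t1`.`another_column` = 123
            GROUP BY `t1`.`any_col`
        ) AS grouped;

    Note that this collapses the per-group counts into one total; if a 0 is needed per expected group, the groups have to come from a separate driving table that is LEFT JOINed to the counts.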

    Read the article

  • How to count the number of occurrences of all different values in a database column?

    - by drasto
    I have a Postgres database with a table that has, say, 10 columns. The fifth column is called column5. There are 100 rows in the table, and the possible values of column5 are c5value1, c5value2, c5value3 ... c5value29, c5value30. I would like to print out a table that shows how many times each value occurs, so it would look like this:

        value (of column5) | number of occurrences of the value
        c5value1           | 1
        c5value2           | 5
        c5value3           | 3
        c5value4           | 9
        c5value5           | 1
        c5value6           | 1
        ...

    What is the command that does that? Thanks for the help.
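
    A minimal sketch of the usual GROUP BY counting query; the table name mytable is an assumption, since the question doesn't give one:

        -- One row per distinct value of column5, with its frequency.
        SELECT column5 AS value,
               COUNT(*) AS occurrences
        FROM mytable
        GROUP BY column5
        ORDER BY column5;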

    Read the article

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will produce a new table of all values that would return results for an arbitrary query. Say my table has a schema like:

        id, name, age, city

    What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y AND Z"? My naive approach would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} and basically using that table as an argument list to the query, but that quickly becomes impossible for queries on many columns. For simple conjunctive equality queries, i.e. "name=X AND age=Y", this is much simpler, as I can do something like:

        SELECT name, age, count(*) AS count
        FROM main
        GROUP BY name, age
        HAVING count > 0

    But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.
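
    One way to generalize the grouped count, sketched against the main table from the question (the concrete city value and age bounds below are placeholders): apply the arbitrary predicate first, then group on the columns of interest, so only value combinations that actually satisfy it appear.

        -- Every (name, age, city) combination that would return rows
        -- for the arbitrary filter, with how many rows each one matches.
        SELECT name, age, city, COUNT(*) AS matches
        FROM main
        WHERE NOT city = 'X' AND age BETWEEN 18 AND 30
        GROUP BY name, age, city;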

    Read the article

  • How to handle Foreign Keys with Entity Framework

    - by Jack Marchetti
    I have two entities, Group and Pool. A Group can create many Pools, so I set up my Pool table to have a GroupID foreign key. My code:

        using (entity _db = new entity())
        {
            Pool p = new Pool();
            p.Name = "test";
            p.Group.ID = "5";
            _db.AddToPool(p);
        }

    This doesn't work; I get a null reference exception on p.Group. How do I go about creating a new "Pool" and associating a GroupID?

    Read the article

  • How to group by having the same id?

    - by simpatico
    Hello, I want the customerid of customers who bought product X and Y and Z, from the following schema:

        Sales(customerid, productName, rid)

    I could do the intersection:

        SELECT customerid FROM sales WHERE productName = 'X'
        INTERSECT
        SELECT customerid FROM sales WHERE productName = 'Y'
        INTERSECT
        SELECT customerid FROM sales WHERE productName = 'Z'

    Is this the best I can do?
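
    A common single-pass alternative, sketched against the Sales schema above: group by customer and require that all three products appear.

        -- Customers whose purchases cover all three products.
        SELECT customerid
        FROM sales
        WHERE productName IN ('X', 'Y', 'Z')
        GROUP BY customerid
        HAVING COUNT(DISTINCT productName) = 3;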

    Read the article

  • Facing trouble in retrieving relevant records

    - by Umaid
    Whenever I run this query in SQLite it returns 33 records instead of 3:

        SELECT * FROM MainCategory
        WHERE Month = 'May'
          AND Day IN (
            (CAST(strftime('%d', date('now','-1 day')) AS INTEGER)),
            (CAST(strftime('%d', date('now')) AS INTEGER)),
            (CAST(strftime('%d', date('now','+1 day')) AS INTEGER))
          );

    I am interested in fetching only the 3 records of the current month (yesterday, today and tomorrow), but I am unable to do so, so please assist. Please note: if you can't assist, please don't post an irrelevant answer. I have also tried to simplify it, without success:

        SELECT day, month FROM MainCategory
        WHERE Month = 'May'
          AND day IN ((date('now','-1 day')), (date('now')), (date('now','+1 day')))
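
    One way to sidestep the day-number comparison entirely, sketched under the assumption (not stated in the question) that the table also stores a full date column, here called entry_date, in ISO format:

        -- Compare whole dates instead of separate day/month parts.
        SELECT *
        FROM MainCategory
        WHERE date(entry_date) BETWEEN date('now', '-1 day') AND date('now', '+1 day');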

    Read the article

  • Transfer Data between databases with postgres

    - by user227932
    I need to transfer some data from another database. The old database is called paw1.moviesDB and the new database is paw1. The schema of each table is the following:

        Awards (table in the new DB)
            Id [PK] Serial
            Award

        Nominations (table in the old DB)
            Id [PK] Serial
            nominations

    I want to copy the data from the old DB to the new DB.
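
    A minimal sketch, under two assumptions the question leaves open: that moviesDB is reachable as a schema from the paw1 database (PostgreSQL cannot query across databases directly; otherwise dblink, a foreign data wrapper, or pg_dump would be needed), and that the old nominations values should land in the new Award column.

        -- Copy rows from the old table into the new one.
        INSERT INTO Awards (Id, Award)
        SELECT Id, nominations
        FROM moviesDB.Nominations;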

    Read the article

  • what is the output of this code?

    - by user329820
    Hi, I have written a small piece of code and I want to know its output. I need your help because there is nobody here to help me. I think the output is A; is this correct? Thanks.

        declare @v1 varchar(20), @v2 varchar(20)
        select @v1 = 'NULL'
        if @v1 is null and @v2 is null
            select 'A'
        else
            select 'B'
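
    A small sketch that makes the distinction visible, assuming SQL Server / T-SQL as in the code above: the literal 'NULL' is a four-character string, not a NULL value, so the IS NULL test on @v1 is false and the code selects 'B', not 'A'.

        declare @v1 varchar(20), @v2 varchar(20)
        select @v1 = 'NULL'   -- @v1 now holds the string 'NULL'
        select @v2 = NULL     -- @v2 holds an actual NULL value
        select case when @v1 is null then 'v1 is NULL' else 'v1 is not NULL' end,
               case when @v2 is null then 'v2 is NULL' else 'v2 is not NULL' end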

    Read the article

  • Client to server data upload

    - by RickBowden
    I'm trying to design a system similar to traditional server monitoring systems like MOM, Tivoli, and OpenView, where an agent records data and uploads it to a central database once a day, but is also able to send immediate alerts back to the server. I'm not sure what the best methodology for this might be. I've started looking at Microsoft Sync Services, but I'm not sure if it will fit my needs. I'm using VS2008 and C#. Does anyone have any experience or ideas about how I should go about this task?

    Read the article

  • how to select distinct rows for a column

    - by Satoru.Logic
    Hi, all. I have a table x that's like the one below:

        id | name | observed_value
        1  | a    | 100
        2  | b    | 200
        3  | b    | 300
        4  | a    | 150
        5  | c    | 300

    I want to make a query so that the result set has exactly one record per name:

        (1, a, 100)
        (2, b, 200)
        (5, c, 300)

    If there are multiple records corresponding to a name, say 'a' in the table above, I just pick one of them. My current implementation makes a query like this:

        SELECT x.*
        FROM x,
             (SELECT DISTINCT name, MIN(observed_value) AS minimum_val
              FROM x
              GROUP BY name) x1
        WHERE x.name = x1.name
          AND x.observed_value = x1.minimum_val;

    But I think there may be a better way; please tell me if you know one. Thanks in advance.
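
    A shorter formulation, sketched under the assumption that the database supports window functions (e.g. PostgreSQL, or MySQL 8+): rank the rows per name and keep the first one.

        -- One row per name, keeping the row with the smallest observed_value.
        SELECT id, name, observed_value
        FROM (
            SELECT id, name, observed_value,
                   ROW_NUMBER() OVER (PARTITION BY name ORDER BY observed_value) AS rn
            FROM x
        ) ranked
        WHERE rn = 1;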

    Read the article

  • Is there a way to force Report Builder to use "WITH (NOLOCK)" in the queries it generates?

    - by Joe Pineda
    Hi. At work, users are very happy to generate their own reports using Reporting Services' Report Builder. But, alas, the queries it generates are very inefficient, and they don't use "WITH (NOLOCK)" - slowing things down for everyone. These are reports that really do need to run against the latest data - they can't be offloaded to the reporting server. And since they query very specific, detailed data, hypercubes are of no use here. So the question is: is there a way to configure Report Builder's Data Models so the queries it generates always use "WITH (NOLOCK)" when querying a table?
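
    For comparison, one commonly suggested workaround, sketched on the assumption that a batch can be run at the start of the report's data source session (this is an assumption about the environment, not a documented Report Builder setting): session-level READ UNCOMMITTED has the same effect as putting WITH (NOLOCK) on every table in the query.

        -- Applies dirty reads to all tables touched by the session.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;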

    Read the article

  • Problem with sending "setcookie" first in PHP code

    - by Camran
    According to this manual: http://us2.php.net/setcookie I have to set the cookie before any other output. Here is my cookie code:

        if (isset($_COOKIE['watched_ads'])) {
            $expir = time() + 1728000; // 20 days
            $ad_arr = unserialize($_COOKIE['watched_ads']);
            $arr_elem = count($ad_arr);
            if (in_array($ad_id, $ad_arr) == FALSE) {
                if ($arr_elem > 10) {
                    array_shift($ad_arr);
                }
                $ad_arr[] = $ad_id;
                setcookie('watched_ads', serialize($ad_arr), $expir, '/');
            }
        } else {
            $expir = time() + 1728000; // 20 days
            $ad_arr[] = $ad_id;
            setcookie('watched_ads', serialize($ad_arr), $expir, '/');
        }

    As you can see, I am using variables when setting the cookie. The variables come from a mysql_query, and I have to do the query first. But then, if I do, I get an error message: "Cannot modify header information - headers already sent by ...". The error points to the line where I set the cookie above. What should I do?

    Read the article

  • In the context of an asp.net website, what's the most efficient way to check whether a User has access to a record?

    - by scaramouch
    I have a webpage that you pass an id parameter to (via a querystring), which it then uses to fetch data from a database. Typically, a user would navigate to this page from another page that lists only those records the user has access to. However, if they go directly to the page by typing in the URL in the Address Bar, they can effectively view any record they like. E.g. if they were to type something like http://localhost/TestSite/ClientAdmin/ManageLocation.aspx?LocationID=5 into their Address Bar, they can access the database record with the LocationID equal to five - even though they shouldn't have access to it. Now, I could solve this by doing a database check every time the page is loaded to see whether the current user has access to the record they're trying to view. However, this doesn't seem very efficient given that in most cases a user won't be trying to access a record that isn't theirs. Does anyone have a better suggestion? Thanks.
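
    One option is to fold the authorization check into the data query itself, so it costs no extra round trip; a sketch with illustrative table and column names (they are not from the question):

        -- Returns the record only when it belongs to the current user;
        -- zero rows means "not found or not yours".
        SELECT l.LocationID, l.Name
        FROM Locations AS l
        WHERE l.LocationID = @LocationID
          AND l.OwnerUserID = @CurrentUserID;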

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten fine, as the nesting is tree-like and contains no cycles (and the leaf elements are eventually primitives/Strings). It is preferred that the solution create one table per data type, and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

        1. Automatically generate a CREATE TABLE definition from the class definition
        2. Automatically generate a query to persist an object to the database, given an instance of the object
        3. Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimal modifications/annotations to the class files and minimal external configuration are preferred. Example Java classes:

        // Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to:

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    or something similar.

    Read the article

  • select records from table in the order in which i inserted

    - by echo
    Consider a table as follows:

        EmployeeId | Name | Phone_Number

    Now I insert 10 records... When I query them back with

        SELECT * FROM myTable

    they are not returned in the order I inserted them. I could obviously keep an auto-increment index and ORDER BY that index, but I don't want to alter the table. How can I do this without altering the table?
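
    For context, a SQL result set has no guaranteed order unless an ORDER BY is given, so without some column that records insertion order there is nothing reliable to sort on. A sketch of what the query would look like if such a column existed (insertion_id here is illustrative, e.g. an auto-increment key):

        -- Requires a column populated at insert time.
        SELECT EmployeeId, Name, Phone_Number
        FROM myTable
        ORDER BY insertion_id;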

    Read the article

  • is Payment table needed when you have an invoice table like this?

    - by EBAGHAKI
    This is my invoice table:

        Invoice:
            invoice_id
            creation_date
            due_date
            payment_date
            status enum('not paid', 'paid', 'expired')
            user_id
            total_price

    I wonder if it is useful to have a payment table in order to record user payments for invoices. The payment table could look like this:

        Payment:
            payment_id
            payment_date
            invoice_id
            price_paid
            status enum('successful', 'not successful')
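
    A sketch of the payment table as DDL, assuming MySQL-style syntax (the enum columns suggest it); the column names come from the question, while the types and constraints are assumptions. A separate payment table is what allows several partial or failed payments to be recorded against one invoice.

        CREATE TABLE payment (
            payment_id   INT AUTO_INCREMENT PRIMARY KEY,
            payment_date DATETIME NOT NULL,
            invoice_id   INT NOT NULL,
            price_paid   DECIMAL(10,2) NOT NULL,
            status       ENUM('successful', 'not successful') NOT NULL,
            FOREIGN KEY (invoice_id) REFERENCES invoice (invoice_id)
        );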

    Read the article

  • Cakephp Autoconvert find() fields?

    - by Razor Storm
    In CakePHP I can grab a model's fields by using the find() method. What if I wish to apply a transformation function to the fields? Is there a way to accomplish this directly? Suppose I have a model called RaceTime with the fields racerId and timeMillis:

        RaceTime
        +------------+
        | Field      |
        +------------+
        | id         |
        | racerId    |
        | timeMillis |
        +------------+

    timeMillis is an int specifying how long the race took in milliseconds. Obviously, saying a race took 15651 milliseconds isn't very useful to a human reader, and I would want to convert this to a human-readable format. Is there a way to accomplish this directly in find()? Or is the only option to loop through the results after find() finishes?
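
    The conversion can also be pushed into SQL itself (which is how CakePHP virtual fields are typically expressed); a sketch assuming MySQL, with race_times as an assumed table name:

        -- Formats 15651 ms as '00:00:15.651'.
        SELECT racerId,
               CONCAT(SEC_TO_TIME(FLOOR(timeMillis / 1000)), '.',
                      LPAD(timeMillis % 1000, 3, '0')) AS readable_time
        FROM race_times;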

    Read the article

  • IS NULL vs = NULL in where clause + MSSQL

    - by Nev_Rahd
    Hello. How do I check whether a value IS NULL or equals @param, when @param itself may be NULL? For example:

        SELECT column1 FROM Table1 WHERE column2 IS NULL

    works fine. If I want to replace the IS NULL comparison with @param:

        SELECT column1 FROM Table1 WHERE column2 = @param

    this works fine while @param has some value in it, but if @param is NULL it never finds a record. How can this be achieved?
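
    The usual pattern, sketched for SQL Server as indicated by the question title: spell out the NULL case explicitly, since column2 = NULL is never true.

        -- Matches rows where column2 equals @param, or both are NULL.
        SELECT column1
        FROM Table1
        WHERE column2 = @param
           OR (column2 IS NULL AND @param IS NULL);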

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE section were placed at the beginning of the SELECT statement? Example:

        SELECT customer.id,
               transaction.id,
               transaction.efective_date,
               transaction.a,
               [...]
        FROM customer, transaction
        WHERE customer.id = transaction.id;

    I do know that limiting the list of columns in a SELECT statement to only the ones needed improves performance, as opposed to using SELECT *, because the resulting column list is smaller.

    Read the article

  • mysql: managing memory usage

    - by every_answer_gets_a_point
    I am doing a DELETE with a LIKE condition. My key buffer is 25 MB and the sort buffer size is 256 KB, and the delete has been taking over 2 hours. Should I increase the memory settings? There are about 50 MB of data in the table I am deleting from, which is about 500,000 rows. Is there anything else I can do on the administration side to speed up this delete?
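
    One common mitigation, sketched with an illustrative table and column name (the question doesn't give them), assuming MySQL: deleting in smaller batches keeps each statement and its locks short, which often helps more than larger buffers. Making sure the LIKE pattern has a fixed prefix that can use an index matters too.

        -- Repeat until zero rows are affected.
        DELETE FROM my_table
        WHERE some_column LIKE 'prefix%'
        LIMIT 10000;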

    Read the article
