Search Results

Search found 27368 results on 1095 pages for 'msaccess to sql'.


  • CakePHP auto-convert find() fields?

    - by Razor Storm
    In CakePHP I can grab a model's fields by using the find() method. What if I wish to apply a transformation function to the fields? Is there a way to accomplish this task directly? Suppose I have a model called RaceTime with the fields racerId and timeMillis:

        RaceTime
        +------------+
        | Field      |
        +------------+
        | id         |
        | racerId    |
        | timeMillis |
        +------------+

    timeMillis is an int specifying how long the race took in milliseconds. Obviously, saying a race took 15651 milliseconds isn't very useful to a human reader, and I would like to convert this to a human-readable format. Is there a way to accomplish this directly in find()? Or is the only option to loop through the results after find() finishes?
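    A minimal sketch of the kind of SQL expression such a conversion could lean on, assuming a MySQL backend and a race_times table (the table name and the use of SEC_TO_TIME are assumptions, not something the question specifies); in CakePHP this expression could back a virtual field so find() returns it alongside the raw value.

        -- Hedged sketch (MySQL assumed): expose a readable duration next to the raw milliseconds.
        SELECT id,
               racerId,
               timeMillis,
               SEC_TO_TIME(timeMillis / 1000) AS time_readable
        FROM race_times;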

    Read the article

  • Transfer Data between databases with postgres

    - by user227932
    I need to transfer some data from another database. The old database is called paw1.moviesDB and the new database is paw1. The schema of each table is the following:

        Awards (new DB)
            Id [PK] Serial
            Award

        Nominations (old DB)
            Id [PK] Serial
            nominations

    I want to copy the data from the old DB to the new DB.
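    A hedged sketch of one way to do this when the source really is a separate PostgreSQL database: the dblink extension pulls rows over a connection string and an INSERT ... SELECT writes them into the new table. The connection string, the column mapping (nominations into Award) and the text type are placeholders guessed from the question; if "paw1.moviesDB" is actually a schema inside the same database, a plain schema-qualified INSERT ... SELECT is enough.

        -- Hedged sketch: copy rows across databases with dblink (names are assumptions).
        CREATE EXTENSION IF NOT EXISTS dblink;

        INSERT INTO awards (award)
        SELECT old_data.nominations
        FROM dblink('dbname=paw1_moviesdb',
                    'SELECT nominations FROM nominations')
               AS old_data(nominations text);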

    Read the article

  • Select Query Joined on Two Fields?

    - by btollett
    I've got a few tables in an access database:

        ID | LocationName
        1  | Location1
        2  | Location2

        ID | LocationID | Date  | NumProductsDelivered
        1  | 1          | 12/10 | 3
        2  | 1          | 01/11 | 2
        3  | 1          | 02/11 | 2
        4  | 2          | 11/10 | 1
        5  | 2          | 12/10 | 1

        ID | LocationID | Date  | NumEmployees | EmployeeType
        1  | 1          | 12/10 | 10           | 1 (=Permanent)
        2  | 1          | 12/10 | 3            | 2 (=Temporary)
        3  | 1          | 12/10 | 1            | 3 (=Support)
        4  | 2          | 10/10 | 1            | 1
        5  | 2          | 11/10 | 2            | 1
        6  | 2          | 11/10 | 1            | 2
        7  | 2          | 11/10 | 1            | 3
        8  | 2          | 12/10 | 2            | 1
        9  | 2          | 12/10 | 1            | 3

    What I want to do is pass in the LocationID as a parameter and get back something like the following table. So, if I pass in 2 as my LocationID, I should get:

        Date  | NumProductsDelivered | NumPermanentEmployees | NumSupportEmployees
        10/10 |                      | 1                     |
        11/10 | 1                    | 2                     | 1
        12/10 | 1                    | 2                     | 1

    It seems like this should be a pretty simple query. I really don't even need the first table except as a way to fill in the combo box on the form from which the user chooses which location they want a report for. Unfortunately, everything I've done has resulted in me getting a lot more data than I should be getting. My confusion is in how to set up the join (presumably that's what I'm looking for here) given that I want both the date and locationID to be the same for each row in the result set. Any help would be much appreciated. Thanks.
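    A hedged Access-SQL sketch of one way to avoid the row multiplication: aggregate each fact table per date first, then join the two aggregates on Date. The table names Deliveries and Staffing and the parameter [pLocationID] are invented for illustration (the question never names them), and the LEFT JOIN from the staffing side assumes every delivery date also has staffing rows, as in the sample data; Access can be picky about derived-table syntax, so each subquery can also be saved as its own query and joined.

        SELECT s.[Date],
               d.NumProductsDelivered,
               s.NumPermanentEmployees,
               s.NumSupportEmployees
        FROM (SELECT [Date],
                     SUM(IIf(EmployeeType = 1, NumEmployees, 0)) AS NumPermanentEmployees,
                     SUM(IIf(EmployeeType = 3, NumEmployees, 0)) AS NumSupportEmployees
              FROM Staffing
              WHERE LocationID = [pLocationID]
              GROUP BY [Date]) AS s
        LEFT JOIN (SELECT [Date],
                          SUM(NumProductsDelivered) AS NumProductsDelivered
                   FROM Deliveries
                   WHERE LocationID = [pLocationID]
                   GROUP BY [Date]) AS d
               ON s.[Date] = d.[Date];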

    Read the article

  • How to select distinct rows for a column

    - by Satoru.Logic
    Hi, all. I have a table x that's like the one below:

        id | name | observed_value
        1  | a    | 100
        2  | b    | 200
        3  | b    | 300
        4  | a    | 150
        5  | c    | 300

    I want to make a query so that in the result set I have exactly one record for each name:

        (1, a, 100)
        (2, b, 200)
        (5, c, 300)

    If there are multiple records corresponding to a name, say 'a' in the table above, I just pick one of them. In my current implementation, I make a query like this:

        select x.*
        from x,
             (select distinct name, min(observed_value) as minimum_val
              from x
              group by name) x1
        where x.name = x1.name
          and x.observed_value = x1.minimum_val;

    But I think there may be a better way; please tell me if you know one. Thanks in advance.
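    A hedged alternative, assuming the database supports window functions (the question does not say which engine this is): number the rows per name and keep the first, which picks the lowest observed_value just like the join against the grouped subquery.

        SELECT id, name, observed_value
        FROM (SELECT x.*,
                     ROW_NUMBER() OVER (PARTITION BY name
                                        ORDER BY observed_value, id) AS rn
              FROM x) AS ranked
        WHERE rn = 1;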

    Read the article

  • MySQL Multiple "AND" Query

    - by Mark J
    I have a table with 2 columns (see below). A member can have multiple responses to a question.

        RESPONSES
        ---------
        member_id   INT
        response_id INT

        SAMPLE DATA
        member_id | response_id
        1         | 3
        1         | 5
        2         | 1
        2         | 5
        2         | 9
        3         | 1
        3         | 5
        3         | 6

    What I need to do is query the table for members that meet ALL response criteria. For example, I need to select all members that have a response_id of 1 AND 5. I am using the following query:

        SELECT DISTINCT member_id FROM responses WHERE response_id = 1 AND response_id = 5

    I would expect to get back member_ids 2 and 3. However, I am getting nothing returned. I used EXPLAIN and it shows there is an error in my WHERE clause. What am I doing wrong? Also, is there a function similar to IN where all the criteria must be met in order to return true? Thanks for your help.
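    A hedged sketch of the usual relational-division pattern for this: IN supplies the candidate rows (a single response_id cannot equal 1 and 5 at once, which is why the original WHERE returns nothing), and HAVING checks that both ids are present for the member.

        SELECT member_id
        FROM responses
        WHERE response_id IN (1, 5)
        GROUP BY member_id
        HAVING COUNT(DISTINCT response_id) = 2;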

    Read the article

  • Trigger execution against condition satisfaction

    - by maheshasoni
    I have created this trigger, which should give an error whenever the new rctmemenrolno value on table receipts1 is matched against the memenrolno column of table memmast, but it is giving the error in both cases (matched or not matched). Kindly help me.

        CREATE OR REPLACE TRIGGER HDD_CABLE.trg_rctenrolno
        BEFORE INSERT ON HDD_CABLE.RECEIPTS1
        FOR EACH ROW
        DECLARE
          v_enrolno varchar2(9);
          CURSOR c1 IS SELECT memenrolno FROM memmast;
        BEGIN
          OPEN c1;
          FETCH c1 INTO v_enrolno;
          LOOP
            IF :new.rctmemenrolno <> v_enrolno THEN
              raise_application_error(-20186,'PLEASE ENTER CORRECT ENROLLMENT NO');
              CLOSE c1;
            END IF;
          END LOOP;
        END;
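    A hedged Oracle sketch, assuming the intent suggested by the error message is to reject a receipt whose enrolment number has no match in memmast: a single COUNT(*) lookup against the inserted value replaces the cursor, whose single FETCH and unbounded LOOP are what make the posted version fire regardless of the data.

        CREATE OR REPLACE TRIGGER hdd_cable.trg_rctenrolno
        BEFORE INSERT ON hdd_cable.receipts1
        FOR EACH ROW
        DECLARE
          v_matches PLS_INTEGER;
        BEGIN
          -- existence check instead of a row-by-row cursor comparison
          SELECT COUNT(*)
            INTO v_matches
            FROM memmast
           WHERE memenrolno = :NEW.rctmemenrolno;

          IF v_matches = 0 THEN
            RAISE_APPLICATION_ERROR(-20186, 'PLEASE ENTER CORRECT ENROLLMENT NO');
          END IF;
        END;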

    Read the article

  • SQL queries to determine all values that would satisfy an arbitrary query

    - by jasterm007
    I'm trying to figure out how to efficiently run a set of queries that will provide a new table of all values that would return results for an arbitrary query. Say my table has a schema like:

        id name age city

    What is an efficient way to list all values that would return results for an arbitrary query, say "NOT city=X AND age BETWEEN Y and Z"? My naive approach for this would be to use a script and recurse through all possible combinations of {city, age, age} and see which SELECTs return more than 0 results, but that seems incredibly inefficient. I've also tried building large joins on {city, age, age} as well and basically using that table as an argument list to the query, but that quickly becomes an impossibility for queries on many columns. For simple conjunctive equality queries, i.e. "name=X and age=Y", this is much simpler, as I can do something like

        SELECT name, age, count(*) AS count
        FROM main
        GROUP BY name, age
        HAVING count > 0

    But I'm having difficulty coming up with a general approach for anything more complicated than that. Any pointers in the right direction would be most helpful, thanks.
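    A hedged sketch of one reading of the generalisation: push the arbitrary predicate into a WHERE clause and then GROUP BY the columns of interest, so only value combinations that actually exist and satisfy the predicate come back. The literals here are placeholders standing in for X, Y and Z.

        SELECT city, age, COUNT(*) AS cnt
        FROM main
        WHERE NOT city = 'X' AND age BETWEEN 20 AND 30
        GROUP BY city, age;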

    Read the article

  • What is the output of this code?

    - by user329820
    Hi, I have written a piece of code for you and I want to know the output. I need your help because there is nobody else to help me. I also think that the output is A; is this correct? Thanks.

        declare @v1 varchar(20), @v2 varchar(20)
        select @v1 = 'NULL'
        if @v1 is null and @v2 is null
            select 'A'
        else
            select 'B'
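    A minimal T-SQL sketch of the distinction this snippet turns on: the quoted literal 'NULL' is an ordinary four-character string, not the NULL marker, so IS NULL evaluates differently for a variable assigned 'NULL' than for one left unassigned.

        DECLARE @v1 varchar(20), @v2 varchar(20);
        SELECT @v1 = 'NULL';   -- a string literal; @v1 IS NULL is false
        SELECT @v2 = NULL;     -- genuinely NULL; @v2 IS NULL is true

        SELECT CASE WHEN @v1 IS NULL THEN 'v1 is null' ELSE 'v1 is not null' END,
               CASE WHEN @v2 IS NULL THEN 'v2 is null' ELSE 'v2 is not null' END;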

    Read the article

  • IS NULL vs = NULL in where clause + MSSQL

    - by Nev_Rahd
    Hello. How do I check whether a value IS NULL [or] = @param (where @param is null)? Ex:

        Select column1 from Table1 where column2 IS NULL

    works fine. If I want to replace the comparison value (IS NULL) with @param, how can this be done?

        Select column1 from Table1 where column2 = @param

    works fine as long as @param has a value in it, but if it is null it never finds a record. How can this be achieved?
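    A hedged sketch of the common pattern for this: compare on the value when @param carries one, and fall back to an explicit IS NULL test when @param itself is NULL, since = never matches NULL.

        SELECT column1
        FROM Table1
        WHERE column2 = @param
           OR (@param IS NULL AND column2 IS NULL);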

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE section were placed at the beginning of the SELECT statement? Example:

        SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...]
        FROM customer, transaction
        WHERE customer.id = transaction.id;

    I do know that limiting the list of columns to only the needed ones in a SELECT statement improves performance, as opposed to using SELECT *, because the returned column list is smaller.

    Read the article

  • In the context of an asp.net website, what's the most efficient way to check whether a User has access to a record?

    - by scaramouch
    I have a webpage that you pass in an id parameter (via a querystring), which it then uses to fetch data from a database. Typically, a user would navigate to this page from another page that lists only those records that the user has access to. However, if they go directly to the page by typing in the URL in the Address Bar, they can effectively view any record they like. Eg. If they were to type something like http://localhost/TestSite/ClientAdmin/ManageLocation.aspx?LocationID=5 into their Address Bar, they can access the database record with the LocationID equal to five - even though they shouldn't have access to it. Now, I could solve this by doing a database check every time the page is loaded to see whether the current user has access to the record they're trying to view. However this doesn't seem very efficient given that in most cases a user won't be trying to access a record that isn't theirs. Does anyone have a better suggestion? Thanks.
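    A hedged sketch of the usual compromise: fold the ownership test into the same query that fetches the record, so the page still does a single round trip rather than a separate permission check. Every name here (Locations, LocationName, OwnerUserId, the parameters) is an assumption invented for illustration; zero rows back means "not found or not yours".

        SELECT l.LocationID, l.LocationName
        FROM Locations AS l
        WHERE l.LocationID = @LocationID
          AND l.OwnerUserId = @CurrentUserId;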

    Read the article

  • Multisite Enabling a Table

    - by Joe Fitzgibbons
    I am creating a table (table A) that will have a number of columns (of course), and there will be another table (table B) that holds metadata associated with rows in table A. I am working with a multi-site implementation that has one database for the whole shebang. Rows in table A could belong to any number of sites but must belong to at least one. The problem I have is that I am not sure what the best practice is for defining which sites each row in table A belongs to. I want performance and scalability. There is no finite number of sites going forward; rows in table A could belong to any number of sites in the future. Right now there are only 3. My initial thought is to have a primary site ID in table A, with rows in table B defining additional sites as needed. Another thought is to have a boolean column in table A for each site indicating whether the row belongs to that site. Lastly, I have thought about having another table that maps rows in table A to each site. What is the best way to associate rows in a table with any number of sites, with performance and scalability in mind?
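    A hedged sketch of the third option, which is the usual way to model an open-ended many-to-many between rows and sites; the table and column names are illustrative only, and assume table A and a sites table already exist with integer keys. An index led by site_id keeps "all rows for this site" lookups cheap as the site count grows.

        CREATE TABLE table_a_site (
            row_id  INT NOT NULL REFERENCES table_a(id),
            site_id INT NOT NULL REFERENCES site(id),
            PRIMARY KEY (row_id, site_id)
        );

        CREATE INDEX ix_table_a_site_site ON table_a_site (site_id, row_id);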

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

        1. Automatically generate a CREATE TABLE definition from the class definition
        2. Automatically generate a query to persist an object to the database, given an instance of the object
        3. Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimum modifications/annotations to the class files and minimum external configuration are preferred. Example: Java classes

        //Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    or something similar.

    Read the article

  • Need to read data from an Oracle database with many conditions

    - by randeepsp
    Hi! I have 3 tables: A, B and C. Table A has the columns employee_name and id. Table B is the main table and has the columns id and os_version. Table C has the columns id, package_id and package_version. I want to query the count of employee_name where the ids of tables A and C are matched with the id of table B (which is the main table). I should also get the names of the employees, grouped by the OS version they have and also by the package version.
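    A hedged sketch of one reading of that description: join A and C to B on id and group by the two versions. The normalised column names (os_version, package_version, package_id) are guesses from the prose, and counting distinct names stands in for "the count of employee_name".

        SELECT b.os_version,
               c.package_version,
               COUNT(DISTINCT a.employee_name) AS employee_count
        FROM b
        JOIN a ON a.id = b.id
        JOIN c ON c.id = b.id
        GROUP BY b.os_version, c.package_version;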

    Read the article

  • R equivalent of SELECT DISTINCT on two or more fields/variables

    - by wahalulu
    Say I have a dataframe df with two or more columns, is there an easy way to use unique() or another R function to create a subset of unique combinations of two or more columns? I know I can use sqldf() and write an easy "SELECT DISTINCT var1, var2, ... varN" query, but I am looking for an R way of doing this. It occurred to me to try ftable coerced to a dataframe and use the field names, but I also get the cross tabulations of combinations that don't exist in the dataset:

        uniques <- as.data.frame(ftable(df$var1, df$var2))

    Read the article

  • Help needed for writing a Set Based query for finding the highest marks obtained by the students

    - by priyanka.sarkar_2
    I have the below table:

        declare @t table (id int identity, name varchar(50), sub1 int, sub2 int, sub3 int, sub4 int)
        insert into @t
        select 'name1',20,30,40,50 union all
        select 'name2',10,30,40,50 union all
        select 'name3',40,60,100,50 union all
        select 'name4',80,30,40,80 union all
        select 'name5',80,70,40,50 union all
        select 'name6',10,30,40,80

    The desired output should be:

        Id | Name  | Sub1 | Sub2 | Sub3 | Sub4
        3  | Name3 |      |      | 100  |
        4  | Name4 | 80   |      |      | 80
        5  | Name5 | 80   | 70   |      |
        6  | Name6 |      |      |      | 80

    What I have done so far is:

        ;with cteSub1 as
        (
            select rn1 = dense_rank() over(order by sub1 desc), t.id, t.name, t.sub1 from @t t
        )
        ,cteSub2 as
        (
            select rn2 = dense_rank() over(order by sub2 desc), t.id, t.name, t.sub2 from @t t
        )
        ,cteSub3 as
        (
            select rn3 = dense_rank() over(order by sub3 desc), t.id, t.name, t.sub3 from @t t
        )
        ,cteSub4 as
        (
            select rn4 = dense_rank() over(order by sub4 desc), t.id, t.name, t.sub4 from @t t
        )
        select x1.id, x2.id, x3.id, x4.id,
               x1.sub1, x2.sub2, x3.sub3, x4.sub4
        from (select c1.id, c1.sub1 from cteSub1 c1 where rn1 = 1) as x1
        full join (select c2.id, c2.sub2 from cteSub2 c2 where rn2 = 1) x2 on x1.id = x2.id
        full join (select c3.id, c3.sub3 from cteSub3 c3 where rn3 = 1) x3 on x1.id = x3.id
        full join (select c4.id, c4.sub4 from cteSub4 c4 where rn4 = 1) x4 on x1.id = x4.id

    which is giving me the output as:

        id   | id   | id   | id   | sub1 | sub2 | sub3 | sub4
        5    | 5    | NULL | NULL | 80   | 70   | NULL | NULL
        4    | NULL | NULL | 4    | 80   | NULL | NULL | 80
        NULL | NULL | 3    | NULL | NULL | NULL | 100  | NULL
        NULL | NULL | NULL | 6    | NULL | NULL | NULL | 80

    Help needed. Also how can I reduce the number of CTE's?
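    A hedged T-SQL sketch of one way to collapse the four CTEs: UNPIVOT the four subject columns into (subject, marks) rows, rank once with a single DENSE_RANK() partitioned by subject, and re-pivot the rank-1 rows with conditional aggregation. Against the sample data this reproduces the desired output, but it is a sketch, not a drop-in replacement.

        ;WITH unpivoted AS
        (
            -- one row per (student, subject, mark)
            SELECT id, name, subject, marks
            FROM @t
            UNPIVOT (marks FOR subject IN (sub1, sub2, sub3, sub4)) AS u
        ),
        ranked AS
        (
            SELECT id, name, subject, marks,
                   DENSE_RANK() OVER (PARTITION BY subject ORDER BY marks DESC) AS rn
            FROM unpivoted
        )
        SELECT id, name,
               MAX(CASE WHEN subject = 'sub1' THEN marks END) AS sub1,
               MAX(CASE WHEN subject = 'sub2' THEN marks END) AS sub2,
               MAX(CASE WHEN subject = 'sub3' THEN marks END) AS sub3,
               MAX(CASE WHEN subject = 'sub4' THEN marks END) AS sub4
        FROM ranked
        WHERE rn = 1
        GROUP BY id, name
        ORDER BY id;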

    Read the article

  • One-to-one table relation - is it harmful to keep the relation in both tables?

    - by EBAGHAKI
    I have 2 tables whose rows have a one-to-one relation. To understand the situation, suppose one table holds user information and the other table contains a very specific kind of information, where each user can link to only one of these specific records (think of the second table as characters), and that character can only be assigned to the user who grabs it. Is it against the rules of clean database design to hold the relation key in both tables?

        User table:      user_id, name, age, character_id
        Character table: character_id, shape, user_id

    I have to do it for performance; what do you think about it?
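    A hedged sketch of the usual alternative: keep the foreign key on one side only and declare it UNIQUE, so the schema itself enforces the one-to-one rule and the two copies can never disagree; a join (or a covering index on user_id) serves the lookups the duplicated key was meant to speed up. Table and column spellings are illustrative and assume a users table already exists.

        CREATE TABLE characters (
            character_id INT PRIMARY KEY,
            shape        VARCHAR(50),
            user_id      INT NOT NULL UNIQUE REFERENCES users(user_id)
        );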

    Read the article

  • Update table with index is too slow

    - by pauloya
    Hi, I was watching the Profiler on a live system of our application and I saw that an update statement we run periodically (every second) was quite slow. It took around 400 ms every time. The query includes this update (which is the slow part):

        UPDATE BufferTable
        SET LrbCount = LrbCount + 1, LrbUpdated = getdate()
        WHERE LrbId = @LrbId

    This is the table:

        CREATE TABLE BufferTable(
            LrbId [bigint] IDENTITY(1,1) NOT NULL,
            ...
            LrbInserted [datetime] NOT NULL,
            LrbProcessed [bit] NOT NULL,
            LrbUpdated [datetime] NOT NULL,
            LrbCount [tinyint] NOT NULL,
        )

    The table has 2 indexes (non-unique and non-clustered) with the fields in this order:

        * Index1 - (LrbProcessed, LrbCount)
        * Index2 - (LrbInserted, LrbCount, LrbProcessed)

    When I looked at this I thought the problem would come from Index1, since LrbCount changes a lot and that changes the order of the data in the index. But after deactivating Index1 I saw the query was taking the same time as initially. Then I rebuilt Index1 and deactivated Index2; this time the query was very fast. It seems to me that Index2 should be faster to update, since the order of the data shouldn't change given that the LrbInserted time is not changed. Can someone explain why Index2 is much heavier to update than Index1? Thank you!

    Read the article

  • Filter by attributes PHP script

    - by cosy
    I have the table:

        id | id_products | id_atribut | name   | value
        1  | 13          | 8          | autdio | 2.1
        2  | 13          | 9          | hdd    | 200 Gb
        3  | 13          | 10         | cd-rom | 2
        4  | 20          | 8          | audio  | 2.1

    The problem is: how can I select from this table where id_products = 13 and name = "audio" and value = "2.1" and name = "hdd" and value = "200 gb"? How can I do this?
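    A hedged sketch of the usual way to match several attribute/value pairs at once in this kind of attribute table (a single row can never satisfy both name conditions, which is why a plain AND returns nothing): one OR branch per wanted pair, then a HAVING check that the product matched all of them. The table name attributes is invented for illustration.

        SELECT id_products
        FROM attributes
        WHERE id_products = 13
          AND ((name = 'audio' AND value = '2.1')
            OR (name = 'hdd'   AND value = '200 Gb'))
        GROUP BY id_products
        HAVING COUNT(DISTINCT name) = 2;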

    Read the article

  • Difference between cn.Execute and rs.Update?

    - by every_answer_gets_a_point
    I am connecting to MySQL from Excel using ODBC. The following illustrates how I am updating the recordset:

        With rs
            .AddNew ' create a new record
            ' add values to each field in the record
            .Fields("datapath") = dpath
            .Fields("analysistime") = atime
            .Fields("reporttime") = rtime
            .Fields("lastcalib") = lcalib
            .Fields("analystname") = aname
            .Fields("reportname") = rname
            .Fields("batchstate") = "bstate"
            .Fields("instrument") = "NA"
            .Update ' stores the new record
        End With

    The question is: why is there a need to run cn.Execute after this? Haven't I already updated the recordset with rs.Update?

    Read the article

  • Excel/MySQL: rs.Update not working

    - by every_answer_gets_a_point
    I am updating a table using an ODBC connection from Excel to MySQL. Unfortunately the only column that gets updated is this one:

        .Fields("instrument") = "NA"

    Where I am assigning variables to .Fields, it is putting NULL values! What is going on here? Here's the code:

        Option Explicit

        Dim oConn As ADODB.Connection

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=pas;" & _
                       "Option=3"
        End Sub

        Function esc(txt As String)
            esc = Trim(Replace(txt, "'", "\'"))
        End Function

        Private Sub InsertData()
            Dim dpath, atime, rtime, lcalib, aname, rname, bstate, instrument As String
            Dim rs As ADODB.Recordset
            Set rs = New ADODB.Recordset
            ConnectDB
            With wsBooks
                rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
                Worksheets.Item("Report 1").Select
                dpath = Trim(Range("B2").Text)
                atime = Trim(Range("B3").Text)
                rtime = Trim(Range("B4").Text)
                lcalib = Trim(Range("B5").Text)
                aname = Trim(Range("B6").Text)
                rname = Trim(Range("B7").Text)
                bstate = Trim(Range("B8").Text)
                ' instrument = GetInstrFromXML(wbBook.FullName)
                With rs
                    .AddNew ' create a new record
                    ' add values to each field in the record
                    .Fields("datapath") = dpath
                    .Fields("analysistime") = atime
                    .Fields("reporttime") = rtime
                    .Fields("lastcalib") = lcalib
                    .Fields("analystname") = aname
                    .Fields("reportname") = rname
                    .Fields("batchstate") = bstate
                    .Fields("instrument") = "NA"
                    .Update ' stores the new record
                End With
                ' get the last id
                Set rs = oConn.Execute("SELECT @@identity", , adCmdText)
                'MsgBox capture_id
                rs.Close
                Set rs = Nothing
            End With
        End Sub

    Read the article
