Search Results

Search found 31421 results on 1257 pages for 'entity sql'.


  • outer join for parent child chain

    - by dotnetcoder
    Considering the tables and relationships below: parent --1:many-- children --1:many-- subchildren. A parent may or may not have children records; children always have subchildren records. I want to write a query to select parent names where any of parent.name, children.name, or subchildren.name matches a search term. Here I understand I have to do a left outer join between parent and children, but what kind of join should I put between children and subchildren?
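
    A minimal sketch of one possible shape (table, column, and key names are assumed, since the post doesn't give the full schema). Left-joining both levels keeps parents even when the chain is empty; an inner join between children and subchildren would only be safe if nested inside the outer join, e.g. parent LEFT JOIN (children JOIN subchildren), since written flat it silently drops parents without children:

        SELECT DISTINCT p.name
        FROM parent p
        LEFT JOIN children c    ON c.parent_id = p.id   -- parents without children survive
        LEFT JOIN subchildren s ON s.child_id  = c.id   -- NULL-safe even when c is NULL
        WHERE p.name LIKE '%term%'
           OR c.name LIKE '%term%'
           OR s.name LIKE '%term%';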

    Read the article

  • Replace into equivalent for postgresql and then autoincrementing an int

    - by Mohamed Ikal Al-Jabir
    Okay, no seriously, if a PostgreSQL guru can help out: I'm just getting started. Basically what I want is a simple table like this:

        CREATE TABLE schema.searches
        (
          search_id serial NOT NULL,
          search_query character varying(255),
          search_count integer DEFAULT 1,
          CONSTRAINT pkey_search_id PRIMARY KEY (search_id)
        )
        WITH ( OIDS=FALSE );

    I need something like MySQL's REPLACE INTO. I don't know if I have to write my own procedure or something? Basically: check if the query already exists; if so, just add 1 to the count; if not, add it to the db. I could do this in my PHP code, but I'd rather have all of that done in the Postgres C engine. Thanks for helping
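
    For what it's worth, a sketch of how this can look on PostgreSQL 9.5+ with INSERT ... ON CONFLICT, assuming a UNIQUE constraint is added on search_query (the DDL above doesn't have one); older versions need a function that tries UPDATE first and falls back to INSERT:

        -- prerequisite (not in the original DDL):
        -- ALTER TABLE schema.searches ADD CONSTRAINT uq_search_query UNIQUE (search_query);
        INSERT INTO schema.searches (search_query)
        VALUES ('some query')
        ON CONFLICT (search_query)
        DO UPDATE SET search_count = searches.search_count + 1;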

    Read the article

  • Many to Many Association Tables - Is it customary to put additional columns in these tables?

    - by Randy Minder
    We've encountered the following situation in our database. We have table 'A' and table 'B', which have a M2M relationship. The association table is named 'AB' and contains a FK column to table 'A' and a FK column to table 'B'. Now we've identified a need to store additional data about this association. For example, a date when the association occurred, who made the association, etc. We've decided to put these additional columns in the 'AB' association table. However, something tells me this is frowned upon by database purists. On the other hand, it makes no sense to us to create yet another table to store this associated data. What's the prevailing thought on this?
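
    For illustration, a sketch of what the widened association table might look like (the extra column names and types are invented):

        CREATE TABLE AB (
            a_id        INT          NOT NULL REFERENCES A (a_id),
            b_id        INT          NOT NULL REFERENCES B (b_id),
            created_on  TIMESTAMP    NOT NULL,   -- when the association occurred
            created_by  VARCHAR(50)  NOT NULL,   -- who made the association
            PRIMARY KEY (a_id, b_id)
        );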

    Read the article

  • Linked Measure Groups and Local Dimensions

    - by ekoner
    Mulling over something I've been reading up on. According to Chris Webb, a linked measure group can only be used with dimensions from the same database as the source measure group. So I took this to mean that as long as two cubes share a database, a linked measure group can be used with a dimension. So I created a new cube and added a local measure group, a local dimension and a linked measure group. However, I can't create a relationship between the linked measure group and the local dimension even though they are within the same database. I get the message below: Regular relationships in the current database between non-linked (local) dimensions and linked measure groups cannot be edited. These relationships can only be created through the wizard. This dialog can be used to delete these relationships. I see that I can go to the original cube and add the dimension there, but does the message above mean I have an alternative? I just know it's going to be something simple and trivial! Thanks for reading.

    Read the article

  • How to give weight to full matches over partial matches (PostgreSQL)

    - by kagaku
    I've got a query that takes an input and searches for the closest match on zipcode/region/city/metrocode in a location table containing a few tens of thousands of entries (should be nearly every city in the US). The query I'm using is:

        SELECT metrocode, region, postalcode, region_full, city
        FROM dv_location
        WHERE ( region ILIKE '%Chicago%'
                OR postalcode ILIKE '%Chicago%'
                OR city ILIKE '%Chicago%'
                OR region_full ILIKE '%Chicago%' )
          AND metrocode IS NOT NULL

    Odd thing is, the result set I'm getting back looks like this:

        metrocode;region;postalcode;region_full;city
        862;CA;95712;California;Chicago Park
        862;CA;95712;California;Chicago Park
        602;IL;60611;Illinois;Chicago
        602;IL;60610;Illinois;Chicago

    What am I doing wrong? My thinking is that Chicago would have greater weight than Chicago Park, since Chicago is an exact match to the term (even though I'm asking for a wildcard match on the term).
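
    ILIKE does no relevance scoring; every matching row ranks equally. One common trick (a sketch, not tested against this schema) is to order by a boolean exact-match expression, since PostgreSQL sorts true after false:

        SELECT metrocode, region, postalcode, region_full, city
        FROM dv_location
        WHERE ( region ILIKE '%Chicago%' OR postalcode ILIKE '%Chicago%'
                OR city ILIKE '%Chicago%' OR region_full ILIKE '%Chicago%' )
          AND metrocode IS NOT NULL
        ORDER BY (city ILIKE 'Chicago') DESC,  -- exact city matches first
                 city;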

    Read the article

  • sqlite3 timestamp column

    - by Flavius
    Hi, I feel stupid, but I can't get a TIMESTAMP column to be shown in a human-readable way in a SELECT. I could do that in MySQL, but not in sqlite3. Could someone show me an example, please? Thanks
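
    A sketch of the usual answers, assuming a table t with a timestamp column ts; which one applies depends on how the value was stored:

        -- ts stored as ISO-8601 text or a julian day number:
        SELECT datetime(ts) FROM t;
        -- ts stored as a unix epoch integer:
        SELECT datetime(ts, 'unixepoch') FROM t;
        -- custom formatting:
        SELECT strftime('%Y-%m-%d %H:%M', ts) FROM t;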

    Read the article

  • How to monitor traffic between IIS and MSSQL

    - by kockiren
    Hello @all, I am trying to check how much traffic flows between the MSSQL server and the IIS server in different locations. There is one IPCop in every location. I download the tcpdump file from one firewall and search for DST=ipmssql and SRC=ipIIS, but I did not find the IP of the database server, even though there is traffic between both. Any suggestions why I did not find the IP address of the MSSQL server? Is this a configuration failure in IPCop, or is the traffic between IIS and MSSQL just that strange :-) Regards, Rene

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing we would like to do would transform table content based on a granularity of minutes, rather than the current production event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
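
    One way to do it entirely in MySQL (a sketch; events, production_minutes, and the integer helper table ints are invented names, and ints has to be populated with 0..N beforehand):

        -- ints(n) is an assumed helper table holding integers 0, 1, 2, ..., N
        INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id, e.work_id, e.machine_id,
               DATE_FORMAT(e.event_start_time + INTERVAL i.n MINUTE, '%Y-%m-%d %H:%i'),
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
        FROM events e
        JOIN ints i
          ON i.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    For the sample row, TIMESTAMPDIFF gives 15 full minutes, so 16 rows are produced and 2120 / 16 rounds to 133, matching the expected output above.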

    Read the article

  • Jasper error: Caused by SQLServerException: Transaction (Process ID 58) was deadlocked on thread | c

    - by Saky
    I got the above error in my Jasper report mail. The query used in the report is quite complicated (for me). Reading different posts, I conclude that to solve this I have to change the query to:

        SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
        GO
        BEGIN TRANSACTION
        ... my query ...
        COMMIT TRANSACTION

    I wonder if this is the correct way to solve the error and whether it has any side effects? Has this happened to anyone with Jasper reports? Does anyone know if a better solution to the problem exists? (I have not yet tested the above solution, so any insight on this would be helpful.)

    Read the article

  • Getting an entry before and after a given entry in a Django Queryset

    - by Vernon
    I am creating a simple blog as part of a website and I am getting stuck on something that I am assuming is simple. If I call any blog post, say by its title, from a queryset, how can I get the entry before and after that post in published order? I can iterate over the whole thing, get the position of the entry I have and use that to call the one before and the one after, but that is a long bit of code for something that I am sure I can do more simply. What I want would be something like this:

        next_post = Posts.objects.filter(title=current_title).order_by("-published")[-1]

    Of course, because of the filter, it is not going to work, but that should give you the idea of what I am looking for.

    Read the article

  • deleting records from multiple tables at a time with a single query in sqlserver2005

    - by sudhavamsikiran
    Hi, I want to delete records from child tables as well as the parent table within a single query. Please find the query below; here responseheader is the parent table and responseid is the primary key.

        DELETE FROM responseheader
        FROM responseheader
        INNER JOIN responsepromotion ON responseheader.responseid = responsepromotion.ResponseID
        INNER JOIN responseext ON responsepromotion.ResponseID = responseext.ResponseID
        WHERE responseheader.responseid IN ('67D8B9E8-BAD2-42E6-BAEA-000025D56253')

    But it's throwing an error. Can anyone help me find the correct query?
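
    A DELETE statement in SQL Server affects only one table, so without ON DELETE CASCADE on the foreign keys, the usual pattern is three statements with the children deleted first (a sketch using the names from the post):

        DELETE FROM responseext       WHERE ResponseID IN ('67D8B9E8-BAD2-42E6-BAEA-000025D56253');
        DELETE FROM responsepromotion WHERE ResponseID IN ('67D8B9E8-BAD2-42E6-BAEA-000025D56253');
        DELETE FROM responseheader    WHERE responseid IN ('67D8B9E8-BAD2-42E6-BAEA-000025D56253');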

    Read the article

  • access: print report question

    - by I__
    Here's the design view of my report (screenshots not shown). How do I force it to print only one set of these controls per page? Currently it is printing several sets on a single page.

    Read the article

  • Error: The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value

    - by CPM
    I know that there are similar questions like this on the forum; however, I am still having problems updating a datetime field in the database. I don't get any problems when inserting, but I do get problems when updating, and I am formatting the value the same way, like this:

        e.Values.Item("SelectionStartDate") = Format(startdate, "yyyy-MM-dd") + " " + startTime1 + ".000"

    startTime1 is of type string. I have tried different solutions that I came across on the internet but still get this error. Please help. Thanks in advance
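
    One known pitfall worth checking: for the datetime type, SQL Server interprets 'yyyy-MM-dd hh:mm:ss' with a space according to the session's SET LANGUAGE / SET DATEFORMAT settings, while the ISO 8601 form with a 'T' separator is always read the same way. A sketch (table and column names invented):

        -- unambiguous regardless of SET LANGUAGE / SET DATEFORMAT:
        UPDATE MyTable
        SET SelectionStartDate = '2010-05-14T09:30:00.000'
        WHERE MyKey = 1;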

    Read the article

  • how to do multiple replaces throughout a table in ms-access

    - by silverkid
    I am a little confused about the best way to replace all occurrences of (1) blanks, (2) '-', and (3) 'NA' in all columns of TableA with the question mark ('?') character.

    Sample row in original TableA:

        444586 RAUR <blank> 8 570 NA - 13 - SCHS299 MP 339 70 EN <blank>

    Same row in expected TableA:

        444586 RAUR ? 8 570 ? ? 13 ? SCHS299 MP 339 70 EN ?

    Please help me out; I can't use the Find/Replace toolbar of Access.
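
    Short of VBA, this would be one UPDATE query per column; a sketch for a single column (Field1 stands in for each real column name):

        UPDATE TableA
        SET Field1 = '?'
        WHERE Trim(Field1 & '') IN ('', '-', 'NA');

    The & '' concatenation folds Nulls into empty strings, so genuinely blank (Null) values are caught by the same test.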

    Read the article

  • Why are DB constraints not added during table creation?

    - by Pratik
    Hi All, What is the difference between these two ways of table creation?

        CREATE TABLE TABLENAME(
            field1....
            field2...
            add constraint constraint1;
            add constraint constraint2;
        )

    AND

        CREATE TABLE TABLENAME(
            field1....
            field2...
        )
        ALTER TABLE TABLENAME add constraint1
        ALTER TABLE TABLENAME add constraint2

    Moreover, the first script fails in SQL*Plus but passes in SQL Developer. Thanks! Pratik
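
    For what it's worth, inside CREATE TABLE the keyword is just CONSTRAINT; ADD CONSTRAINT only exists in ALTER TABLE, which is why the first form is rejected. A corrected sketch with invented column and constraint definitions:

        CREATE TABLE tablename (
            field1 NUMBER,
            field2 VARCHAR2(30),
            CONSTRAINT constraint1 PRIMARY KEY (field1),
            CONSTRAINT constraint2 UNIQUE (field2)
        );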

    Read the article

  • Convert a value based on range

    - by Chris
    I need to convert a number to another value based on a range, i.e.:

        7   = "A"
        106 = "I"

    I have a range like this:

        from  to        return-val
        1     17        A
        17    35        B
        35    38        C
        38    56        D
        56    72        E
        72    88        F
        88    98        G
        98    104       H
        104   115       I
        115   120       J
        120   123       K
        123   129       L
        129   infinity  M

    The values are fixed and do not change. I was thinking a lookup table would be required, but is there a way it could be done with a function or an analytic function inside of Oracle?
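
    Since the boundaries are fixed, a searched CASE avoids the lookup table entirely. A sketch assuming each range's upper bound is exclusive (the listed ranges overlap at the boundaries, so that rule has to be decided); val and mytable are invented names:

        SELECT val,
               CASE
                 WHEN val < 17  THEN 'A'
                 WHEN val < 35  THEN 'B'
                 WHEN val < 38  THEN 'C'
                 WHEN val < 56  THEN 'D'
                 WHEN val < 72  THEN 'E'
                 WHEN val < 88  THEN 'F'
                 WHEN val < 98  THEN 'G'
                 WHEN val < 104 THEN 'H'
                 WHEN val < 115 THEN 'I'
                 WHEN val < 120 THEN 'J'
                 WHEN val < 123 THEN 'K'
                 WHEN val < 129 THEN 'L'
                 ELSE 'M'
               END AS return_val
        FROM mytable;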

    Read the article

  • mysql - join tables by unique field

    - by Qiao
    I have two tables with the same structure:

        id  name
        1   Merry
        2   Mike

    and

        id  name
        1   Mike
        2   Alis

    I need to join the second table to the first while keeping the names unique, so that the result is:

        id  name
        1   Merry
        2   Mike
        3   Alis

    Is it possible to do this with a MySQL query, without using a PHP script?
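
    A sketch, with t1, t2, and merged standing in for the real table names: UNION (as opposed to UNION ALL) removes the duplicate names, and inserting the result into a table whose id is AUTO_INCREMENT produces the renumbering:

        -- merged: id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(...) (assumed)
        INSERT INTO merged (name)
        SELECT name FROM t1
        UNION
        SELECT name FROM t2;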

    Read the article

  • Update table variable with function

    - by Joris
    I got a table variable @ReQ that I want updated using a table-valued function. Now, I think I do the update wrong, because my function works... The function:

        ALTER FUNCTION [dbo].[usf_GetRecursiveFoobar]
        (
            @para int,
            @para datetime,
            @para varchar(30)
        )
        RETURNS @ReQ TABLE
        (
            Onekey int,
            Studnr nvarchar(10),
            Stud int,
            Description nvarchar(32),
            ECTSGot decimal(5,2),
            SBUGot decimal(5,0),
            ECTSmax decimal(5,2),
            SBUmax decimal(5,0),
            IsFree bit,
            IsGot int,
            DateGot nvarchar(10),
            lvl int,
            path varchar(max)
        )
        AS
        BEGIN;
            WITH RQ AS
            (
                --RECURSIVE QUERY
            )
            INSERT @ReQ
            SELECT RQ.Onekey, RQ.Studnr, RQ.Stud, RQ.Description, RQ.ECTSGot, RQ.SBUGot,
                   RQ.ECTSmax, RQ.SBUmax, RQ.IsFree, RQ.IsGot, RQ.DatumGot, RQ.lvl, RQ.path
            FROM RQ
            RETURN
        END

    Now, when I run a simple query:

        DECLARE @ReQ TABLE
        (
            OnderwijsEenheid_key int,
            StudentnummerHSA nvarchar(10),
            Student_key int,
            Omschrijving nvarchar(32),
            ECTSbehaald decimal(5,2),
            SBUbehaald decimal(5,0),
            ECTSmax decimal(5,2),
            SBUmax decimal(5,0),
            IsVrijstelling bit,
            IsBehaald int,
            DatumBehaald nvarchar(10),
            lvl int,
            path varchar(max)
        )

        INSERT INTO @ReQ
        SELECT * FROM usf_GetRecursiveFoobar(@para1, @para2, @para3)

    I get this error:

        Msg 8152, Level 16, State 13, Line 20
        String or binary data would be truncated.
        The statement has been terminated.

    Why? What can I do about it?
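
    A sketch of one thing worth trying (column lists copied from the two declarations above): make the column mapping explicit instead of positional, so a misaligned pair of columns, e.g. a long value landing in a narrow nvarchar, shows up immediately:

        INSERT INTO @ReQ (OnderwijsEenheid_key, StudentnummerHSA, Student_key, Omschrijving,
                          ECTSbehaald, SBUbehaald, ECTSmax, SBUmax, IsVrijstelling,
                          IsBehaald, DatumBehaald, lvl, path)
        SELECT Onekey, Studnr, Stud, Description,
               ECTSGot, SBUGot, ECTSmax, SBUmax, IsFree,
               IsGot, DateGot, lvl, path
        FROM dbo.usf_GetRecursiveFoobar(@para1, @para2, @para3);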

    Read the article

  • Need some serious help with a self join issue.

    - by kralco626
    Well, as you may know, you cannot index a view with a self join; actually, even two joins of the same table, even if it's not technically a self join. A couple of guys from Microsoft came up with a workaround, but it's so complicated I don't understand it! The solution to the problem is here: http://jmkehayias.blogspot.com/2008/12/creating-indexed-view-with-self-join.html The view I want to apply this workaround to is:

        create VIEW vw_lookup_test
        WITH SCHEMABINDING
        AS
        select count_big(*) as [count_all],
               awc_txt, city_nm, str_nm, stru_no,
               o.circt_cstdn_nm [owner],
               t.circt_cstdn_nm [tech],
               dvc.circt_nm, data_orgtn_yr
        from ((dbo.dvc
               join dbo.circt on dvc.circt_nm = circt.circt_nm)
               join dbo.circt_cstdn o on circt.circt_cstdn_user_id = o.circt_cstdn_user_id)
               join dbo.circt_cstdn t on dvc.circt_cstdn_user_id = t.circt_cstdn_user_id
        group by awc_txt, city_nm, str_nm, stru_no, o.circt_cstdn_nm, t.circt_cstdn_nm,
                 dvc.circt_nm, data_orgtn_yr
        go

    Any help would be greatly appreciated! Thanks so much in advance!
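
    Roughly, the linked workaround keeps a synchronized physical copy of the self-joined table so the view ends up joining two distinct tables. A minimal sketch of the idea (column types are guesses, and the triggers that keep the copy in sync are omitted):

        -- mirror of dbo.circt_cstdn, maintained by INSERT/UPDATE/DELETE triggers (not shown)
        CREATE TABLE dbo.circt_cstdn_copy
        (
            circt_cstdn_user_id int           NOT NULL PRIMARY KEY,
            circt_cstdn_nm      nvarchar(100) NOT NULL
        );

        -- the view's second reference then becomes:
        --   join dbo.circt_cstdn_copy t on dvc.circt_cstdn_user_id = t.circt_cstdn_user_id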

    Read the article

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow. I mean like 5-10 seconds of load time. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company, but the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see what exactly my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?

    Read the article

  • Can a database function be called in the predicate of a llblgen query?

    - by Dan Appleyard
    I want to use a table-valued database function in the where clause of a query I am building using LLBLGen Pro 2.6 (self-servicing).

        SELECT *
        FROM [dbo].[Users]
        WHERE [dbo].[Users].[UserID] IN
        (
            SELECT UserID
            FROM [dbo].[GetScopedUsers] (@ScopedUserID)
        )

    I am looking into the FieldCompareSetPredicate class, but can't for the life of me figure out what the exact signature would be. Any help would be greatly appreciated.

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes, I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now: each row contains NULL in cool_json when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
                 "{",
                 "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                 "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
                 "}"
               ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st )
            ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
           AND at.different_id = ao.different_id
           AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
           AND at2.different_id = ao2.different_id
           AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but with multiple JOINs / GROUP_CONCATs it's messing up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
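
    A likely culprit: in MySQL, CONCAT() returns NULL as soon as any argument is NULL, and GROUP_CONCAT() over a LEFT JOIN that matched no rows is NULL, so one empty list nulls out the whole string. A sketch of the usual guard, showing only the cool_json expression (it also drops the stray comma the original emits before the closing brace):

        SELECT CONCAT(
                 '{',
                 '"some_list":[',  COALESCE(GROUP_CONCAT(DISTINCT t1.id), ''), '],',
                 '"other_list":[', COALESCE(GROUP_CONCAT(DISTINCT t2.id), ''), ']',
                 '}'
               ) cool_json
        -- FROM / JOIN / GROUP BY / ORDER BY clauses unchanged from the query above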

    Read the article
