Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

Page 629/1278

  • Django query: Count and Group BY

    - by Tyler Lane
    I have a query that I'm trying to figure out the "Django" way of doing. I want to take the last 100 calls from Call, which is easy:

        calls = Call.objects.all().order_by('-call_time')[:100]

    However, I can't find a way to do the next part via Django's ORM: I want to get a list of the call types and the number of calls each one has WITHIN that previous queryset I just built. Normally I would do a query like this:

        SELECT COUNT(id), calltype FROM call
        WHERE id IN (SELECT id FROM call ORDER BY call_time DESC LIMIT 100)
        GROUP BY calltype;

    I can't seem to find the Django way of doing this particular query. Here are my 2 models:

        class Call(models.Model):
            call_time = models.DateTimeField("Call Time", auto_now=False, auto_now_add=False)
            description = models.CharField(max_length=150)
            response = models.CharField(max_length=50)
            event_num = models.CharField(max_length=20)
            report_num = models.CharField(max_length=20)
            address = models.CharField(max_length=150)
            zip_code = models.CharField(max_length=10)
            geom = models.PointField(srid=4326)
            calltype = models.ForeignKey(CallType)
            objects = models.GeoManager()

        class CallType(models.Model):
            name = models.CharField(max_length=50)
            description = models.CharField(max_length=150)
            active = models.BooleanField()
            time_init = models.DateTimeField("Date Added", auto_now=False, auto_now_add=True)
            objects = models.Manager()

    Read the article

  • Postgres update rule returning number of rows affected

    - by Lithium
    I have a view which is a union of two separate tables (say table _A and table _B). I need to be able to update a row in the view, and it seems the way to do this is through a 'view rule'. All entries in the view have separate ids, so an id that exists in table _A won't exist in table _B. I created the following rule:

        CREATE OR REPLACE RULE view_update AS
        ON UPDATE TO viewtable DO INSTEAD (
            UPDATE _A SET foo = false WHERE old.id = _A.id;
            UPDATE _B SET foo = false WHERE old.id = _B.id;
        );

    If I do an update on table _B it returns the correct number of rows affected (1). However, if I update table _A it returns (0) rows affected even though the data was changed. If I swap the order of the updates, the same thing happens, but in reverse. How can I solve this problem so that it returns the correct number of rows affected? Thanks.
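
    One possible workaround, assuming PostgreSQL 9.1 or later, is to replace the rule with an INSTEAD OF trigger: the reported row count then comes from the trigger rather than from whichever command happens to run last inside the rule. A minimal sketch, reusing the foo column and ids from the question:

        CREATE OR REPLACE FUNCTION viewtable_update() RETURNS trigger AS $$
        BEGIN
            -- Try table _A first; FOUND tells us whether a row was updated there.
            UPDATE _A SET foo = NEW.foo WHERE _A.id = OLD.id;
            IF NOT FOUND THEN
                UPDATE _B SET foo = NEW.foo WHERE _B.id = OLD.id;
            END IF;
            RETURN NEW;  -- returning NEW makes the row count as "affected"
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER viewtable_update_trg
            INSTEAD OF UPDATE ON viewtable
            FOR EACH ROW EXECUTE PROCEDURE viewtable_update();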

    Read the article

  • Creating a stored procedure with a different WHERE clause for different search criteria, without putting the whole query in one long string

    - by Muhammad Kashif Nadeem
    Is there any alternative way to create a stored procedure without putting the whole query in one long string when the criteria of the WHERE clause can differ? Suppose I have an Orders table and I want to create a stored procedure on it, with three columns I want to filter records on: 1. CustomerId, 2. SupplierId, 3. ProductId. If the user only gives CustomerId in the search criteria, the query should be:

        SELECT * FROM Orders WHERE Orders.CustomerId = @customerId

    And if the user only gives ProductId, the query should be:

        SELECT * FROM Orders WHERE Orders.ProductId = @productId

    If all three of CustomerId, ProductId, and SupplierId are given, then all three ids will be used in the WHERE clause to filter. There is also a chance the user doesn't want to filter records at all, in which case the query should be:

        SELECT * FROM Orders

    Whenever I have to create this kind of procedure I build the query in a string and use IF conditions to check whether the arguments (@customerId or @supplierId etc.) have values. I use the following method to create the procedure:

        DECLARE @query VARCHAR(MAX)
        DECLARE @queryWhere VARCHAR(MAX)

        SET @query = @query + 'SELECT * FROM Orders '

        IF (@originationNumber IS NOT NULL)
        BEGIN
            SET @queryWhere = @queryWhere + ' Orders.CustomerId = ' + CONVERT(VARCHAR(100), @customerId)
        END

        IF (@queryWhere <> '')
        BEGIN
            SET @query = @query + ' WHERE ' + @queryWhere
        END

        EXEC (@query)

    Thanks.
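
    One common alternative, sketched here under the assumption that a missing argument arrives as NULL (the procedure name is illustrative), is a single static query with "catch-all" predicates, so no dynamic SQL string is needed:

        CREATE PROCEDURE dbo.SearchOrders
            @customerId INT = NULL,
            @supplierId INT = NULL,
            @productId  INT = NULL
        AS
        BEGIN
            -- A predicate whose parameter is NULL collapses to TRUE, so that
            -- filter effectively drops out of the WHERE clause.
            SELECT *
            FROM   Orders
            WHERE  (@customerId IS NULL OR Orders.CustomerId = @customerId)
              AND  (@supplierId IS NULL OR Orders.SupplierId = @supplierId)
              AND  (@productId  IS NULL OR Orders.ProductId  = @productId);
        END

    The trade-off is that one cached plan serves every parameter combination; on SQL Server 2008 and later, adding OPTION (RECOMPILE) to the SELECT is one way to get a plan tailored to the supplied filters.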

    Read the article

  • How do I create a check constraint?

    - by Zack Peterson
    Please imagine this small database...

    Tables:

        Volunteer      Event          Shift          EventVolunteer
        =========      =====          =====          ==============
        Id             Id             Id             EventId
        Name           Name           EventId        VolunteerId
        Email          Location       VolunteerId
        Phone          Day            Description
        Comment        Description    Start
                                      End

    Associations: Volunteers may sign up for multiple events. Events may be staffed by multiple volunteers. An event may have multiple shifts. A shift belongs to only a single event. A shift may be staffed by only a single volunteer. A volunteer may staff multiple shifts.

    Check constraints: Can I create a check constraint to enforce that no shift is staffed by a volunteer that's not signed up for that shift's event? Can I create a check constraint to enforce that two overlapping shifts are never staffed by the same volunteer?
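
    For the first rule, a common workaround (SQL Server syntax assumed; a CHECK constraint cannot reference another table directly, but it can call a scalar function) looks roughly like the sketch below; the function and constraint names are only illustrative:

        CREATE FUNCTION dbo.VolunteerSignedUpForEvent (@EventId INT, @VolunteerId INT)
        RETURNS BIT
        AS
        BEGIN
            DECLARE @ok BIT;
            SET @ok = 0;
            IF EXISTS (SELECT 1
                       FROM   EventVolunteer
                       WHERE  EventId = @EventId
                         AND  VolunteerId = @VolunteerId)
                SET @ok = 1;
            RETURN @ok;
        END;
        GO

        ALTER TABLE Shift ADD CONSTRAINT CK_Shift_VolunteerSignedUp
            CHECK (dbo.VolunteerSignedUpForEvent(EventId, VolunteerId) = 1);

    The second rule (no overlapping shifts for the same volunteer) is much harder to express as a CHECK and is usually enforced with a trigger instead.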

    Read the article

  • How can we secure our data from DBA?

    - by KoolKabin
    Hi guys, I have very confidential data in my database and I am trying to secure it from the DBA. I am a member of the development team. We develop our software and deploy it on a server which has its own DBA, and we have limited control over that server. In this scenario, how can I prevent the server's DBA from looking at my data or making changes to it? Is it possible?

    Read the article

  • Wiki Database, is there one?

    - by Faiz
    I was searching the net for something like a wiki database, just like wikipedia but instead stores structured content, editable by users. What I was looking for was an online database accessible by everyone where people can design the schema and data with proper versioning of both schema and data. I couldn't find any such site. I am not sure if it is my search skills or if there really is no wiki database as of now. Does anyone out there know anything like this? I think there is a great potential for something like this. A possible example will be a website with a GUI for querying a MySQL DB where any website visitor can create DB objects and populate data.

    Read the article

  • SSIS (SQL Server Integration Services) XML data flow

    - by swapna
    Hi, I have an XML file whose content I have to write to a database table using an SSIS package. I am using an XML source and an OLE DB destination. My issue is that this XML file generates multiple outputs (event, product, offer, form, etc.), but I need to write everything into one data row in the database (more than one row if there are 2 products for the event). I do not know how to take these multiple outputs and make a single row per event. I have read numerous articles about this subject but am not able to decide what the right way of doing this is: 1) an XML source (if I use this, how do I merge the multiple outputs)? 2) or a script task using XML objects to read and write to the DB? Or anything new? Please provide me some solutions. XML sample file: * - ABc. 2009-06-07 2010-04-30 region test 1 contact - offertest product1 product1 187 * Thanks SNA

    Read the article

  • javax.sql.DataSource.getConnection() locks system

    - by Ryan Elkins
    I'm using the Apache Commons DBCP library for connection pooling in a desktop application. I've done this before and never had a problem, but the latest application has started sometimes locking up on the call to getConnection() on my DataSource. The application just hangs after that call. I'm closing my resources when I'm done with them. Is there any known reason why this might happen? I'm not even sure where to begin troubleshooting this now that I've got it narrowed down to this method. It doesn't always hang - sometimes it happens fairly quickly, sometimes it takes a long time. Sometimes it doesn't happen at all, although lately I can get it to happen within a few minutes.

    Read the article

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile. Each of these types of content must be commentable. Now I'm wondering what's the best approach to this. My current options are: 1. have one big comments table and a link table for every content type (comments_videos, ...) with comment_id and the content item's id; 2. have comments separated by the type of content they're for, so each type of content would have its own comments table holding the comments for that type.
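
    A minimal sketch of what option 1 might look like (table and column names here are illustrative assumptions, not from the question):

        CREATE TABLE comments (
            id       INT      NOT NULL PRIMARY KEY,
            user_id  INT      NOT NULL,
            body     TEXT     NOT NULL,
            created  DATETIME NOT NULL
        );

        -- One narrow link table per content type.
        CREATE TABLE comments_videos (
            comment_id INT NOT NULL REFERENCES comments (id),
            video_id   INT NOT NULL REFERENCES videos (id),
            PRIMARY KEY (comment_id, video_id)
        );
        -- comments_images and comments_audio would follow the same pattern.

    Option 1 keeps every comment queryable in one place (e.g. "latest comments site-wide") at the cost of a join per content type; option 2 avoids that join but duplicates the comment structure across several tables.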

    Read the article

  • Integer comparison as string

    - by J Pollack
    Hi, I have an integer column and I want to find numbers that start with specific digits. For example, these match if I look for '123': 1234567, 123456, 1234. These do not match: 23456, 112345, 0123445. Is the only way to handle this task to convert the integers into strings before doing a string comparison? I am currently using Postgres regexp_replace(text, pattern, replacement) on the numbers, which is a very slow and inefficient way of doing it. I have a large amount of data to handle this way and I am looking for the most economical way of doing it.
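
    A minimal sketch of the cast-and-LIKE approach (PostgreSQL syntax; the table and column names below are placeholders):

        SELECT *
        FROM   readings
        WHERE  num_col::text LIKE '123%';

        -- An expression index on the text form lets the prefix search use an
        -- index scan instead of a sequential scan with regexp_replace().
        CREATE INDEX readings_num_col_text_idx
            ON readings ((num_col::text) text_pattern_ops);

    A plain anchored LIKE is far cheaper than a regular-expression rewrite, and with the text_pattern_ops expression index the prefix lookup no longer has to touch every row.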

    Read the article

  • Adding milliseconds to a datetime in a T-SQL INSERT INTO

    - by DavRob60
    I'm doing an INSERT INTO query in order to initialize a new table. The primary key is RFQ_ID and Action_Time. How could I add 1 millisecond to each Action_Time on each new record in order to avoid a "Violation of PRIMARY KEY constraint"?

        INSERT INTO QSW_RFQ_Log (RFQ_ID, Action_Time, Quote_ID)
        SELECT RFQ_ID
             , GETDATE() AS Action_Time
             , Quote_ID
             , 'Added to RFQ on Initialization' AS Note
        FROM QSW_Quote
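
    One possible approach (SQL Server 2005 or later assumed, and assuming QSW_RFQ_Log also has a Note column to match the SELECT list): number the rows within each RFQ_ID and use that number as a millisecond offset, so every row in the batch gets a distinct Action_Time:

        INSERT INTO QSW_RFQ_Log (RFQ_ID, Action_Time, Quote_ID, Note)
        SELECT RFQ_ID,
               -- 10 ms steps, because the datetime type only resolves to ~3 ms
               DATEADD(ms,
                       10 * ROW_NUMBER() OVER (PARTITION BY RFQ_ID ORDER BY Quote_ID),
                       GETDATE()) AS Action_Time,
               Quote_ID,
               'Added to RFQ on Initialization' AS Note
        FROM   QSW_Quote;

    If the offsets become a problem, switching the column to datetime2 (SQL Server 2008+) gives enough precision to use true 1 ms steps.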

    Read the article

  • MsSQL 2005 query performance

    - by Max
    I have the following query:

        select .............
        from   -- one table and about 20 left joins
        where  (
                 (this_.driverName like 'blah*' or this_.renterName like 'blah*')
                 or exists (select this0__.id as y0_
                            from ThirdParty this0__
                            where this0__.name like 'blah*'
                              and this0__.claim_id = this_.id)
               )
        order by this_.id asc

    And I have two environments: one with 175,000 records in the table "this_" and a second with 25,000 records in the table "this_". This query works fine on the 175k database, finishing in about 2 seconds, but on the database with 25k records the query freezes. If I drop either of the two OR'd conditions from the where clause - either

        (this_.driverName like 'blah*' or this_.renterName like 'blah*')

    or

        exists (select this0__.id as y0_ from ThirdParty this0__
                where this0__.name like 'blah*' and this0__.claim_id = this_.id)

    - the query runs normally. How can I increase the performance of this query?
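
    One commonly suggested rewrite for an OR between a plain predicate and an EXISTS is to split it into two SELECTs combined with UNION, which often lets the optimizer pick a separate index strategy for each branch. A rough sketch, assuming the table aliased as this_ is called Claim and translating the 'blah*' wildcard to SQL's %:

        SELECT c.*
        FROM   Claim c        -- plus the same 20 left joins in each branch
        WHERE  c.driverName LIKE 'blah%' OR c.renterName LIKE 'blah%'

        UNION                  -- UNION (not UNION ALL) removes the duplicates

        SELECT c.*
        FROM   Claim c
        WHERE  EXISTS (SELECT 1
                       FROM   ThirdParty tp
                       WHERE  tp.name LIKE 'blah%'
                         AND  tp.claim_id = c.id)

        ORDER BY id ASC;

    Updating statistics on the 25k database is also worth trying first; a plan that is fast on the larger database and hangs on the smaller one often points to stale statistics rather than to the query itself.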

    Read the article

  • Failed to Kill Process in SQL 2008

    - by Andrea.Ko
    I have a process with the following information, and when I execute KILL on this SPID it returns "Only user processes can be killed."

        SPID: 11   Status: BACKGROUND   Login: sa   HostName: .   BlkBy: .
        DBName: SAFEMIG   Command: CHECKPOINT

    Normally, every session logged in to this server should have a HostName showing the client PC name, but this connection shows only a dot, so I am not sure who is executing what process over this connection. I executed "dbcc inputbuffer(11)" and it returned EventType = No Event, Parameters = 0 and EventInfo = NULL. Appreciate any help/advice on this problem!
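
    Low SPIDs like this are normally internal system sessions (the BACKGROUND status and the CHECKPOINT command point the same way), and those cannot be killed. A quick way to confirm, assuming SQL Server 2005 or later:

        -- System sessions report is_user_process = 0; KILL only works on
        -- sessions where it is 1.
        SELECT session_id, is_user_process, status, program_name, login_name
        FROM   sys.dm_exec_sessions
        WHERE  session_id = 11;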

    Read the article

  • Ajax Content Loading(Processing) image or indicator

    - by Arny
    Hi there, in part of my web page I have a couple of asp:Image thumbnails; on click I use the AJAX modal popup extender to show the image in full size, which is working fine. What I need to add is a processing image or indicator, both on the thumbnail and in the modal popup extender. I also have an AJAX autocomplete that is working fine; I need to add some indicator or processing image to it as soon as the user starts typing a word. Any ideas? Thanks in advance

    Read the article

  • ASP.Net Custom Field From Query In DataSet

    - by boruchsiper
    I added a new query to a table adapter in a DataSet. This query adds another field, which is a sum from another table. Here is the full query:

        SELECT (SELECT COUNT(donationID) AS Expr1
                FROM Donations AS da
                WHERE (dn.donorID = donorID)) AS Count,
               Solicitor, address1, address2, city, companyName, country,
               donorID, email, first, last, phoneHome, phoneMobile, phoneWork,
               state, webURL, zip,
               (SELECT SUM(amount) FROM Donations AS dna
                WHERE dna.donorID = dn.donorID) AS SumDonations
        FROM Donors AS dn
        ORDER BY last

    The new field is represented by the last part of the query:

        (SELECT SUM(amount) FROM Donations AS dna
         WHERE dna.donorID = dn.donorID) AS SumDonations

    I can preview the data in the XSD, but the last field, "SumDonations", is not showing up as a field I can add to my GridView. I rebuilt the website but no luck. What am I missing?

    Read the article

  • Nullable Integer ? (working with linq)

    - by nCdy
    I get an exception about converting NULL to Int32. I have a table from the database with a nullable tinyint column:

        [Column(Storage="_StatType", DbType="tinyint NULL")]
        public StatType : int { get { _StatType; } }

    (to get the C# code just replace the variable's type) and after making the LINQ select:

        def StartLinq = linq <# from lpi in _CfgListParIzm
                                where lpi.ID_ListParIzm == drr1
                                select (lpi.StatType) #>;

    StartLinq.ToArray()[0] can't be read if the value is null :-/

        mutable STT : int = 0;
        try { _ = int.TryParse(StartLinq.ToArray()[0].ToString(), out STT); }
        catch { | _ is Exception => () /* I don't care */ }

    The code above is a very poor trick :( I won't use it.

    Read the article

  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a 30,000,000-row database table. I canceled the process after ~60 hours as it didn't seem to be working. For some reason MySQL takes only 22 MB of RAM instead of using the RAM fully. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM to be faster?
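
    Index builds do use memory, but only up to the relevant buffer settings, which default to quite small values. A minimal sketch of raising them for the session before the build (the exact variables depend on the storage engine and MySQL version; the table and column names are placeholders):

        -- MyISAM builds indexes through a filesort governed by this buffer:
        SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;
        -- General per-session sort buffer:
        SET SESSION sort_buffer_size = 64 * 1024 * 1024;

        ALTER TABLE big_table ADD INDEX idx_two_cols (col_a, col_b);

    For InnoDB tables the key setting is innodb_buffer_pool_size (a server-wide option, not a session one); on older MySQL versions it can also be faster to recreate the table with the index already defined and reload the data.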

    Read the article

  • Table Partitioning

    - by Ankur Gahlot
    How advantageous is it to use partitioning of tables compared to the normal approach? Is there a sample case or detailed comparative analysis that could statistically (I know this is too strong a word, but it would really help if it were illustrated with some numbers) emphasize the utility of the process? Thanks, Ankur

    Read the article

  • How to get the last element by date of each "type" in LINQ or TSQL

    - by Mauro
    Imagine a table defined as:

        CREATE TABLE [dbo].[Price](
            [ID] [int] NOT NULL,
            [StartDate] [datetime] NOT NULL,
            [Price] [int] NOT NULL
        )

    where ID is the identifier of an action having a certain Price. This price can be updated, if necessary, by adding a new row with the same ID, a different Price, and a more recent date. So with a set of data like:

        ID  StartDate   Price
        1   01/01/2009  10
        1   01/01/2010  20
        2   01/01/2009  10
        2   01/01/2010  20

    how do I obtain a set like the following?

        1   01/01/2010  20
        2   01/01/2010  20
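
    One possible T-SQL answer (SQL Server 2005 or later assumed): rank the rows per ID by StartDate descending and keep only the most recent one.

        SELECT ID, StartDate, Price
        FROM (
            SELECT ID, StartDate, Price,
                   ROW_NUMBER() OVER (PARTITION BY ID
                                      ORDER BY StartDate DESC) AS rn
            FROM dbo.Price
        ) AS latest
        WHERE rn = 1;

    The LINQ equivalent is usually a GroupBy on ID followed by taking the row with the maximum StartDate from each group.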

    Read the article

  • LINQ: Create persistable Associations in Code, Without Foreign Key

    - by Alex
    Hello, I know that I can create LINQ Associations without a Foreign Key. The problem is, I've been doing this by adding the [Association] attribute in the DBML file (same as through the designer), which will get erased again after I refresh my database (and reload the entire table structure). I know that there is the MyData.cs file (as part of the DBML) in which I can place my partial extensions etc. to domain objects (to persist even after I refresh the DBML), but I don't know how to create an association there?

    Read the article

  • Self-referencing tables in Linq2Sql

    - by J-Man
    Hi, I've seen a lot of questions on self-referencing tables in Linq2Sql and how to eagerly load all child records for a particular root object. I've implemented a temporary solution by accessing all the underlying properties, but as you can see that doesn't do the performance any good. The thing is, though, that all records are correlated with each other using a correlation GUID. Example below:

        RootElement   - Id: 1 - ParentId: null - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 2 - ParentId: 1    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement2 - Id: 3 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 4 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

    In my case I do have access to the correlationId, so I can retrieve all of my records by performing the following query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
        select element;

    But, of course, I want these elements associated with each other by executing this query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
              && element.ParentId == null
        select element;

    My question is: is it possible to use the results of the first query as some sort of 'caching mechanism' for the query where I get the root element? Thanks for the input. J.

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each one is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes for a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from, valid_to, edited_by. If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using the extension to the PostgreSQL database which provides a column type called "period". This allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?

    PS - the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
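
    For the non-overlap rule specifically, a small sketch of how it can be declared in later PostgreSQL versions (9.2+, where the built-in range types replace the separate "period" extension; the table and column names are only illustrative):

        CREATE EXTENSION IF NOT EXISTS btree_gist;  -- needed to mix = and && below

        CREATE TABLE sensor_history (
            sensor_id  INT      NOT NULL,
            valid      TSRANGE  NOT NULL,   -- [valid_from, valid_to)
            edited_by  INT      NOT NULL,
            -- ... the sensor's attribute columns ...
            EXCLUDE USING gist (sensor_id WITH =, valid WITH &&)
        );

    The EXCLUDE constraint rejects any two rows for the same sensor whose validity ranges overlap, which gives the "one current row per item" guarantee without triggers.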

    Read the article

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems and I think I have the fix, but I'm not certain my understanding is correct. In its simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate   date
        rectime   time
        system    varchar(20)
        count     integer
        accum1    integer
        accum2    integer

    There are a lot more accumulators than that, but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected into the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table, specified as follows:

        create view blah_v (recdate, rectime, system, count, accum1, accum2) as
            select distinct recdate, rectime, system, count,
                   value (case when count > 0 then accum1 / count end, 0),
                   value (case when count > 0 then accum2 / count end, 0)
            from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish - you're preaching to the choir). We've noticed that the time taken to do:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough; it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since, if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct cannot produce duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.
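
    The reasoning holds: a result set that contains every column of the table's primary key cannot contain duplicate rows, so the DISTINCT (and the sort it forces) is pure overhead here. A sketch of the view without it, keeping the divide-by-zero guard (written with a portable CASE ... ELSE 0 in place of the original value() function):

        create view blah_v (recdate, rectime, system, count, accum1, accum2) as
            select recdate, rectime, system, count,
                   case when count > 0 then accum1 / count else 0 end,
                   case when count > 0 then accum2 / count else 0 end
            from blah;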

    Read the article
