Search Results

Search found 37457 results on 1499 pages for 'sql 2008 r2'.


  • WHERE vs HAVING

    - by baloo
    Why is it that columns you create yourself (for example "select 1 as number") have to be referenced after HAVING and not WHERE in MySQL? And are there any downsides to writing the whole definition instead of the column alias (for example "WHERE 1")?
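    A minimal sketch of the distinction, using a hypothetical table t with a column x (not from the original question): WHERE is evaluated before the select list exists, so the alias is not visible there, while MySQL lets HAVING refer to select aliases.

        -- Fails: the alias does not exist yet when WHERE is evaluated.
        SELECT x + 1 AS number FROM t WHERE number > 5;

        -- Works in MySQL: HAVING is applied after the select list is computed.
        SELECT x + 1 AS number FROM t HAVING number > 5;

        -- Portable alternative: repeat the expression in WHERE.
        SELECT x + 1 AS number FROM t WHERE x + 1 > 5;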

    Read the article

  • How to limit select items with L2E/S?

    - by orlon
    This code is a no-go, as it results in a 'requires a model of type IEnumerable' error:

        var errors = (from error in db.ELMAH_Error
                      select new { error.Application, error.Host, error.Type, error.Source,
                                   error.Message, error.User, error.StatusCode, error.TimeUtc }).ToList();
        return View(errors);

    The following code of course works fine, but selects all the columns, some of which I'm simply not interested in:

        var errors = (from error in db.ELMAH_Error select error).ToList();
        return View(errors);

    I'm brand spanking new to MVC2 + L2E, so maybe I'm just not thinking in the right mindset yet, but this seems counter-intuitive. Is there an easy way to select a limited number of columns, or is this just part of using an ORM?

    Read the article

  • mySQL: Order by field size/length

    - by Sadi
    Here is a table structure (e.g. test):

        | Field Name   | Data Type    |
        |______________|______________|
        | id           | BIGINT (20)  |
        | title        | varchar(25)  |
        | description  | text         |

    A query like:

        SELECT * FROM TEST ORDER BY description;

    But I would like to order by the field size/length of the field description. The field type will be TEXT or BLOB.
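    A commonly suggested way to do this in MySQL (a sketch against the test table above): order by the result of a length function rather than by the column itself.

        -- CHAR_LENGTH() counts characters, LENGTH() counts bytes;
        -- the distinction matters for multi-byte character sets and for BLOBs.
        SELECT * FROM test ORDER BY CHAR_LENGTH(description);
        SELECT * FROM test ORDER BY LENGTH(description) DESC;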

    Read the article

  • Access is re-writing - and breaking - my query!

    - by FrustratedWithFormsDesigner
    I have a query in MS Access (2003) that makes use of a subquery. The subquery part looks like this:

        ...FROM (SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123") AS q1,...

    And when I switch to Table View to verify the results, all is OK. Then I wanted the result of this query to be printed in the page header of a report (the query returns a single row of page-header stuff). I get an error because the query is suddenly re-written as:

        ...FROM [SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123"; ] AS q1,...

    So it's OK that the round brackets are automatically replaced by square brackets; Access feels it needs to do that, fine! But why is it adding the ; into the subquery, which causes it to fail? I suppose I could just create new query objects for these subqueries, but it seems a little silly that I should have to do that.
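    A sketch of the workaround the question already hints at (the saved-query name qryAbcRecs is hypothetical): save the subquery as its own query object and reference it by name, so there is no inline subquery for Access to rewrite.

        -- Saved as a separate query object, e.g. qryAbcRecs:
        SELECT id, dt, details FROM all_recs WHERE def_cd="ABC-00123";

        -- The report's record source then refers to it by name instead of inlining it:
        SELECT q1.id, q1.dt, q1.details FROM qryAbcRecs AS q1;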

    Read the article

  • SSRS 2005 Keep textbox and textfield together when page break occurs

    - by EKet
    Problem: I don't have a details row or anything. I simply have a body, and I dragged on textboxes for labeling textfields from my dataset. The problem is that when one of the fields has too much data for the current page, it "page-breaks" at the start of the field, leaving the textbox (the label for the field) behind on the previous page.

    What I've tried - putting the data field and the textbox label:
    a) inside a rectangle - didn't work
    b) inside a list, and the list inside a rectangle - didn't work
    c) inside a list with the KeepTogether property set to TRUE or FALSE - didn't work

    Question: How would I group the textbox and the textfield so that, regardless of where the page break happens, the field keeps its label?

    Read the article

  • Does this query fetch unnecessary information? Should I change the query?

    - by Camran
    I have this classifieds website, and I have about 7 tables in MySQL where all data is stored. I have one main table, called "classifieds". In the classifieds table there is a column called classified_id. It is not the PK, or a key at all; it is just a number I use to JOIN table records together. Example:

        classifieds table:                 fordon table:
        id            => 33                id            => 12
        classified_id => 10                classified_id => 10
        ad_id         => 'bmw_m3_92923'

    The two records above are linked together by the classified_id column. Now to the question: I use this method to fetch all records WHERE the column ad_id matches any of the values inside an array, called in this case $ad_arr:

        SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
        FROM classified mt
        LEFT JOIN fordon        ON fordon.classified_id        = mt.classified_id
        LEFT JOIN boende        ON boende.classified_id        = mt.classified_id
        LEFT JOIN elektronik    ON elektronik.classified_id    = mt.classified_id
        LEFT JOIN business      ON business.classified_id      = mt.classified_id
        LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
        LEFT JOIN hobby         ON hobby.classified_id         = mt.classified_id
        WHERE mt.ad_id IN ('$ad_arr')

    Is this good, or would this actually fetch unnecessary information? Check out this Q I posted a couple of days ago; in the comments HLGEM says it is wrong, etc. What do you think? http://stackoverflow.com/questions/2782275/another-rookie-question-how-to-implement-count-here Thanks

    Read the article

  • Aggregate Functions in Index with IBMDB2

    - by Erkan
    Is there any way to pre-aggregate the results of aggregate functions (e.g. count()) and store them in an index? The background: I want to speed up count() queries, so that

        SELECT count(users) FROM TE123 WHERE region = 'A';

    would be supported by an index like

        Region | Count(Users)
        A      | 548
        E      | 458

    I know that MQTs would also help with this problem. However, in this case it is not possible to use an MQT, as we use a kind of ORM and we don't want to define entities on MQTs. I vaguely remember - one DBA told me - that such a feature is planned for DB2 V10.
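    A hedged sketch of what is usually possible without MQTs (not from the original post): a regular index cannot store the pre-computed counts themselves, but an index that covers both columns lets the database answer the count from the index alone instead of touching the table.

        -- Covers region (the predicate) and users (the counted column), so
        -- COUNT(users) ... WHERE region = 'A' can be satisfied by an index-only scan.
        CREATE INDEX ix_te123_region_users ON TE123 (region, users);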

    Read the article

  • clearing an entire column in access

    - by I__
    Is there a way to clear an entire column in a datasheet in Access? I can just right-click on it and delete it, but that will affect the structure; I just need to clear all the records. How do I do this? Perhaps the question I should be asking is: how do I clear the entire contents of a datasheet in Access?
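    A sketch of how this is commonly done with an update or delete query (MyTable and MyColumn are hypothetical names standing in for the datasheet's table and column):

        -- Blank out every value in one column without touching the structure:
        UPDATE MyTable SET MyColumn = Null;

        -- Or clear the entire contents of the datasheet (all records):
        DELETE FROM MyTable;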

    Read the article

  • MySQL whats wrong with my foreign keys?

    - by Skiy
    Hello, what is wrong with the two foreign keys which I have marked with comments?

        create database db;
        use db;

        create table Flug(
            Flugbez varchar(20),
            FDatum Date,
            Ziel varchar(20),
            Flugzeit int,
            Entfernung int,
            Primary Key (Flugbez, FDatum));

        create table Flugzeugtyp(
            Typ varchar(20),
            Hersteller varchar(20),
            SitzAnzahl int,
            Reisegeschw int,
            primary key (Typ));

        create table flugzeug(
            Typ varchar(20),
            SerienNr int,
            AnschDatum Date,
            FlugStd int,
            primary key(Typ, SerienNr),
            foreign key(Typ) references Flugzeugtyp(Typ));

        create table Abflug(
            Flugbez varchar(20),
            FDatum Date,
            Typ varchar(20),
            Seriennr int,
            Kaptaen varchar(20),
            Primary key(Flugbez, FDatum, Typ, SerienNr),
            Foreign key(Flugbez) references Flug(Flugbez),
            -- Foreign key(FDatum) references Flug(FDatum),
            Foreign key(Typ) references Flugzeugtyp(Typ)
            -- ,Foreign key(SerienNr) references Flugzeug(SerienNr)
            );

    When I uncomment these, I get: ERROR 1005 (HY000): Can't create table 'db.abflug' (errno: 150)
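    A hedged sketch of the usual explanation and fix (not from the original post): InnoDB needs the referenced columns to be the leftmost part of an index on the parent table. Flug's primary key is (Flugbez, FDatum) and flugzeug's is (Typ, SerienNr), so FDatum alone and SerienNr alone have no usable index, which is what errno 150 complains about. Referencing the full composite keys avoids the problem:

        create table Abflug(
            Flugbez varchar(20),
            FDatum Date,
            Typ varchar(20),
            SerienNr int,
            Kaptaen varchar(20),
            Primary key(Flugbez, FDatum, Typ, SerienNr),
            -- reference the complete parent keys instead of single columns
            Foreign key(Flugbez, FDatum) references Flug(Flugbez, FDatum),
            Foreign key(Typ, SerienNr) references flugzeug(Typ, SerienNr));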

    Read the article

  • Why is Postgres doing a Hash in this query?

    - by Claudiu
    I have two tables: A and P. I want to get information out of all rows in A whose id is in a temporary table I created, tmp_ids. However, there is additional information about A in the P table, foo, and I want to get this info as well. I have the following query:

        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id

    I noticed it going slowly, and when I asked Postgres to explain, I saw that it combines tmp_ids with an index on A I created for H_id using a nested loop. However, it hashes all of P before doing a hash join with the result of the first join. P is quite large, and I think this is what's taking all the time. Why would it create a hash there? P.id is P's primary key, and A.P_id has an index of its own.
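    A hedged diagnostic sketch (not from the original post): a freshly created temporary table has no statistics until it is analyzed, so the planner may badly misestimate how many rows of A (and therefore P) the join will touch and decide that hashing all of P is cheapest. Comparing timed plans shows whether the hash join is really the slow step.

        -- Give the planner real row counts for the temp table first.
        ANALYZE tmp_ids;

        -- Then look at the actual timings per plan node.
        EXPLAIN ANALYZE
        SELECT A.H_id AS hid, A.id AS aid, P.foo, A.pos, A.size
        FROM tmp_ids, P, A
        WHERE tmp_ids.id = A.H_id
          AND P.id = A.P_id;

        -- As a session-only experiment, force the planner away from hash joins
        -- and compare the timings; don't leave this set in production.
        SET enable_hashjoin = off;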

    Read the article

  • Showing multiple models in a single ListView

    - by Veer
    I've three models (Contacts, Notes, Reminders). I want to search all of these, produce the filtered result in a single ListView and, depending on the selection, display the corresponding view (UserControl) to its right. I want the right way of implementing this design, or at least alternatives to the method I've tried. So far I've tried it using an IntegratedViewModel that has all the properties from all three models:

        public class IntegratedViewModel
        {
            ContactModel _contactModel;
            NoteModel _noteModel;

            public IntegratedViewModel(ContactModel contactModel)
            {
                _contactModel = contactModel;
            }
            // similarly for the other models

            public string DisplayTitle // For displaying in ListView
            {
                get; // same as set
                set
                {
                    if (_contactModel != null) return _contactModel.Name;
                    if (_noteModel != null) return _noteModel.Title;
                }
            }

            // All other properties from the three models, including the Name/Title
            // properties for displaying them in the corresponding views (UserControl)
        }

    Now I set the ItemsSource to the List<IntegratedViewModel>. I now have to bind the visibility of the views to some properties in the MainViewModel. I tried setting bool properties like IsContactViewSelected and IsNoteViewSelected in the setter of the SelectedEntity property, which is bound to the ListView's SelectedItem:

        public SelectedEntity
        {
            // get
            set
            {
                oldvalue = _selectedEntity;
                _selectedEntity = value;
                // now I find the Type of model selected using oldvalue.ModelType,
                // where ModelType is a property in the IntegratedViewModel.
                // According to the type, I set one of the above bool properties to false,
                // and do the same for _selectedEntity but set the property to true,
                // so that the view corresponding to the selectedEntityType is visible
                // and the others are collapsed.
            }
        }

    Here is the problem. Say I select an item of type ContactModel while the old selection was a NoteModel. I set the property IsNoteModelSelected to false according to oldvalue; it sets the property, raises the PropertyChanged event, and then never goes on to check the remaining if condition where I check _selectedEntity, which is what sets IsContactModelSelected to true.

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is that I'm getting to the point where there are so many records that my insert/update is taking too long, and my session is getting killed by the governor. Here's pseudocode of my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN (
                       SELECT col1, col2, col3 FROM table2@remote_location
                       MINUS
                       SELECT DISTINCT col1, col2, col3 FROM table1 mpc
                       WHERE facility = '''||load_facility||''' )';

        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2
        ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, sysdate)

    But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 with the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it in another way so that it performs best? Thanks.
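    One workaround commonly suggested for the remote-MERGE limitation (a sketch only; the staging table name is hypothetical): copy the remote rows into a local staging table first, then MERGE entirely against local tables so the buggy remote code path is never used.

        -- One-time setup: a local staging table shaped like the remote source.
        CREATE GLOBAL TEMPORARY TABLE stage_table2
            ON COMMIT PRESERVE ROWS
            AS SELECT col1, col2, col3 FROM table2@remote_location WHERE 1 = 0;

        -- Each run: pull the remote rows locally, then merge locally.
        INSERT INTO stage_table2 (col1, col2, col3)
            SELECT col1, col2, col3 FROM table2@remote_location;

        MERGE INTO table1 t1
        USING stage_table2 t2
        ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, sysdate);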

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
            `id` int(11) NOT NULL auto_increment,
            `name` varchar(255) default NULL,
            `perma_name` varchar(255) default NULL,
            `author_id` int(11) default NULL,
            `created_at` datetime default NULL,
            `updated_at` datetime default NULL,
            `blog_picture_id` int(11) default NULL,
            `blog_picture2_id` int(11) default NULL,
            `page_id` int(11) default NULL,
            `blog_picture3_id` int(11) default NULL,
            `active` tinyint(1) default '1',
            PRIMARY KEY (`id`),
            KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;

    And

        CREATE TABLE IF NOT EXISTS `blog_views` (
            `id` int(11) NOT NULL auto_increment,
            `blog_id` int(11) default NULL,
            `ip` varchar(255) default NULL,
            `created_at` datetime default NULL,
            `updated_at` datetime default NULL,
            PRIMARY KEY (`id`),
            KEY `index_blog_views_on_blog_id` (`blog_id`),
            KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
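    A commonly suggested change for exactly this WHERE clause (a sketch, not from the original post; the index name is illustrative): a composite index on (blog_id, created_at) lets MySQL resolve the equality on blog_id and the range on created_at from a single index, instead of picking one of the two existing single-column keys.

        ALTER TABLE blog_views
            ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);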

    Read the article

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError=NULL where LastError is not null;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make this a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it, however the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work since I actually don't have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages USING btree (missingings)
            WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
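    A hedged sketch of one approach that is often suggested for wide tables like this (not from the original post): run the update in bounded batches, so each transaction rewrites a limited number of row versions rather than all ~490,000 in one pass.

        -- Repeat this (e.g. from a script) until it reports 0 rows updated;
        -- optionally VACUUM indexer.pages between batches so dead row versions
        -- can be reused instead of bloating the table.
        UPDATE indexer.pages
        SET    lasterror = NULL
        WHERE  id IN (
                   SELECT id
                   FROM   indexer.pages
                   WHERE  lasterror IS NOT NULL
                   LIMIT  10000
               );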

    Read the article

  • How to add user customized data to database?

    - by CSharperWithJava
    I am trying to design a sqlite database that will store notes. Each of these notes will have common fields like title, due date, details, priority, and completed. In addition though, I would like to add data for more specialized notes like price for shopping list items and author/publisher data for books. I also want to have a few general purpose fields that users can fill with whatever text data they want. How can I design my database table in this case? I could just have a field for each piece of data for every note, but that would waste a lot of fields and I'd like to have other options and suggestions.
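    One common shape for this kind of requirement (a sketch only; table and column names are illustrative): keep the shared fields on the notes table and move type-specific or user-defined values into a key/value side table, so new kinds of data don't need new columns. The trade-off is that queries on the extra fields need a join, which is the usual price of this entity-attribute-value style.

        CREATE TABLE notes (
            id        INTEGER PRIMARY KEY,
            title     TEXT,
            due_date  TEXT,
            details   TEXT,
            priority  INTEGER,
            completed INTEGER DEFAULT 0
        );

        CREATE TABLE note_attributes (
            note_id INTEGER NOT NULL REFERENCES notes(id),
            name    TEXT    NOT NULL,  -- e.g. 'price', 'author', 'publisher', 'custom1'
            value   TEXT,
            PRIMARY KEY (note_id, name)
        );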

    Read the article

  • Returning a recordcount from a subquery in a result set.

    - by KeRiCr
    I am attempting to return a row count from a subquery as part of a result set. Here is a sample that I've tried that didn't work:

        SELECT recordID,
               GroupIdentifier,
               count(*) AS total,
               (SELECT COUNT(*) FROM table WHERE intActingAsBoolean = 1) AS Approved
        FROM table
        WHERE date_format(Datevalue, '%Y%m%d') BETWEEN 'startDate' AND 'endDate'
        GROUP BY groupIdentifier

    What I'm attempting to return for 'Approved' is the number of records for the grouped value where intActingAsBoolean = 1. I have also tried modifying the WHERE clause by giving the main query a table alias and adding an AND clause to match the groupidentifier in the subquery to the main query. Neither of these returns the correct results; the query as written counts all records in the table where intActingAsBoolean = 1. This query is being run against a MySQL database.
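    A commonly suggested rewrite (a sketch, keeping the question's own column names): count the flagged rows per group with conditional aggregation instead of an uncorrelated subquery, so 'Approved' is computed against the same group as 'total'.

        SELECT recordID,
               GroupIdentifier,
               COUNT(*) AS total,
               SUM(CASE WHEN intActingAsBoolean = 1 THEN 1 ELSE 0 END) AS Approved
        FROM   `table`   -- `table` is the placeholder name from the question
        WHERE  date_format(Datevalue, '%Y%m%d') BETWEEN 'startDate' AND 'endDate'
        GROUP  BY GroupIdentifier;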

    Read the article

  • Group and count in Rails

    - by alamodey
    I have this bit of code and I get an empty object:

        @results = PollRoles.find(
          :all,
          :select => 'option_id, count(*) count',
          :group => 'option_id',
          :conditions => ["poll_id = ?", @poll.id])

    Is this the correct way of writing the query? I want a collection of records that have an option id and the number of times that option id is found in the PollRoles model. EDIT: This is how I'm iterating through the results:

        <% @results.each do |result| %>
          <% @option = Option.find_by_id(result.option_id) %>
          <%= @option.question %>
          <%= result.count %>
        <% end %>
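    For reference, the SQL that this call is aiming to generate looks roughly like the sketch below (the poll_roles table name follows Rails naming conventions and is an assumption). Running it directly against the database shows whether the empty result comes from the data or from the Rails side.

        SELECT option_id, COUNT(*) AS count
        FROM   poll_roles
        WHERE  poll_id = 1    -- substitute the actual @poll.id
        GROUP  BY option_id;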

    Read the article

  • SQL Join query, getting ManagerName

    - by user279521
    I have a tblEmployeeProfile and a tblPersonnel. tblPersonnel is an HR table that consists of all employees in the company; tblEmployeeProfile contains details about an employee's position.

        tblPersonnel.PersonnelID
        tblPersonnel.FirstName
        tblPersonnel.MiddleName
        tblPersonnel.LastName
        tblPersonnel.PhoneNumber
        tblPersonnel.Email

        tblEmployeeProfile.EmployeeID
        tblEmployeeProfile.ManagerID
        tblEmployeeProfile.DepartmentID
        tblEmployeeProfile.JobCategoryID
        tblEmployeeProfile.SalaryID

    I want to return a record with the following fields: EmployeeID, FirstName, MiddleName, LastName, Email, ManagerFullName, where EmployeeID = @EmployeeID and tblEmployeeProfile.ManagerID = tblPersonnel.PersonnelID. I can't seem to get the query right for getting the ManagerFullName.
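    A hedged sketch of the usual shape of this query (it assumes tblEmployeeProfile.EmployeeID matches tblPersonnel.PersonnelID, which the question implies but does not state): join tblPersonnel twice, once for the employee and once, under an alias, for the manager.

        SELECT e.EmployeeID,
               p.FirstName,
               p.MiddleName,
               p.LastName,
               p.Email,
               m.FirstName + ' ' + m.LastName AS ManagerFullName
        FROM   tblEmployeeProfile e
               INNER JOIN tblPersonnel p ON p.PersonnelID = e.EmployeeID   -- the employee
               LEFT JOIN  tblPersonnel m ON m.PersonnelID = e.ManagerID    -- the manager
        WHERE  e.EmployeeID = @EmployeeID;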

    Read the article

  • Store database, good pattern for simultaneous access

    - by dygi
    I am kinda new to database design, so I'm asking for some advice or some kind of good pattern. The situation: there is one database, a few tables, and many users. How should I design the database, and/or which types of queries should I use, to make it work when users can interact with the database simultaneously? I mean, they have access to and can change the same set of data. I was thinking about transactions, but I am not sure if that is the right / good / only solution. I would appreciate some Google keywords too.
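    Since the question mentions transactions, here is a minimal sketch of the idea (generic SQL; the accounts table is purely hypothetical): statements grouped between a begin and a commit either all take effect or none do, and the database isolates concurrent users who touch the same rows. Useful search keywords in this area are transactions, isolation levels, optimistic vs. pessimistic locking, and concurrency control.

        BEGIN;                                                    -- start the transaction
        UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- both changes succeed
        UPDATE accounts SET balance = balance + 100 WHERE id = 2; -- or neither does
        COMMIT;                                                   -- make them visible to everyone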

    Read the article

  • Tricky SQL query - need to get time frames

    - by Andrew
    I have stumbled upon a problem: I need a query which will produce a list of speeding time frames. Here is the example data:

        [idgps_unit_location]  [dt]              [idgps_unit]  [lat]  [long]  [speed_kmh]
        26                     10/18/2012 18:53  2             47     56      30
        27                     10/18/2012 18:53  2             49     58      31
        28                     10/18/2012 18:53  2             28     37      15
        29                     10/18/2012 18:54  2             56     65      33
        30                     10/18/2012 18:54  2             152    161     73
        31                     10/18/2012 18:55  2             134    143     64
        32                     10/18/2012 18:56  2             22     31      12
        36                     10/18/2012 18:59  2             98     107     47
        37                     10/18/2012 18:59  2             122    131     58
        38                     10/18/2012 18:59  2             91     100     44
        39                     10/18/2012 19:00  2             190    199     98
        40                     10/18/2012 19:01  2             194    203     101
        41                     10/18/2012 19:02  2             182    191     91
        42                     10/18/2012 19:03  2             162    171     78
        43                     10/18/2012 19:03  2             174    183     83
        44                     10/18/2012 19:04  2             170    179     81
        45                     10/18/2012 19:05  2             189    198     97
        46                     10/18/2012 19:06  2             20     29      10
        47                     10/18/2012 19:07  2             158    167     76
        48                     10/18/2012 19:08  2             135    144     64
        49                     10/18/2012 19:08  2             166    175     79
        50                     10/18/2012 19:09  2             9      18      5
        51                     10/18/2012 19:09  2             101    110     48
        52                     10/18/2012 19:09  2             10     19      7
        53                     10/18/2012 19:10  2             32     41      20
        54                     10/18/2012 19:10  1             54     63      85
        55                     10/19/2012 19:11  2             55     64      50

    I need a query that converts this table into the following report, which shows the frames of time when the speed was over 80:

        [idgps_unit]  [dt_start]        [lat_start]  [long_start]  [speed_start]  [dt_end]          [lat_end]  [long_end]  [speed_end]  [speed_average]
        2             10/18/2012 19:00  190          199           98             10/18/2012 19:02  182        191         91           96.66666667
        2             10/18/2012 19:03  174          183           83             10/18/2012 19:05  189        198         97           87
        1             10/18/2012 19:10  54           63            85             10/18/2012 19:10  54         63          85           85

    Now, what have I tried? I tried putting this into separate tables and queries and doing some joins... Nothing works and I am very frustrated... I am not even sure this can be done with a query. Asking for expert help!
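    A hedged gaps-and-islands sketch (window functions are assumed to be available, and gps_unit_location is an assumed table name based on the id column): consecutive over-the-limit rows per unit get the same group number, and each group collapses into one frame. The start/end coordinates and speeds can then be joined back in via the MIN/MAX ids or timestamps of each frame.

        WITH flagged AS (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY idgps_unit
                                      ORDER BY dt, idgps_unit_location)
                 - ROW_NUMBER() OVER (PARTITION BY idgps_unit,
                                                   CASE WHEN speed_kmh > 80 THEN 1 ELSE 0 END
                                      ORDER BY dt, idgps_unit_location) AS grp
            FROM gps_unit_location
        )
        SELECT idgps_unit,
               MIN(dt)              AS dt_start,
               MAX(dt)              AS dt_end,
               AVG(speed_kmh * 1.0) AS speed_average
        FROM   flagged
        WHERE  speed_kmh > 80
        GROUP  BY idgps_unit, grp
        ORDER  BY idgps_unit, dt_start;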

    Read the article
