Search Results

Search found 42428 results on 1698 pages for 'database query'.

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element.

    Example: '14 AUG 2008 14:23:019'

    What is the best method to only select the records for a particular day, ignoring the time part?

    Example (not safe, as it does not match the time part and returns no rows):

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM table1
        WHERE column_datetime = @p_date

    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME stuff in MSSQL is probably the topic I look up most in SQLBOL.

    Update: Clarified the example to be more specific.

    Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results).

    @Jorrit: WHERE (date > '20080813' AND date < '20080815') will return the 13th and the 14th.

    @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).
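
    A sketch of the usual fix, a half-open date range that catches every time of day including the last millisecond (reusing the question's table1 and column_datetime names; T-SQL assumed):

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        -- everything on the 14th, whatever the time part
        SELECT *
        FROM table1
        WHERE column_datetime >= @p_date
          AND column_datetime < DATEADD(day, 1, @p_date)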

  • archiving table records to another table by trigger (move daily table records to weekly table, every...

    - by sirvan
    I have written this trigger in MySQL 5:

        create trigger changeToWeeklly after insert on tbl_daily
        for each row
        begin
            insert into tbl_weeklly
                SELECT * FROM vehicleslocation v where v.recivedate < curdate();
            delete FROM tbl_daily where recivedate < curdate();
        end;

    I want to archive records by date: move yesterday's inserted records from the daily table to the weekly table, move last week's records from the weekly table to the monthly table, and delete those records from the previous table. This trigger raises the following error when an insert into the daily table occurs:

        "Can't update table 'tbl_daily' in stored function/trigger because it is already used by statement which invoked this stored function/trigger."

    Please help me solve the problem of archiving old data across the related tables (moving yesterday's inserted records to the weekly table). If there is a reliable solution, please tell me.
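
    A possible workaround, sketched: MySQL will not let a trigger modify the table that fired it, so the archiving can run on a schedule instead of inside the trigger. This assumes MySQL 5.1+ with the event scheduler enabled (SET GLOBAL event_scheduler = ON) and uses the asker's table names:

        -- runs once a day; in the mysql client the BEGIN...END body
        -- needs a temporary DELIMITER change
        CREATE EVENT archive_daily_records
        ON SCHEDULE EVERY 1 DAY
        DO
        BEGIN
            INSERT INTO tbl_weeklly
                SELECT * FROM tbl_daily WHERE recivedate < CURDATE();
            DELETE FROM tbl_daily WHERE recivedate < CURDATE();
        END;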

  • How can I speed up queries against tables I cannot add indexes to?

    - by RenderIn
    I access several tables remotely via DB Link. They are very normalized and the data in each is effective-dated. Of the millions of records in each table, only a subset of ~50k are current records. The tables are internally managed by a commercial product that will throw a huge fit if I add indexes or make alterations to its tables in any way. What are my options for speeding up access to these tables?
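
    The mention of a DB link suggests Oracle, so one option is a local snapshot of just the ~50k current rows, which you are free to index; the commercial product's tables are never touched. A sketch only - the link name, table, key and effective-date filter are all hypothetical:

        -- local, indexable copy of the current records
        CREATE TABLE current_snapshot AS
            SELECT *
            FROM remote_table@my_db_link
            WHERE effective_end_date IS NULL;

        CREATE INDEX ix_current_snapshot ON current_snapshot (record_key);

        -- refresh from a scheduled job (truncate and re-insert), or use a
        -- materialized view with periodic refresh to the same effect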

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
            (
             Cust INT NOT NULL
            ,Category VARCHAR(10) NOT NULL
            ,Income DECIMAL(10, 2) NOT NULL
            ,Expense DECIMAL(10, 2) NOT NULL
            ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT Cust
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Cust

        SELECT Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Category

        SELECT Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?
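
    One tactic, sketched (needs SQL Server 2008 or later for GROUPING SETS): compute every aggregation level in a single query, so each formula is written exactly once:

        SELECT Cust
              ,Category
              ,Profit = SUM(Income - Expense)
              ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY GROUPING SETS ((Cust), (Category), ())
        -- one row per Cust, one per Category, plus a grand-total row;
        -- NULL in Cust or Category marks the higher aggregation levels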

  • When NOT to use Cassandra?

    - by JimJim
    There has been a lot of talk related to Cassandra lately. Twitter, Digg, Facebook, etc. all use it. When does it make sense to use Cassandra, when does it not, and when should you use an RDBMS instead of Cassandra?

  • how to effectively modify index

    - by daedlus
    Hey everyone,

    Problem: I am looking for the right way to convert an index from clustered to non-clustered.

    Description: I have a table as below in a Sybase DB:

        dbo.UserLog
        Id | UserId | time | ....

    This is hash partitioned using UserId. Currently it has 2 indexes:

        UserId: non-clustered
        time: clustered

    This table has about 20 million records. I now want to make UserId the clustered index and time a non-clustered index. Is it correct to use ALTER INDEX to change from clustered to non-clustered, or do I drop the indexes and recreate them? Does the fact that UserId is used in hash partitioning have any implications for this? To me ALTER seems the way to go, but I have not yet tried it.
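
    For what it's worth, as far as I know Sybase ASE has no ALTER INDEX that flips an index between clustered and non-clustered, so the usual route is drop-and-recreate. A sketch only - the index names are hypothetical, and rebuilding a clustered index on 20 million partitioned rows needs a maintenance window:

        DROP INDEX UserLog.ix_time
        DROP INDEX UserLog.ix_userid

        CREATE CLUSTERED INDEX ix_userid ON dbo.UserLog (UserId)
        CREATE NONCLUSTERED INDEX ix_time ON dbo.UserLog (time)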

  • is it possible to have an SQLite database in a SQL Server field?

    - by Behrooz
    I think my question seems vague. I am trying to save user settings in SQL Server, but the problem can be expressed in these terms: "it needs 20 tables with circular dependencies, and I have enough tables to fill 3 database diagrams". So the best way my brain has come up with is to save each settings set as an SQLite database in a field, like this:

        Index | Name    | Data
        1     | Behrooz | *sqlite database here*
        2     | User1   | *sqlite database here*
        ...

    Is this the right way? Is it stupid? Should I create more tables instead of doing all this? Does it increase database fragmentation?
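
    If you do go this route, the serialized SQLite file would live in a binary column. A minimal sketch, assuming SQL Server 2005+ (names mirror the layout above):

        CREATE TABLE UserSettings
        (
            Id   INT IDENTITY(1,1) PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL,
            Data VARBINARY(MAX) NOT NULL  -- the entire SQLite file as bytes
        )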

  • Importing JSON data into MySQL?

    - by AP257
    Pretty much what the title says :) At the moment I'm using Python to turn the JSON data into a plain-text tab-separated file, and then mysqlimport to pull that into my MySQL tables. Anyone know a nicer / more direct way?
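
    For what it's worth, mysqlimport is a command-line wrapper around LOAD DATA INFILE, so the Python script could also feed MySQL directly. A sketch with a hypothetical file path and table name:

        LOAD DATA LOCAL INFILE '/tmp/records.tsv'
        INTO TABLE my_table
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\n';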

  • ASP.NET MVC 2: How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed.

    Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes.

    Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated.

    Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option.

        (from p in db.Products
         group p by p.ProductSize into g
         select new Category { PropertyType = g.Key, Count = g.Count() }).Distinct();

    Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.

  • What is best practice with SQLite and Android?

    - by PHP_Jedi
    What is considered "best practice" when executing queries on a SQLite DB within an Android app? Is it safe to run inserts, deletes and select queries from an AsyncTask's doInBackground? Or should I use the UI thread? I suppose that DB queries can be "heavy" and should not use the UI thread, as they can lock up the app, resulting in an ANR. If I have several AsyncTasks, should they share a connection or should they each open their own? Any best practices in this area on Android?

  • Versioned RDF store

    - by Mat
    Let me try rephrasing this: I am looking for a robust RDF store or library with the following features:

        - Named graphs, or some other form of reification.
        - Version tracking (probably at the named graph level).
        - Privacy between groups of users, either at named graph or triple level.
        - Human-readable data input and output, e.g. TriG parser and serialiser.

    I've played with Jena, Sesame, Boca, RDFLib, Redland and one or two others some time ago, but each had its problems. Have any improved in the above areas recently? Can anything else do what I want, or is RDF not yet ready for prime-time? Reading around the subject a bit more, I've found that:

        - Jena: nothing further.
        - Sesame: nothing further.
        - Boca does not appear to be maintained any more and seems only really designed for DB2. OpenAnzo, an open-source fork, appears more promising.
        - RDFLib: nothing further.
        - Redland: nothing further.
        - Talis Platform appears to support changesets (wiki page and reference in Kniblet Tutorial Part 5), but it's a hosted-only service. Still may look into it, though.
        - SemVersion sounded promising, but appears to be stale.

  • How to resolve this very heavy query that slows down the application?

    - by Juan Paredes
    Hi,

    We have a web application running in a production environment, and at some point the client complained about how slow the application got. When we checked what was going on with the application and the database, we discovered this "precious" query being executed by several users at the same time (thus inflicting an extremely high load on the database server):

        SELECT NULL AS table_cat,
               o.owner AS table_schem,
               o.object_name AS table_name,
               o.object_type AS table_type,
               NULL AS remarks
        FROM all_objects o
        WHERE o.owner LIKE :1 ESCAPE :"SYS_B_0"
          AND o.object_name LIKE :2 ESCAPE :"SYS_B_1"
          AND o.object_type IN (:"SYS_B_2", :"SYS_B_3")
        ORDER BY table_type, table_schem, table_name

    Our application does not execute this query; I believe it is a Hibernate internal query. I've found little information on why Hibernate issues this extremely heavy query, so any help is very much appreciated!

    The production environment: Red Hat Enterprise Linux 5.3 (Tikanga), JDK 1.5, OC4J web container (within Oracle Application Server), Oracle Database 10g Release 10.0.0.1, JDBC Driver for JDK 1.2 and 1.3, Hibernate version 3.2.6.ga.

    Thank you.
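
    To measure how much of the load that statement really causes before changing anything, one check on the Oracle side, sketched (requires access to the v$sql view; available in 10g):

        SELECT sql_id,
               executions,
               elapsed_time / 1000000 AS elapsed_seconds
        FROM v$sql
        WHERE sql_text LIKE 'SELECT NULL AS table_cat%'
        ORDER BY elapsed_time DESC;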

  • mysql - select pagination chunk that includes a given id

    - by sean smith
    ...before the pagination chunks have been determined. I know you can do this in multiple statements, but there must be a better way. My results are returned ordered by date, and I want to return the pagination chunk that contains a given id. So I could, for example, select the date of the given id and then select a chunk of results where the date is less than or greater than that date. That would work. But is there some native MySQL method of doing this sort of thing in one statement? It just seems reasonable to expect that we could ask for the X results in which a given id exists, if results are ordered by date.
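
    One single-statement way to locate the chunk, sketched with hypothetical names (a results table, a unique date ordering, 20 rows per page):

        -- which 1-based page holds the row with id 123?
        SELECT CEIL(COUNT(*) / 20) AS page
        FROM results
        WHERE date <= (SELECT date FROM results WHERE id = 123);

        -- the chunk itself is then: ... ORDER BY date LIMIT (page - 1) * 20, 20
        -- (LIMIT needs literal values, so compute the offset in the
        -- application or in a prepared statement)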

  • DbDataReader with DbTransactions

    - by Gustavo Paulillo
    Is it the wrong way, or a performance problem, to use DbDataReader combined with DbTransactions? An example of the code:

        public DbDataReader ExecuteReader()
        {
            try
            {
                if (this._baseConnection.State == ConnectionState.Closed)
                    this._baseConnection.Open();

                if (this._baseCommand.Transaction != null)
                    return this._baseCommand.ExecuteReader();

                return this._baseCommand.ExecuteReader(CommandBehavior.CloseConnection);
            }
            catch (Exception excp)
            {
                if (this._baseCommand.Transaction != null)
                    this._baseCommand.Transaction.Rollback();

                this._baseCommand.CommandText = string.Empty;
                this._baseConnection.Close();
                throw new Exception(excp.Message);
            }
        }

    Some methods call this operation, sometimes opening a DbTransaction. It's using DbConnection and DbCommand. The real problem is that in the production environment (around 5,000 accesses/day) the ADO operations start throwing exceptions.

  • Django models: Use multiple values as a key?

    - by Rosarch
    Here is a simple model:

        class TakingCourse(models.Model):
            course = models.ForeignKey(Course)
            term = models.ForeignKey(Term)

    Instead of Django creating a default primary key, I would like to use both course and term as the primary key - taken together, they uniquely identify a tuple. Is this allowed by Django?

    On a related note: I am trying to represent users taking courses in certain terms. Is there a better way to do this?

        class Course(models.Model):
            name = models.CharField(max_length=200)
            requiredFor = models.ManyToManyField(RequirementSubSet, blank=True)
            offeringSchool = models.ForeignKey(School)

            def __unicode__(self):
                return "%s at %s" % (self.name, self.offeringSchool)

        class MyUser(models.Model):
            user = models.ForeignKey(User, unique=True)
            takingReqSets = models.ManyToManyField(RequirementSet, blank=True)
            takingTerms = models.ManyToManyField(Term, blank=True)
            takingCourses = models.ManyToManyField(TakingCourse, blank=True)
            school = models.ForeignKey(School)

        class TakingCourse(models.Model):
            course = models.ForeignKey(Course)
            term = models.ForeignKey(Term)

        class Term(models.Model):
            school = models.ForeignKey(School)
            isPrimaryTerm = models.BooleanField()

  • Why is Magento so slow?

    - by mr-euro
    Is Magento usually so terribly slow? This is my first experience with it, and the admin panel simply takes ages to load and save changes. It is a default installation with the test data. The server it is hosted on serves other non-Magento sites super fast. What is it about the PHP code that Magento uses that makes it so slow, and what can be done to fix it?

  • adding one-time options to items

    - by rap-uvic
    Hello, I'm building an event registration site. For any given event, we'll have a handful of items to choose from, and I have a table for these items. For each event we might have special options for users. For example, for one of the events, new users get to buy an item which is not available to other users. This may not apply to all the events, and for other events we might have some other restriction on items. I will obviously be checking this programmatically on the application side. I would like, though, to set up a column containing a flag in the items table. But I don't find that feasible, because a condition may only apply to one particular event, and I don't want all future items to carry this column. What is a good approach to take in such a situation? Should I create a special "restrictions" table and just do a join? How would I handle this on the application side?
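
    A sketch of the "restrictions" table idea from the question - all names and the rule encoding are hypothetical; the point is that per-event rules live in rows, so no new columns are ever added to the items table:

        CREATE TABLE event_item_restrictions
        (
            event_id  INT NOT NULL,
            item_id   INT NOT NULL,
            rule_type VARCHAR(30) NOT NULL,  -- e.g. 'NEW_USERS_ONLY'
            PRIMARY KEY (event_id, item_id, rule_type)
        );

        -- items a user may see for one event: unrestricted items, plus
        -- restricted ones whose rule this user satisfies (placeholders
        -- stand in for the event id and the user's status)
        SELECT i.*
        FROM items i
        LEFT JOIN event_item_restrictions r
               ON r.event_id = 42 AND r.item_id = i.id
        WHERE r.rule_type IS NULL
           OR (r.rule_type = 'NEW_USERS_ONLY' AND @user_is_new = 1);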

  • does it make sense to use int instead of char or nvarchar for a discriminator column if I'm using i...

    - by Omu
    I have something like this:

        create table account
        (
            id int identity(1,1) primary key,
            usertype char(1) check(usertype in ('a', 'b')) not null,
            unique(id, usertype)
        )

        create table auser
        (
            id int primary key,
            usertype char(1) check(usertype = 'a') not null,
            foreign key (id, usertype) references account(id, usertype)
        )

        create table buser
        (
            ... same, just with b
        )

    The question is: if I use int instead of char(1), is it going to work faster/better?
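
    For comparison, a sketch of the same pattern with a numeric discriminator (1 and 2 standing in for 'a' and 'b'); the check and foreign-key mechanics are unchanged, so any difference comes down to key width and comparison cost, which for a 1-byte char versus an int is marginal:

        create table account
        (
            id int identity(1,1) primary key,
            usertype tinyint check(usertype in (1, 2)) not null,
            unique(id, usertype)
        )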

  • Why isn't "String or Binary data would be truncated" a more descriptive error?

    - by rwmnau
    To start: I understand what this error means - I'm not attempting to resolve an instance of it. This error is notoriously difficult to troubleshoot, because if you get it inserting a million rows into a table 100 columns wide, there's virtually no way to determine which column of which row is causing the error - you have to modify your process to insert one row at a time, and then see which one fails. That's a pain, to put it mildly. Is there any reason the error doesn't look more like this?

        String or Binary data would be truncated
        Error inserting value "Some 18 char value" into SomeTable.SomeColumn VARCHAR(10)

    That would make it a lot easier to find and correct the value, if not the table structure itself. If seeing the table data is a security concern, then maybe something generic, like giving the length of the attempted value and the name of the failing column?
