Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Best way to handle MySQL date for performance with thousands of users

    - by bitLost
    I am currently part of a team designing a site that will potentially have thousands of users performing a number of date-related searches. During the design phase we have been trying to determine what makes more sense for performance optimization. Should we store the datetime field as a MySQL DATETIME? Or should we break it up into a number of fields (year, month, day, hour, minute, ...)? The question is, with a large data set and a potentially large set of users, would we gain performance by breaking the datetime into multiple fields and avoiding reliance on MySQL date functions? Or is MySQL already optimized for this?
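
    A minimal sketch of the two layouts under discussion, assuming hypothetical MySQL tables named events_a and events_b (all names invented for illustration). The usual deciding factor is that a range predicate on a plain indexed DATETIME column can use the index, while wrapping the column in date functions cannot:

      -- Option A: a single indexed DATETIME column.
      CREATE TABLE events_a (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          created_at DATETIME NOT NULL,
          INDEX idx_created_at (created_at)
      );

      -- A range predicate on the raw column can use idx_created_at...
      SELECT id FROM events_a
      WHERE created_at >= '2010-01-01 00:00:00'
        AND created_at <  '2010-02-01 00:00:00';

      -- ...whereas wrapping the column in a function defeats the index:
      -- WHERE YEAR(created_at) = 2010 AND MONTH(created_at) = 1

      -- Option B: the split-field layout from the question.
      CREATE TABLE events_b (
          id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          yr SMALLINT NOT NULL,
          mo TINYINT  NOT NULL,
          dy TINYINT  NOT NULL,
          hr TINYINT  NOT NULL,
          mi TINYINT  NOT NULL,
          INDEX idx_ymd (yr, mo, dy)
      );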

    Read the article

  • SQL Database Schema Design For Large 3 Billion Relationship Database.

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem… Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that’s 1.5 million distinct Product-to-Part relationships …or as an equation… 30,000 (Products) X 50 (Parts) = 1.5 million Product-to-Parts records. …and if… Each Part can have up to 2000 finish options (a finish is a paint color). NOTE: Only one finish will be selected by a user at run-time. The 2000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records …or as an equation… 1.5 million (Parts) x 2,000 (Finishes) = 3 Billion Product-to-Part-to-Finishes records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part without 3 Billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!
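
    One schema sketch that avoids materializing the 3 billion rows, offered as an assumption rather than a definitive design (all table and column names here are invented): in practice many parts share the same allowable-finish list, so the lists can be factored into reusable finish groups that parts reference:

      -- Hypothetical T-SQL schema: each finish list is stored once per group,
      -- so part-to-finish pairs are never materialized row-by-row.
      CREATE TABLE FinishGroup (
          FinishGroupId INT IDENTITY PRIMARY KEY
      );

      CREATE TABLE FinishGroupFinish (     -- one row per (group, finish)
          FinishGroupId INT NOT NULL REFERENCES FinishGroup(FinishGroupId),
          FinishId      INT NOT NULL,
          PRIMARY KEY (FinishGroupId, FinishId)
      );

      CREATE TABLE ProductPart (           -- ~1.5M rows, one per product-part
          ProductId     INT NOT NULL,
          PartId        INT NOT NULL,
          FinishGroupId INT NOT NULL REFERENCES FinishGroup(FinishGroupId),
          PRIMARY KEY (ProductId, PartId)
      );

      -- All parts and allowable finishes for one product, via indexed joins:
      DECLARE @ProductId INT = 12345;
      SELECT pp.PartId, f.FinishId
      FROM ProductPart pp
      JOIN FinishGroupFinish f ON f.FinishGroupId = pp.FinishGroupId
      WHERE pp.ProductId = @ProductId;

    If every part's finish list really were unique, the 3 billion rows would be unavoidable, and read performance would then hinge on clustering the part-to-finish table on its part key.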

    Read the article

  • Login failed for user ''. The user is not associated with a trusted SQL Server connection

    - by Tony_Henrich
    My web service app on my Windows XP box is trying to log in to my SQL Server 2005 database on the same box. The machine is part of a domain. I am logged in to the domain and I am an admin on my machine. I am using Windows Authentication in my connection string, as in "Server=myServerAddress;Database=myDataBase;Trusted_Connection=True". SQL Server is configured for both types of authentication (mixed mode) and accepts remote connections over both the TCP and named pipes protocols. Integrated authentication is enabled in IIS, with and without anonymous access. 'Everyone' has the 'Access this computer from the network' right in Local Security Settings. ASPNET is a user in SQL Server and has access to the database; the user is mapped to the login. The app works fine for other developers, which means the app shouldn't need to change (it's not new code). So it seems it's my machine that has an issue. I am getting the error "Login failed for user ''. The user is not associated with a trusted SQL Server connection". Note the blank user name. Why am I getting this error when both the app and database are on my machine? I can use SQL Server authentication but don't want to. I can connect to the database using SSMS and my Windows credentials. It might be related to setspn, Kerberos, delegation, or AD. I am not sure what further checks to make.
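
    As a hedged first check (a quick sanity query, not a diagnosis), the server-side authentication mode can be confirmed from any working connection, which tests the literal reading of the error message:

      -- Returns 1 if the instance accepts Windows authentication only,
      -- 0 if it is in mixed mode.
      SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;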

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file: Data[] data = { /* my data set */ }; The data set is about 90 KB, or roughly 13,000 elements. I was wondering if there's any downside to doing this, as opposed to loading it in from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.

    Read the article

  • PHP include(): File size & performance

    - by Tom
    An inexperienced PHP question: I've got a PHP script file that I need to include on different pages lots of times in lots of places. I have the option of either breaking the included file down into several smaller files and including these on an as-needed basis... OR... I could just keep it all together in a single PHP file. I'm wondering if there's any performance impact of using a larger vs. smaller file for include() in this context? For example, is there any performance difference between a 200KB file and a 20KB file? Thank you.

    Read the article

  • Trying to exclude _vti_* paths in an MSIDXS SQL SELECT statement

    - by Catdirt
    Hi, using a C# ASP.NET page, I'm building a SQL query string to run against an Index Server catalog: string SQL = "SELECT doctitle, vpath, Path, Write, Size, Rank "; SQL += "FROM \"" + strCatalog + "\"..SCOPE() "; SQL += "WHERE"; SQL += " CONTAINS(Contents, '" + strQP + "') "; SQL += "AND NOT CONTAINS(Path, '\"_vti_\"') "; SQL += "AND NOT CONTAINS(FileName, '\".ascx\" OR \".config\" OR \".css\"') "; SQL += "ORDER BY Rank DESC"; This seems to work fine, except it returns results from the _vti_ directories, which I am trying to avoid. Edit: I described the problem without actually asking the question, so to be technical: How can I search the index and have it not return files from the _vti_ folders? Switching to an ixsso query object is possible, but I'd rather avoid it for this particular instance.

    Read the article

  • SQL Server 2012 RTM Cumulative Update #10 is available!

    - by AaronBertrand
    The SQL Server team has released CU #10 for SQL Server 2012 RTM. KB article: http://support.microsoft.com/kb/2891666 Build # is 11.0.2420 This build has 4 fixes. For most customers, this cumulative update hardly justifies the download, never mind the patching and regression testing, at least IMHO. Of the four fixes, two involve SSAS, one involves SSRS, and one involves the Database Engine Tuning Advisor. Relevant for builds 11.0.2100 -> 11.0.2419. Do not attempt to install on SQL Server 2012 SP1 (any...(read more)

    Read the article

  • SQL Server 2012 Service Pack 1 Cumulative Update #11 is available!

    - by AaronBertrand
    The SQL Server team has released SQL Server 2012 SP1 Cumulative Update #11. This cumulative update includes a fix for the online index rebuild corruption issue I discussed recently on SQLPerformance.com. KB Article: KB #2975396 Build # is 11.0.3449 Currently there are 32 public fixes listed. Relevant for builds 11.0.3000 -> 11.0.3448. Do not attempt to install on SQL Server 2012 RTM (any build < 11.0.3000) or any other major version. If you are on a different branch, see this blog...(read more)

    Read the article

  • T-SQL Tuesday: What kind of Bookmark are you using?

    - by Kalen Delaney
    I’m glad there is no minimum length requirement for T-SQL Tuesday blog posts, because this one will be short. I was in the classroom for almost 11 hours today, and I need to be back tomorrow morning at 7:30. Way long ago, back in SQL 2000 (or was it earlier?), when a query indicated that SQL Server was going to use a nonclustered index to get row pointers, and then look up those rows in the underlying table, the plan just had a very linear look to it. The operator that indicated going from the nonclustered...(read more)

    Read the article

  • Intermittent Connection Issues to SQL Server 2008 R2 RTM

    - by Chandan Jha
    The problem I am facing is a very complex one, and in spite of trying to find a root cause, I am standing in the same place after 2 months with just bits and pieces of information. Here is the scenario: there is a Windows 2003 server which uses a system DSN ODBC connection. I looked into the driver properties and they are as follows: Name: SQL Server; Version: 2000.86.3959.00; File: SQLSRV32.DLL. Now, this system DSN has been configured with TCP\IP in Network Libraries and 'determine port dynamically' is checked. Now, let's come to the database destination. It is hosted on Windows 2008 running SQL Server 2008 R2 RTM, 64-bit. Now, I will give you an overview of the events that happen and whatever troubleshooting I could perform: I get an email saying 'blah blah' failed, and the only message their application gets is 'cannot connect to database'. I go to the SQL Server logs and find the following information: Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: 10.0.0.xx Error: 18456, Severity: 14, State: 58.] A quick search shows that this error may come up when a SQL Server is configured for Windows authentication only, but that's not true here: we have mixed mode, and the connection issue is intermittent. This SQL Server is configured to run under the Local System account, but since we use only SQL Server accounts to connect to it, there should not be any Kerberos errors. When I run a Profiler trace and look only at 'existing connections', I see a lot of them coming from my client server displaying the SQL user, but NO hostname is shown. The TextData field shows TCP\IP information along with some ARITHABORT and ANSI_NULLS settings. Now, I tried looking into the connectivity ring buffer using the following:

      SELECT CAST(record AS XML)
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY'

    One sample output is:

      <ConnectivityTraceRecord> <RecordType>Error</RecordType> <RecordSource>Tds</RecordSource> <Spid>118</Spid> <SniConnectionId>5124905D-D1EC-460E-AD78-201050B78C67</SniConnectionId> <OSError>0</OSError> <SniConsumerError>18452</SniConsumerError> <SniProvider>7</SniProvider> <State>1</State> <RemoteHost>10.0.0.21</RemoteHost> <RemotePort>5008</RemotePort> <LocalHost>10.1.0.38</LocalHost> <LocalPort>1433</LocalPort> <RecordTime>6/6/2012 21:14:57.527</RecordTime> <TdsBuffersInformation> <TdsInputBufferError>0</TdsInputBufferError> <TdsOutputBufferError>0</TdsOutputBufferError> <TdsInputBufferBytes>120</TdsInputBufferBytes> </TdsBuffersInformation> <TdsDisconnectFlags> <PhysicalConnectionIsKilled>0</PhysicalConnectionIsKilled> <DisconnectDueToReadError>0</DisconnectDueToReadError> <NetworkErrorFoundInInputStream>0</NetworkErrorFoundInInputStream> <ErrorFoundBeforeLogin>0</ErrorFoundBeforeLogin> <SessionIsKilled>0</SessionIsKilled> <NormalDisconnect>0</NormalDisconnect> </TdsDisconnectFlags> </ConnectivityTraceRecord>
      <Stack> <frame id="0">0X000000000174C34B</frame> <frame id="1">0X0000000001748FDD</frame> <frame id="2">0X0000000002461001</frame> <frame id="3">0X0000000000C47E98</frame> <frame id="4">0X00000000008015AD</frame> <frame id="5">0X0000000000801492</frame> <frame id="6">0X00000000003CBBD8</frame> <frame id="7">0X00000000003CB8BA</frame> <frame id="8">0X00000000003CB6FF</frame> <frame id="9">0X00000000008E8FB6</frame> <frame id="10">0X00000000008E9175</frame> <frame id="11">0X00000000008E9839</frame> <frame id="12">0X00000000008E9502</frame> <frame id="13">0X0000000074E437D7</frame> <frame id="14">0X0000000074E43894</frame> <frame id="15">0X00000000775A652D</frame>

    Somehow all the errors show error number 18452, whereas I never found that error in my SQL logs, where I see only 18456. I am just stuck at a dead end because this connection issue appears intermittently. Sorry for the long question, but I hope that if you read this, you can see that I tried a lot at my end before giving up.

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues:

    Step 5: Update statistics
    It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats. The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database. However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics object of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from SSMS.

    Step 6: Set database options
    You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or Management Studio, since you can change the properties yourself. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database's status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page's contents and writes it to the page header before storing it on disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header. Torn page detection works in much the same way, in that it stores a bit in the page header for every 512-byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header. This may not be something you would want to set unless there is a very specific reason. Microsoft suggests using the CHECKSUM page verify option as this offers more protection.

    Step 7: Map database users to logins
    A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. Depending on what you want to do, you may be using fewer than four, though. The first parameter, @Action, can take three values. When you specify @Action = 'Report', the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be 'Update_One'. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called "bob", there is already a SQL Server login with the same name, and you want to map the user to the login, you will execute a query like the following: sp_change_users_login @Action = 'Update_One', @UserNamePattern = 'bob', @LoginName = 'bob'. If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login, and the value of the @Action parameter will be 'Auto_Fix'. If the login already exists, it will be automatically mapped to the user account. Unfortunately, the sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server; you will need to follow a manual process to re-map those database user accounts. Continues…
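
    For convenience, here is a hedged consolidation of Steps 5 and 6 as a runnable T-SQL sketch (MyDatabase is a placeholder name; adjust the options to your environment):

      USE MyDatabase;
      GO
      -- Step 5: refresh statistics and make sure the auto-statistics options are on.
      EXEC sp_updatestats;
      ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
      ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
      GO
      -- Step 6: reopen the database for general use and enable checksum page verification.
      ALTER DATABASE MyDatabase SET READ_WRITE;
      ALTER DATABASE MyDatabase SET MULTI_USER;
      ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;
      GO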

    Read the article

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads up to those in the area that I will be speaking at the (TriPASS) Raleigh SQL Server user group on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well configured storage sub-system to perform at its peak but unfortunately this is one of the most neglected or mis-configured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

    Read the article

  • How do I create a foreign key in SQL Server?

    - by mmattax
    I have never "hand coded" creation code for SQL Server and foreign key deceleration is seemingly different from SQL Server and Postgres...here is my sql so far: drop table exams; drop table question_bank; drop table anwser_bank; create table exams ( exam_id uniqueidentifier primary key, exam_name varchar(50), ); create table question_bank ( question_id uniqueidentifier primary key, question_exam_id uniqueidentifier not null, question_text varchar(1024) not null, question_point_value decimal, constraint question_exam_id foreign key references exams(exam_id) ); create table anwser_bank ( anwser_id uniqueidentifier primary key, anwser_question_id uniqueidentifier, anwser_text varchar(1024), anwser_is_correct bit ); when I run the query I get this error: Msg 8139, Level 16, State 0, Line 9 Number of referencing columns in foreign key differs from number of referenced columns, table 'question_bank'. Can you spot the error? thanks.

    Read the article

  • Free eBook "Troubleshooting SQL Server: A Guide for the Accidental DBA"

    - by TATWORTH
    "SQL Server-related performance problems come up regularly and diagnosing and solving them can be difficult and time consuming. Read SQL Server MVP Jonathan Kehayias’ Troubleshooting SQL Server: A Guide for the Accidental DBA for descriptions of the most common issues and practical solutions to fix them quickly and accurately." Please go to http://www.red-gate.com/products/dba/sql-monitor/entrypage/tame-unruly-sql-servers-ebook RedGate produce some superb tools for SQL Server. Jonathan's book is excellent - I commend it to all SQL DBA and developers.

    Read the article

  • What should the SQL keyword "ISABOUT" [deprecated?] be replaced with?

    - by Atomiton
    In MS SQL full-text search, I'm using ISABOUT in my queries. For example, this should return the top 10 ProductIDs (PK) with a RANK field from the ProductDetails table: SELECT * FROM CONTAINSTABLE( ProductDetails, *, ISABOUT("Nikon" WEIGHT (1.0), "Cameras" Weight(0.9)), 10 ) However, according to the SQL documentation, ISABOUT is deprecated. So, I have two questions: What is ISABOUT being replaced with? Do I even NEED any extra SQL command there? (IOW, would just putting the search phrase 'Nikon Cameras' be better?) What I was originally trying to accomplish here was to weight the first word the highest, then the second word lower, and keep descending to 0.5, where I would just rank the remaining words at 0.5. My logic (and perhaps it's flawed) was that people's most relevant search words usually happen near the beginning of a phrase (in English). Am I going about this the wrong way? Is there a better way? Am I asking too many questions? (^_^) Thanks all for your time...

    Read the article

  • Possible to view T-SQL syntax of a stored proc-based SqlCommand?

    - by mconnley
    Hello! I was wondering if anybody knows of a way to retrieve the actual T-SQL that is to be executed by a SqlCommand object (with a CommandType of StoredProcedure) before it executes... My scenario involves optionally saving DB operations to a file or MSMQ before the command is actually executed. My assumption is that if you create a SqlCommand like the following: Using oCommand As New SqlCommand("sp_Foo") oCommand.CommandType = CommandType.StoredProcedure oCommand.Parameters.Add(New SqlParameter("@Param1", "value1")) oCommand.ExecuteNonQuery() End Using it winds up executing some T-SQL like: EXEC sp_Foo @Param1 = 'value1' Is that assumption correct? If so, is it possible to retrieve that actual T-SQL somehow? My goal here is to get the parsing, etc. benefits of using the SqlCommand class, since I'm going to be using it anyway. Is this possible? Am I going about this the wrong way? Thanks in advance for any input!

    Read the article

  • Tales of a corrupt SQL log

    - by guybarrette
    Warning: I’m a simple dev, not an all-powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database.  A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said: I have daily backups, so it’s just a matter of restoring a good copy of the database and log files.  Well, I found out that’s not exactly true.  You see, for this database, I have daily file backups, and these are not database backups created by SQL Server. So I restored a set of files from a couple of days ago, stopped the SQL service, copied the files over the bad ones, and restarted the service, only to find out that SQL doesn’t like it when you do that.  It suspects something fishy and marks the database as suspect.  A database marked as suspect can’t be accessed at all.  So now what? I searched throughout the tubes of the InterWeb and found that you can restore from a corrupted log file by creating a new database with the same name as the defective one, then copying the restored database file (the one with data) over the newly created one.  Sweet!  But you still end up with SQL marking the database as suspect; at least the newly created log is OK.  Well, not true: it’s not corrupted, but the lack of data makes it not OK for SQL, so you need to rebuild the log.  How can you do that when SQL blocks any action on the database?  First, you need to change the database status from suspect to emergency.  Then you need to set the database for single access only.  After that, you need to repair the log with DBCC and do the DBA dance.  If you dance long enough, SQL should repair the log file.  Now you need to set the access back to multi user.  Here’s the T-SQL script:

      use master
      GO
      EXEC sp_resetstatus 'MyDatabase'
      ALTER DATABASE MyDatabase SET EMERGENCY
      ALTER DATABASE MyDatabase SET SINGLE_USER
      DBCC CheckDB('MyDatabase')
      ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE
      DBCC CheckDB('MyDatabase', REPAIR_ALLOW_DATA_LOSS)
      ALTER DATABASE MyDatabase SET MULTI_USER

    So I guess it would have been a lot easier to restore a SQL backup.  I can’t really say, but the InterWeb seems to say so.  Anyway, lessons learned: Vive la différence: file backups are different from SQL backups. Don’t touch me: SQL doesn’t like it when you restore a file over a corrupted one. The more the merrier: you should do both SQL and file backups. WTF?: The InterWeb provides you with dozens of ways to deal with the problem, but many are SQL 2000 or SQL 2005 only, many are confusing, and many are written in strange dialects only DBAs understand.

    Read the article

  • How to figure the read/write ratio in Sql Server?

    - by Bill Paetzke
    How can I query the read/write ratio in SQL Server 2005? Are there any caveats I should be aware of? Perhaps it can be found in a DMV query, a standard report, a custom report (i.e. the Performance Dashboard), or by examining a SQL Profiler trace; I'm not sure exactly. Why do I care? I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users. One of the points I'm examining is database concurrency. SQL Server uses pessimistic concurrency by default--good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot) like Jeff Atwood did with StackOverflow.
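
    One common way to approximate this, sketched here as an assumption rather than the canonical answer, is the sys.dm_db_index_usage_stats DMV (available in SQL Server 2005; note the counters reset when the instance restarts):

      -- Reads vs. writes per database since the last instance restart.
      SELECT
          DB_NAME(database_id) AS database_name,
          SUM(user_seeks + user_scans + user_lookups) AS reads,
          SUM(user_updates)                           AS writes,
          CAST(SUM(user_seeks + user_scans + user_lookups) AS FLOAT)
              / NULLIF(SUM(user_updates), 0)          AS read_write_ratio
      FROM sys.dm_db_index_usage_stats
      GROUP BY database_id
      ORDER BY read_write_ratio DESC;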

    Read the article

  • How does SQL Server treat statements inside stored procedures with respect to transactions?

    - by Sleepless
    Hi All! Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRAN / COMMIT TRAN / ROLLBACK TRAN logic. How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit transaction for each statement? Or will there be one transaction for the whole stored procedure? Also, how could I have found this out on my own using T-SQL and/or SQL Server Management Studio? Thanks!
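
    By default each statement inside the procedure runs in its own autocommit transaction unless an explicit transaction wraps them. A minimal sketch for observing this yourself (the procedure name is invented) uses @@TRANCOUNT:

      CREATE PROCEDURE dbo.usp_TranDemo   -- throwaway demo procedure
      AS
      BEGIN
          SELECT @@TRANCOUNT AS OutsideTran;   -- 0 here: each statement autocommits on its own

          BEGIN TRAN;
          SELECT @@TRANCOUNT AS InsideTran;    -- 1 here: statements now share one transaction
          ROLLBACK TRAN;
      END;
      GO
      EXEC dbo.usp_TranDemo;
      DROP PROCEDURE dbo.usp_TranDemo;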

    Read the article

  • Entity Framework VS LINQ to SQL VS ADO.NET with stored procedures?

    - by BritishDeveloper
    How would you rate each of them in terms of:
    Performance
    Speed of development
    Neat, intuitive, maintainable code
    Flexibility
    Overall
    I like my SQL, and so have always been a die-hard fan of ADO.NET and stored procedures, but I recently had a play with LINQ to SQL and was blown away by how quickly I was writing out my data access layer, and have decided to spend some time really understanding either LINQ to SQL or EF... or neither? I just want to check that there isn't a great flaw in any of these technologies that would render my research time useless. E.g. performance is terrible, or it's cool for simple apps but can only take you so far.

    Read the article

  • How can I save the schema of a SQL Database to a file?

    - by Eric
    I'm writing a software application in C#.Net that connects to a SQL Server database. My C# project is under SVN version control, but I'd like to include my database schema in the SVN repository as well. An answer to a previous question of mine suggested storing the scripts to generate the database in version control. Is there a way to automatically generate these scripts from an existing database? I'm very new to SQL Server, but I noticed in management studio that the SQL commands to create a table can be generated automatically by right clicking on the table and clicking "Script Table As". Is there an equivalent command that would work with the entire database?

    Read the article

  • How do I get a count of events each day with SQL?

    - by upl8
    I have a table that looks like this:

      Timestamp         Event  User
      ================  =====  =====
      1/1/2010 1:00 PM  100    John
      1/1/2010 1:00 PM  103    Mark
      1/2/2010 2:00 PM  100    John
      1/2/2010 2:05 PM  100    Bill
      1/2/2010 2:10 PM  103    Frank

    I want to write a query that shows the events for each day and a count for those events. Something like:

      Date      Event  EventCount
      ========  =====  ==========
      1/1/2010  100    1
      1/1/2010  103    1
      1/2/2010  100    2
      1/2/2010  103    1

    The database is SQL Server Compact, so it doesn't support all the features of the full SQL Server. The query I have written so far is:

      SELECT DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0) as Date, Event, Count(Event) as EventCount
      FROM Log
      GROUP BY Timestamp, Event

    This almost works, but EventCount is always 1. How can I get SQL Server to return the correct counts? All fields are mandatory.
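
    The likely culprit is the GROUP BY clause: it groups on the raw Timestamp, so every distinct time forms its own group and the count stays at 1. A hedged sketch of a fix that truncates the timestamp in a derived table first (which also sidesteps any restriction SQL Server Compact may place on expressions in GROUP BY):

      SELECT d.[Date], d.Event, COUNT(d.Event) AS EventCount
      FROM (SELECT DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0) AS [Date], Event
            FROM Log) AS d
      GROUP BY d.[Date], d.Event;   -- group on the truncated date, not the raw Timestamp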

    Read the article

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005. Version A:

      SELECT otherattributes.* FROM listcontacts JOIN otherattributes
      ON listcontacts.contactId = otherattributes.contactId
      WHERE listcontacts.listid = 1234 ORDER BY name ASC

    Version B:

      DECLARE @Id AS INT;
      SET @Id = 1234;
      SELECT otherattributes.* FROM listcontacts JOIN otherattributes
      ON listcontacts.contactId = otherattributes.contactId
      WHERE listcontacts.listid = @Id ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15s; version B on average takes 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server Profiler:

      EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A. Thanks in advance.
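
    For context, this symptom is commonly attributed to parameter sniffing: sp_executesql (and the literal in version A) lets the optimizer compile a plan for the specific value 1234, while the local variable in version B is opaque, so the optimizer falls back to a generic density estimate, which here happens to produce the better plan. A hedged sketch of the classic SQL Server 2005-era workaround (the wrapper procedure and its name are invented), masking the parameter behind a local variable:

      CREATE PROCEDURE dbo.usp_GetListContacts   -- hypothetical wrapper
          @id INT
      AS
      BEGIN
          DECLARE @localId INT;
          SET @localId = @id;   -- hides the sniffed value; optimizer uses a generic estimate

          SELECT otherattributes.*
          FROM listcontacts
          JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
          WHERE listcontacts.listid = @localId
          ORDER BY name ASC;
      END;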

    Read the article

  • Is there a combination of "LIKE" and "IN" in SQL?

    - by Techpriester
    Hi folks. In SQL I (sadly) often have to use "LIKE" conditions due to databases that violate nearly every rule of normalization. I can't change that right now. But that's irrelevant to the question. Further, I often use conditions like WHERE something in (1,1,2,3,5,8,13,21) for better readability and flexibility of my SQL statements. Is there any possible way to combine these two things without writing complicated sub-selects? I want something as easy as WHERE something LIKE ('bla%', '%foo%', 'batz%') instead of WHERE something LIKE 'bla%' OR something LIKE '%foo%' OR something LIKE 'batz%' I'm working with MS SQL Server and Oracle here, but I'm interested in whether this is possible in any RDBMS at all.
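
    There is no LIKE ('a%', 'b%') shorthand in standard SQL, but one hedged workaround (t and something stand in for the question's hypothetical table and column) is to keep the patterns in a derived table and test with EXISTS, so adding a pattern is one more row rather than another OR clause. Shown in T-SQL; Oracle would need FROM dual in each branch:

      SELECT t.something
      FROM t
      WHERE EXISTS (
          SELECT 1
          FROM (SELECT 'bla%'  AS pattern UNION ALL
                SELECT '%foo%'            UNION ALL
                SELECT 'batz%') AS p
          WHERE t.something LIKE p.pattern
      );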

    Read the article
