Search Results

Search found 37012 results on 1481 pages for 'sql query'.

Page 314 of 1481

  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for me is...Reporting Services. I can't pick just one thing, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2: the Map Wizard for spatial data (The World is But a Stage); pagination features (I've Got Your Page Number); lookup functions (Look Up, Look Down, Look All Around - Part I, Part II, Part III); the Test Connection button (Testing, Testing 1-2-3); and conditional formatting based on render format, i.e. RenderFormat (As You Like It). I also wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform and the relational engine.

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature, mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone) Supriya Ananth, one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows has been widely desired, but was not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following: logically partition and order the data using the PARTITION BY and ORDER BY clauses; use regular-expression syntax in the PATTERN clause to define the patterns of rows to seek - these patterns are a powerful and expressive feature, applied to the pattern variables you define; specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause; and define measures, which are expressions usable in the MEASURES clause of the SQL query. For more information and to register for this exciting webcast please visit the OLL Live website: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note - if the above link does not work, go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the Logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
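
    For a flavour of the syntax the webcast covers, here is a minimal sketch modelled on Oracle's documented "V-shape" stock example; the ticker table and its columns (symbol, tstamp, price) are assumptions for illustration only:

        SELECT *
        FROM ticker
        MATCH_RECOGNIZE (
            PARTITION BY symbol           -- analyse each stock separately
            ORDER BY tstamp               -- the pattern runs over rows in time order
            MEASURES STRT.tstamp       AS start_tstamp,
                     LAST(DOWN.tstamp) AS bottom_tstamp,
                     LAST(UP.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST UP
            PATTERN (STRT DOWN+ UP+)      -- a fall followed by a rise (a "V")
            DEFINE DOWN AS DOWN.price < PREV(DOWN.price),
                   UP   AS UP.price   > PREV(UP.price)
        ) MR;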

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded, and I imagine too many tables might cause problems - or is this a valid method? I considered saving each player's best times in a string in a single table field, like so: level1:00.45;level2:00.43;level3:00.12 If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem, because the text would eventually hit the varchar length limit. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
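
    For comparison, the conventional alternative to per-track tables or delimited strings is a single normalized best-times table keyed by (track, player); the sketch below uses hypothetical names and types:

        CREATE TABLE BestTimes (
            track_id  INT          NOT NULL,
            player_id INT          NOT NULL,
            best_time DECIMAL(8,3) NOT NULL,  -- seconds
            PRIMARY KEY (track_id, player_id)
        );

        -- Improve a player's time on one track (assumes a row already exists;
        -- a player's first run would be an INSERT instead)
        UPDATE BestTimes
        SET best_time = 41.250
        WHERE track_id = 7 AND player_id = 42 AND best_time > 41.250;

        -- World record for a track
        SELECT MIN(best_time) FROM BestTimes WHERE track_id = 7;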

    Read the article

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems - usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause. I find it is rarely used, and I often wonder if this is due to a lack of awareness of the feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release - so it has been around for quite a while now - yet I often see scripts like this:

        SELECT somevalue FROM sometable WHERE keyval = XXX

        UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX

        -- now check the update has worked...
        SELECT somevalue FROM sometable WHERE keyval = XXX

    This can be rewritten to achieve the same end result using the OUTPUT clause:

        UPDATE sometable
        SET somevalue = newvalue
        OUTPUT deleted.somevalue  AS 'old value',
               inserted.somevalue AS 'new value'
        WHERE keyval = XXX

    The UPDATE statement with the OUTPUT clause also requires less IO: the three SQL statements have become one, so the table is read once rather than three times. If you are not aware of the power of the OUTPUT clause, I recommend you look it up in Books Online. And finally here's an example of the output produced using the Northwind database…
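
    The author's Northwind output did not survive the formatting here, so as a stand-in, below is a minimal self-contained sketch (a hypothetical temp table, not the original example) that can be run to see the rows the clause returns:

        CREATE TABLE #Prices (keyval INT PRIMARY KEY, somevalue MONEY);
        INSERT INTO #Prices (keyval, somevalue) VALUES (1, 10.00);

        UPDATE #Prices
        SET somevalue = 12.50
        OUTPUT deleted.somevalue  AS [old value],
               inserted.somevalue AS [new value]
        WHERE keyval = 1;
        -- returns one row: old value = 10.00, new value = 12.50

        DROP TABLE #Prices;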

    Read the article

  • Is it Considered Good SQL practice to use GUID to link multiple tables to same Id field?

    - by Mallow
    I want to link several tables to a many-to-many (m2m) table. One table would be called location, and this table would always be on one side of the m2m table. But I will have a list of several tables, for example: Cards, Photographs, Illustrations, Vectors. Would using GUIDs between these tables, to link them to a single column in another table, be considered 'good practice'? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to any issues? UPDATE I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by just having the table's initial as part of the id, so that the Cards table's id would be c0001, and Illustrations' would be I001. Regardless of this change, the question still stands.
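
    For reference, a GUID-keyed link table might look like the hypothetical MySQL sketch below. One caveat worth knowing: a foreign key in MySQL can reference only a single parent table, so the database cannot automatically cascade updates or deletes from several parent tables (Cards, Photographs, ...) through one shared GUID column; those cascades would have to be handled by triggers or application code.

        -- hypothetical many-to-many link table keyed by GUID
        CREATE TABLE location_item (
            location_id INT      NOT NULL,
            item_guid   CHAR(36) NOT NULL,  -- id of a row in Cards, Photographs, etc.
            PRIMARY KEY (location_id, item_guid),
            FOREIGN KEY (location_id) REFERENCES location (id)
                ON UPDATE CASCADE ON DELETE CASCADE
            -- no foreign key is possible on item_guid: it has no single parent table
        );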

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
    A stored procedure used in our production environment recently became so slow it caused the calling web service to begin timing out. When running the stored procedure in Query Analyzer, it took nearly 3 minutes to complete.   The stored procedure itself does little more than create a small bit of dynamic SQL which calls a view with a where clause at the end.   At first the thought was that the query used within the view needed to be optimized. The query is quite long, which made it easy to jump to this conclusion.   Fortunately, after I brought the issue to the attention of a coworker, they asked "is there a where clause, and if so, is there an index on the column(s) in it?" I had no idea and quickly said as much. A quick check on the table and column used in the where clause indicated there was indeed no index.   Before adding the index, and after admitting I am no SQL wiz, I checked the internet for info on the difference between clustered and non-clustered indexes. I found the following site quite helpful: OdeToCode. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!
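
    For reference, the fix is a one-line statement; the table and column names below are hypothetical stand-ins for the ones in the view's WHERE clause:

        CREATE NONCLUSTERED INDEX IX_SomeTable_SomeColumn
        ON dbo.SomeTable (SomeColumn);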

    Read the article

  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
    Steps involved: 1) Create a database backup script. 2) Create a scheduled task to run the backup script.

    1. Create a database backup script.
    a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website.
    b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database you want to back up.
    c) Right-click on the database and in the menu choose Tasks -> Back Up...
    d) This opens a window where you can choose your backup options; once you are happy with the options, click the "Script" button near the top and select the "Script Action to File" option.
    e) Save the file.

    2. Create a scheduled task to run the backup script.
    a) Open Windows Task Scheduler.
    b) Create a new task using the wizard; when asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe
    c) Two arguments need to be set: -S \SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT", where SERVER_INSTANCE_NAME is the name of the SQL Server instance that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql"
    d) Adjust the task to run at the desired times and you are done.
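
    The script generated in step 1(d) typically boils down to a single BACKUP DATABASE statement; a minimal sketch with a hypothetical database name and path:

        BACKUP DATABASE [MyDatabase]
        TO DISK = N'C:\Backups\MyDatabase.bak'
        WITH NOFORMAT, NOINIT,
             NAME = N'MyDatabase-Full Database Backup',
             SKIP, STATS = 10;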

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue. Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries: Simple: for a given XYZ postcode, what is the average house price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historic data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, features)? In addition to average price, the application should support other property info - maximum or minimum price, etc. - and trends (graphs) on a selected property attribute over a period of time. Hence, the queries should not enforce a search based on a primary key or a few fixed fields. In other words, queries can be: What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for X price (irrespective of location or house type)? The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interface, DW, or something else) this problem (dynamic queries on historic data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues. DB design - as I understand it, an RDBMS works well if you know the data model at design time; I expect the attributes of a property or other entities (user) to evolve quickly, so maintenance would be an issue, and as multiple users will execute queries at the same time, performance would be a bottleneck. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex (they are good, but using those general-purpose tools makes me feel like I'm doing assembly programming to solve my problem). Big Data solutions are for analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience of the back-end for property listing or similar portals.)
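
    For concreteness, the "simple" query and the 30-day trend query above translate directly to SQL against a hypothetical sales table - the schema below is an assumption for illustration only:

        -- hypothetical schema: sales(postcode, bedrooms, price, sold_on, ...)
        SELECT AVG(price) AS avg_price
        FROM sales
        WHERE postcode = 'XYZ'
          AND bedrooms = 3;

        -- price trend for 3-bedroom houses over the last 30 days
        SELECT CAST(sold_on AS DATE) AS sale_day, AVG(price) AS avg_price
        FROM sales
        WHERE bedrooms = 3
          AND sold_on >= DATEADD(DAY, -30, GETDATE())
        GROUP BY CAST(sold_on AS DATE)
        ORDER BY sale_day;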

    Read the article

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

    This paper provides guidance for customers who, prior to SQL Server 2012, deployed SQL Server Failover Clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also covers the important considerations you need to know in order to successfully migrate to an HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

    Read the article

  • SQL Azure table size

    - by user224564
    In MSSQL 2005, when I want to get the size of a table in MB, I use EXEC sp_spaceused 'table'. Is there any way to get the space used by a particular table in SQL Azure, using some query or API?
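
    One approach known to work where sp_spaceused is unavailable is to query the dynamic management views directly; pages are 8 KB each, so reserved_page_count * 8 / 1024 gives MB. A sketch:

        SELECT o.name AS table_name,
               SUM(p.reserved_page_count) * 8.0 / 1024 AS size_mb
        FROM sys.dm_db_partition_stats AS p
        JOIN sys.objects AS o ON o.object_id = p.object_id
        WHERE o.is_ms_shipped = 0
        GROUP BY o.name
        ORDER BY size_mb DESC;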

    Read the article

  • LINQ to SQL SOUNDEX - possible?

    - by Steve Hayes
    Hello, I have done a little bit of research on this and looked through a few articles, both here on Stack Overflow and in some blog posts, but haven't found an exact answer. I have also read that it may be possible using the 4.0 framework, but have yet to find any supporting evidence. So my question: is it possible to perform SOUNDEX via a LINQ to SQL query?
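
    One workaround that LINQ to SQL does support is mapping a database scalar function: wrap SOUNDEX in a UDF, add it to the DBML designer, and call the generated method inside a query. A sketch of such a UDF (the name is hypothetical):

        CREATE FUNCTION dbo.fn_Soundex (@input NVARCHAR(100))
        RETURNS CHAR(4)
        AS
        BEGIN
            -- SOUNDEX is plain T-SQL, so the function simply forwards to it
            RETURN SOUNDEX(@input);
        END;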

    Read the article

  • NullReferenceException when calling InsertOnSubmit in Linq to Sql.

    - by Charlie
    I'm trying to insert a new object into my database using LINQ to SQL, but I get a NullReferenceException when I call InsertOnSubmit() in the code snippet below. I'm passing in a derived class called FileUploadAudit, and all properties on the object are set.

        public void Save(Audit audit)
        {
            try
            {
                using (ULNDataClassesDataContext dataContext = this.Connection.GetContext())
                {
                    if (audit.AuditID > 0)
                    {
                        throw new RepositoryException(RepositoryExceptionCode.EntityAlreadyExists,
                            string.Format("An audit entry with ID {0} already exists and cannot be updated.", audit.AuditID));
                    }
                    dataContext.Audits.InsertOnSubmit(audit);
                    dataContext.SubmitChanges();
                }
            }
            catch (Exception ex)
            {
                if (ObjectFactory.GetInstance<IExceptionHandler>().HandleException(ex))
                {
                    throw;
                }
            }
        }

    Here's the stack trace:

        at System.Data.Linq.Table`1.InsertOnSubmit(TEntity entity)
        at XXXX.XXXX.Repository.AuditRepository.Save(Audit audit) C:\XXXX\AuditRepository.cs:line 25

    I've extended the Audit class like this:

        public partial class Audit
        {
            public Audit(string message, ULNComponent component) : this()
            {
                this.Message = message;
                this.DateTimeRecorded = DateTime.Now;
                this.SetComponent(component);
                this.ServerName = Environment.MachineName;
            }

            public bool IsError { get; set; }

            public void SetComponent(ULNComponent component)
            {
                this.Component = Enum.GetName(typeof(ULNComponent), component);
            }
        }

    And the derived FileUploadAudit looks like this:

        public class FileUploadAudit : Audit
        {
            public FileUploadAudit(string message, ULNComponent component, Guid fileGuid,
                string originalFilename, string physicalFilename, HttpPostedFileBase postedFile)
                : base(message, component)
            {
                this.FileGuid = fileGuid;
                this.OriginalFilename = originalFilename;
                this.PhysicalFileName = physicalFilename;
                this.PostedFile = postedFile;
                this.ValidationErrors = new List<string>();
            }

            public Guid FileGuid { get; set; }
            public string OriginalFilename { get; set; }
            public string PhysicalFileName { get; set; }
            public HttpPostedFileBase PostedFile { get; set; }
            public IList<string> ValidationErrors { get; set; }
        }

    Any ideas what the problem is? The closest question I could find to mine is here, but my partial Audit class calls the parameterless constructor in the generated code, and I still get the problem. UPDATE: This problem only occurs when I pass in the derived FileUploadAudit class; the Audit class works fine. The Audit class is generated as a LINQ to SQL class, and there are no properties mapped to database fields in the derived class.

    Read the article

  • OrderBy and Distinct using LINQ-to-Entities

    - by BlueRaja
    Here is my LINQ query:

        (from o in entities.MyTable
         orderby o.MyColumn
         select o.MyColumn).Distinct();

    Here is the result: {"a", "c", "b", "d"}

    Here is the generated SQL:

        SELECT [Distinct1].[MyColumn] AS [MyColumn]
        FROM (
            SELECT DISTINCT [Extent1].[MyColumn] AS [MyColumn]
            FROM [dbo].[MyTable] AS [Extent1]
        ) AS [Distinct1]

    Is this a bug? Where's my ordering, damnit?

    Read the article

  • Linq 2 Sql DateTime format to string yyyy-MM-dd

    - by Bogdan Maxim
    Basically, I need the equivalent of the T-SQL CONVERT(NVARCHAR(10), datevalue, 126). I've tried:

        from t in ctx.table select t.Date.ToString("yyyy-MM-dd")

    but it throws a not-supported exception, and:

        from t in ctx.table select "" + t.Date.Year + "-" + t.Date.Month + "-" + t.Date.Day

    but I don't think that's a usable solution, because I might need to be able to change the format. The only option I see is to use Convert.ToString(t.Date, FormatProvider), but I need a format provider, and I'm not sure that works either.

    Read the article

  • Unit of Measurement for Duration Column in Sql Profiler

    - by Mubashar Ahmad
    What is the unit of the Duration column in SQL Profiler? I thought it was milliseconds, but in the following profiler row I found it contradicting the start and end times:

        spid=163 duration=11310646 starttime=2010-04-06 17:45:24.480 endtime=2010-04-06 17:45:35.790 reads=152 writes=2 cpu=16 eventclass=12
        textdata= DELETE FROM dbo.[Icon] WHERE Id = 20087

    Read the article

  • Prevent recursive CTE visiting nodes multiple times

    - by bacar
    Consider the following simple DAG: 1->2->3->4. And a table, #bar, describing this (I'm using SQL Server 2005):

        parent_id  child_id
        1          2
        2          3
        3          4
        -- ... other edges, not connected to the subgraph above

    Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1-2 and 3-4. I want to use these to find the rest of my graph. I can write a recursive CTE as follows (I'm using terminology from MSDN):

        with foo (parent_id, child_id) as
        (
            -- anchor member that happens to select first and last edges:
            select parent_id, child_id from #bar where parent_id in (1, 3)
            union all
            -- recursive member:
            select #bar.* from #bar
            join foo on #bar.parent_id = foo.child_id
        )
        select parent_id, child_id from foo

    However, this results in edge 3-4 being selected twice:

        parent_id  child_id
        1          2
        3          4
        2          3
        3          4   -- 2nd appearance!

    How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in the "recursive member" part of the query, I could reference all the data retrieved by the recursive CTE so far, and supply a predicate excluding nodes already visited. However, I think I can only access data returned by the last iteration of the recursive member. This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion? Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied after all the (repeated) recursion is done, so I don't think it is an ideal solution. Edit - hainstech suggests stopping recursion by adding a predicate to exclude recursing down paths that were explicitly in the starting set, i.e. recurse only where foo.child_id not in (1,3). That works for the case above only because it is simple - all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case, where they may not. E.g., consider adding edges 1-4 and 4-5 to the above set: edge 4-5 will be captured twice, even with the suggested predicate. :(
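
    One commonly used workaround is to carry the path visited so far along in a string column and refuse to step onto a node already on the current path. Note the caveat: this prunes revisits (and cycles) within a single chain of recursion, but separate branches can still emit the same edge, so the final DISTINCT is still needed. A sketch:

        with foo (parent_id, child_id, path) as
        (
            select parent_id, child_id,
                   cast('.' + cast(parent_id as varchar(10)) + '.'
                            + cast(child_id  as varchar(10)) + '.' as varchar(max))
            from #bar where parent_id in (1, 3)
            union all
            select b.parent_id, b.child_id,
                   f.path + cast(b.child_id as varchar(10)) + '.'
            from #bar b
            join foo f on b.parent_id = f.child_id
            -- skip children already on this path:
            where f.path not like '%.' + cast(b.child_id as varchar(10)) + '.%'
        )
        select distinct parent_id, child_id from foo;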

    Read the article

  • C# export to excel from sql server

    - by Manish Gupta
    In my C# Windows application, I am exporting SQL Server data to Excel on a remote drive, but it is too slow. However, if I export the data to Excel on the local drive, it is fast. How can I reduce the export time when writing to the remote drive? Thanks in advance...

    Read the article

  • T-SQL: @@IDENTITY, SCOPE_IDENTITY(), OUTPUT and other methods of retrieving last identity

    - by Terrapin
    I have seen various methods used when retrieving the value of a primary key identity field after insert:

        declare @t table (
            id int identity primary key,
            somecol datetime default getdate()
        )
        insert into @t default values
        select SCOPE_IDENTITY() -- returns 1
        select @@IDENTITY       -- returns 1

    Returning a table of identities following insert:

        Create Table #Testing (
            id int identity,
            somedate datetime default getdate()
        )
        insert into #Testing
        output inserted.*
        default values

    What method is proper or better? Is the OUTPUT method scope-safe? The second code snippet was borrowed from SQL in the Wild.
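
    On the scope-safety question, the classic demonstration of why @@IDENTITY is not scope-safe (while SCOPE_IDENTITY() is) involves a trigger performing its own identity insert; the table names below are hypothetical:

        CREATE TABLE T1 (id INT IDENTITY(1, 1),   col DATETIME DEFAULT GETDATE());
        CREATE TABLE T2 (id INT IDENTITY(100, 1), col DATETIME DEFAULT GETDATE());
        GO
        CREATE TRIGGER trg_T1 ON T1 AFTER INSERT AS
            INSERT INTO T2 DEFAULT VALUES;
        GO
        INSERT INTO T1 DEFAULT VALUES;
        SELECT SCOPE_IDENTITY() AS scope_identity,  -- 1: the insert in this scope (T1)
               @@IDENTITY       AS last_identity;   -- 100: the trigger's insert into T2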

    Read the article

  • Concatenating Date Values - SQL Injection

    - by Kyle Rozendo
    Hi All, We currently receive date-part values as VARCHARs and then build a date from them. I want to confirm that this method would stop the possibility of SQL injection in this statement: select CONVERT(datetime, '2010' + '-' + '02' + '-' + '21' + ' ' + '15:11:38.990') Another note: the actual parameters passed to the stored proc are length-bound at (4, 2, 2, 10, 12), corresponding to the above. Thanks a ton, Kyle

    Read the article

  • Linq to SQL - Invalid attempt to call FieldCount when reader is closed

    - by Justin
    Hey, has anyone seen this error before when making a LINQ to SQL call? Invalid attempt to call FieldCount when reader is closed. Here's the code:

        public static MM GetDeviceConfiguration(long ipAddress)
        {
            string sprocName = ConfigurationBL.ReadSproc("ReadDeviceConfiguration");
            return db.ExecuteQuery<MM>("exec {0} @IPAddressNumber = {1}", sprocName, ipAddress).FirstOrDefault();
        }

    Thanks, Justin

    Read the article
