Search Results

Search found 85825 results on 3433 pages for 'sql data services'.


  • Why does a user have to enter "Profile" data to enter data into other tables?

    - by Greg McNulty
    EDIT: It appears the user has to enter some data for his profile, otherwise I get the error below. I guess if there is no profile data, the user cannot continue to enter data in other tables by default? I do not want to make entering user profile data a requirement for using the rest of the site's functionality, so how can I get around this? Currently I have been testing everything with the same user and everything has been working fine. However, when I created a new user for the very first time and tried to enter data into my custom table, I get the following error:

      The INSERT statement conflicted with the FOREIGN KEY constraint "FK_UserData_aspnet_Profile". The conflict occurred in database "C:\ISTATE\APP_DATA\ASPNETDB.MDF", table "dbo.aspnet_Profile", column 'UserId'. The statement has been terminated.

    Not sure why I am getting this error. I have the user controls set up in ASP.NET 3.5, but all I am using is my own table, or at least that is all I am aware of. I have a custom UserData table that includes the columns: id, UserProfileID, CL, LL, SL, DateTime (id is the auto-incremented int). The intent is that all users will add their data in this table and, as I mentioned above, it has been working fine for the original first user I created. However, when I created a new user I am getting this problem. Here is the code that updates the database:

      protected void Button1_Click(object sender, EventArgs e)
      {
          //connect to database
          MySqlConnection database = new MySqlConnection();
          database.CreateConn();
          //create command object
          Command = new SqlCommand(queryString, database.Connection);
          //add parameters. used to prevent sql injection
          Command.Parameters.Add("@UID", SqlDbType.UniqueIdentifier);
          Command.Parameters["@UID"].Value = Membership.GetUser().ProviderUserKey;
          Command.Parameters.Add("@CL", SqlDbType.Int);
          Command.Parameters["@CL"].Value = InCL.Text;
          Command.Parameters.Add("@LL", SqlDbType.Int);
          Command.Parameters["@LL"].Value = InLL.Text;
          Command.Parameters.Add("@SL", SqlDbType.Int);
          Command.Parameters["@SL"].Value = InSL.Text;
          Command.ExecuteNonQuery();
      }

    Source Error: Line 84: Command.ExecuteNonQuery();
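
    One way around the constraint, sketched below on the assumption that the standard ASP.NET membership/profile schema is in place and that at least one profile property is defined in web.config, is to force a row into aspnet_Profile as soon as the account is created, so FK_UserData_aspnet_Profile can always be satisfied without asking the user for anything:

      // Hypothetical sketch: touch the profile right after Membership.CreateUser so
      // dbo.aspnet_Profile gets a row for the new UserId. "SomeProperty" stands in
      // for any profile property already declared in web.config.
      ProfileBase profile = ProfileBase.Create(newUserName, true);
      profile.SetPropertyValue("SomeProperty", string.Empty);
      profile.Save();   // writes the row to dbo.aspnet_Profile

    An alternative is to point the UserData foreign key at dbo.aspnet_Users (which always has a row per user) instead of dbo.aspnet_Profile.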

    Read the article

  • How do you handle the fetchxml result data?

    - by Luke Baulch
    I have avoided working with fetchxml as I have been unsure of the best way to handle the result data after calling crmService.Fetch(fetchXml). In a couple of situations, I have used an XDocument with LINQ to retrieve the data from this data structure, such as:

      XDocument resultset = XDocument.Parse(_service.Fetch(fetchXml));
      if (resultset.Root == null || !resultset.Root.Elements("result").Any())
      {
          return;
      }
      foreach (var displayItem in resultset.Root.Elements("result")
                                           .Select(item => item.Element(displayAttributeName))
                                           .Distinct())
      {
          if (displayItem != null && displayItem.Value != null)
          {
              dropDownList.Items.Add(displayItem.Value);
          }
      }

    What is the best way to handle fetchxml result data, so that it can be easily used? Being able to pass these records into an ASP.NET datagrid, for example, would be quite useful.
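
    A sketch of one approach (not necessarily the best one): flatten the fetch XML into a DataTable, which binds directly to a GridView or any other ASP.NET data control. The helper name BuildTable is illustrative; the only assumption is the usual resultset/result element shape returned by Fetch.

      // Requires System.Data, System.Linq and System.Xml.Linq.
      private static DataTable BuildTable(string fetchXmlResult)
      {
          var table = new DataTable();
          XDocument resultset = XDocument.Parse(fetchXmlResult);
          if (resultset.Root == null) return table;

          var results = resultset.Root.Elements("result").ToList();

          // First pass: create one column per distinct attribute name.
          foreach (string name in results.SelectMany(r => r.Elements())
                                         .Select(e => e.Name.LocalName)
                                         .Distinct())
          {
              table.Columns.Add(name);
          }

          // Second pass: one DataRow per <result> element.
          foreach (XElement result in results)
          {
              DataRow row = table.NewRow();
              foreach (XElement attribute in result.Elements())
                  row[attribute.Name.LocalName] = attribute.Value;
              table.Rows.Add(row);
          }
          return table;
      }

      // Usage: grid.DataSource = BuildTable(_service.Fetch(fetchXml)); grid.DataBind();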

    Read the article

  • Debugging SQL Server Slowness: Same Database, Different Servers

    - by Craig Walker
    For a while now we've been having anecdotal slowness on our newly-minted (VMWare-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue. Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in the execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one. As best as I can tell, the problem lies in the environment/configuration of the servers (either operating system or SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing. What things should I be looking at? What tools can I use to investigate why this is happening?

    Read the article

  • POX return data from WCF Data Services

    - by keithwarren7
    I am using WCF Data Services (netfx4) to provide data sourced from SQL via EF. The standard OData mechanism is fine and JSON works as well, but I need a third option for generic POX (plain old XML). I have yet to come across a simple strategy or switch that lets me control this, but I am sure one must exist or that a workaround is available. Any ideas? Ideally I would like to be able to use something like the JSONP option, wherein I append 'format=JSON' to the URL; in this case 'format=pox' or 'POX=true' or something of that nature.

    Read the article

  • How to connect from Ruby to MS SQL Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration:

      Linux 64bit
      ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]
      important gems: dbd-odbc (0.2.4), dbi (0.4.1), ActiveRecord SQL Server adapter (as plugin), ruby-odbc 0.9996 (installed without any options)
      unixODBC is installed
      FreeTDS is installed

      cat /etc/odbcinst.ini
      [FreeTDS]
      Description = TDS driver (Sybase/MS SQL)
      Driver = /usr/lib/libtdsodbc.so
      Setup = /usr/lib/odbc/libtdsS.so
      CPTimeout =
      CPReuse =
      FileUsage = 1

    DSN:

      DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;
      or
      DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;

    I receive the following error:

      >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns})
      DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
        from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
        from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find an appropriate ODBC driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so, which is empty in the default Debian freetds-dev package, but I solved that by removing the broken package and installing from source. I will appreciate any thoughts! Thanks & Regards. Note: I'm able to connect using the same steps on Mac OS X 10.5.
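
    The IM002 error usually means the Driver Manager was handed a DSN name it cannot find, rather than a full connection string. A sketch of one way to rule that out (the DSN name and values here are placeholders): define a named DSN in unixODBC's odbc.ini and pass that name to the adapter.

      # /etc/odbc.ini (or ~/.odbc.ini) -- hypothetical entry
      [my_sqlserver]
      Driver      = FreeTDS
      Server      = XXXX
      Port        = 1433
      Database    = XXX
      TDS_Version = 8.0

    Then connect with "dsn"=>"my_sqlserver" (plus username/password options), and verify the DSN outside Ruby first with: isql my_sqlserver XXX XXXX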

    Read the article

  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a customized data import executable in .NET 3.5 which uses SqlBulkCopy to do faster inserts on large amounts of data. The app basically takes an input file, massages the data, and bulk uploads it into a SQL Server 2000 database. It was written by a consultant who was building it against a SQL Server 2008 database environment. Would that environment difference be causing this? SQL 2000 does have the bcp utility, which is what BulkCopy is based on. When we ran this, it triggered a deadlock error. Error details:

      Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    I've tried numerous ways to resolve it, like temporarily setting the connection string variable MultipleActiveResultSets=true, which wasn't ideal, but it still gives a deadlock error. I also made sure it wasn't a connection timeout problem. Here's the function. Any advice?

      /// <summary>
      /// Bulks the insert.
      /// </summary>
      public void BulkInsert(string destinationTableName, DataTable dataTable)
      {
          SqlBulkCopy bulkCopy;
          if (this.Transaction != null)
          {
              bulkCopy = new SqlBulkCopy
              (
                  this.Connection,
                  SqlBulkCopyOptions.TableLock,
                  this.Transaction
              );
          }
          else
          {
              bulkCopy = new SqlBulkCopy
              (
                  this.Connection.ConnectionString,
                  SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction
              );
          }

          bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID");
          bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID");
          bulkCopy.ColumnMappings.Add("AltCode", "AltCode");
          bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription");
          bulkCopy.ColumnMappings.Add("Fee", "Fee");
          bulkCopy.ColumnMappings.Add("Discount", "Discount");
          bulkCopy.ColumnMappings.Add("Comment", "Comment");
          bulkCopy.ColumnMappings.Add("Description", "Description");

          bulkCopy.BatchSize = dataTable.Rows.Count;
          bulkCopy.DestinationTableName = destinationTableName;
          bulkCopy.WriteToServer(dataTable);
          bulkCopy = null;
      }
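
    A sketch of one mitigation to try (an assumption, not a confirmed fix): drop the TableLock hint and send smaller batches, so each batch commits quickly and holds locks for a shorter time, which narrows the window in which a deadlock with concurrent readers or writers can occur.

      // Hypothetical variant of the insert path; the batch size of 500 is arbitrary.
      using (var bulkCopy = new SqlBulkCopy(this.Connection, SqlBulkCopyOptions.Default, this.Transaction))
      {
          bulkCopy.DestinationTableName = destinationTableName;
          bulkCopy.BatchSize = 500;          // many small batches instead of one huge one
          bulkCopy.BulkCopyTimeout = 0;      // no client-side timeout while the batches run
          foreach (DataColumn column in dataTable.Columns)
              bulkCopy.ColumnMappings.Add(column.ColumnName, column.ColumnName);
          bulkCopy.WriteToServer(dataTable);
      }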

    Read the article

  • SQL Server CLR Integration to achieve Encryption/Decryption

    - by Aakash
    I have a requirement to store data in encrypted form in database tables. I want to do it at the database level, but these are the problems I am facing: (a) the data type of the field should be varbinary; (b) encryption is not supported by Workgroup edition; (c) is it even possible to encrypt numeric fields? I want to access the encrypted data in tables for use in views and stored procedures for some processing, but due to the above problems I am not able to. Here is my environment:

      Development Platform - ASP.NET, .NET Framework 3.5, Visual Studio 2008
      Server Operating System - Windows Server 2008
      Database - SQL Server 2008 Workgroup edition

    I was also thinking of adopting a different approach to resolve this issue (yet to test its feasibility). I was wondering if I could create a CLR function (which could take parameters and encrypt/decrypt data using the cryptography types provided in the .NET Framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
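
    A minimal sketch of the CLR route, under the assumption that a symmetric key can be managed somewhere safer than the code itself (the zeroed key below is only for illustration):

      using System;
      using System.Data.SqlTypes;
      using System.Security.Cryptography;
      using System.Text;
      using Microsoft.SqlServer.Server;

      public static class CryptoFunctions
      {
          // Illustrative only: real code should load the key/IV from configuration or a key store.
          private static readonly byte[] Key = new byte[32]; // 256-bit AES key
          private static readonly byte[] IV  = new byte[16]; // 128-bit IV

          [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
          public static SqlBytes EncryptString(SqlString plainText)
          {
              if (plainText.IsNull) return SqlBytes.Null;
              var aes = new RijndaelManaged();
              using (ICryptoTransform encryptor = aes.CreateEncryptor(Key, IV))
              {
                  byte[] input = Encoding.Unicode.GetBytes(plainText.Value);
                  return new SqlBytes(encryptor.TransformFinalBlock(input, 0, input.Length));
              }
          }

          [SqlFunction(IsDeterministic = true, DataAccess = DataAccessKind.None)]
          public static SqlString DecryptString(SqlBytes cipherText)
          {
              if (cipherText.IsNull) return SqlString.Null;
              var aes = new RijndaelManaged();
              using (ICryptoTransform decryptor = aes.CreateDecryptor(Key, IV))
              {
                  byte[] output = decryptor.TransformFinalBlock(cipherText.Value, 0, (int)cipherText.Length);
                  return new SqlString(Encoding.Unicode.GetString(output));
              }
          }
      }

    Once the assembly is registered with CREATE ASSEMBLY and the functions with CREATE FUNCTION ... EXTERNAL NAME, views and stored procedures can call them like any other scalar function, with varbinary columns holding the ciphertext. Numeric values would have to be converted to strings (or bytes) before encryption and back after decryption.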

    Read the article

  • Generate Entity Data Model from Data Contract

    - by CSmooth.net
    I would like to find a fast way to convert a Data Contract to an Entity Data Model. Consider the following Data Contract:

      [DataContract]
      class PigeonHouse
      {
          [DataMember]
          public string housename;
          [DataMember]
          public List<Pigeon> pigeons;
      }

      [DataContract]
      class Pigeon
      {
          [DataMember]
          public string name;
          [DataMember]
          public int numberOfWings;
          [DataMember]
          public int age;
      }

    Is there an easy way to create an ADO.NET Entity Data Model from this code?

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.

    Read the article

  • Dynamic SQL To Dynamic LINQ in VB.NET with MS SQL Server 2008

    - by user337501
    I dread asking this question, because from what I've read so far I understand I'm going to have to cram a lot of new things into my head. In spite of all the similar questions (and the wide variety of answers) I thought I'd ask, as nothing I've read is tailored closely enough to what I need. I need to represent the following query using LINQ:

      DECLARE @PurchasedInventoryItemID Int = 2
      DECLARE @PurchasedInventorySectionID Int = 0
      DECLARE @PurchasedInventoryItem_PurchasingCategoryID Int = 3
      DECLARE @PurchasedInventorySection_PurchasingCategoryID Int = 0
      DECLARE @IsActive Bit = 1
      DECLARE @PropertyID Int = 2
      DECLARE @PropertyValue nvarchar(1000) = 'Granny Smith'
      --Property1, Property2, Property3 ...

      SELECT O.PurchasedInventoryObjectID, O.PurchasedInventoryObjectName, O.PurchasedInventoryConjunctionID,
             O.Summary, O.Count, O.PropertyCount, O.IsActive
      FROM tblPurchasedInventoryObject AS O
      INNER JOIN tblPurchasedInventoryConjunction AS C ON C.PurchasedInventoryConjunctionID = O.PurchasedInventoryConjunctionID
      INNER JOIN tblPurchasedInventoryItem AS I ON I.PurchasedInventoryItemID = C.PurchasedInventoryItemID
      INNER JOIN tblPurchasedInventorySection AS S ON S.PurchasedInventorySectionID = C.PurchasedInventorySectionID
      INNER JOIN tblPurchasedInventoryPropertyMap AS M ON M.PurchasedInventoryObjectID = O.PurchasedInventoryObjectID
      INNER JOIN tblPropertyValue AS V ON V.PropertyValueID = M.PropertyValueID
      WHERE I.PurchasedInventoryItemID = @PurchasedInventoryItemID
        AND S.PurchasedInventorySectionID = @PurchasedInventorySectionID
        AND I.PurchasingCategoryID = @PurchasedInventoryItem_PurchasingCategoryID
        AND S.PurchasingCategoryID = @PurchasedInventorySection_PurchasingCategoryID
        AND O.IsActive = @IsActive
        AND V.PropertyID = @PropertyID
        AND V.Value = @PropertyValue

    Now, I know that a query in .NET doesn't look like this; this is my test in the SQL design studio. Naturally VB.NET variables will be used in place of the SQL local variables. My problem is this: all of the conditions after WHERE are optional, in that a query might be made that uses one, some, all, or none of the conditions. V.PropertyID and V.Value can also appear any number of times. In VB.NET I can make this query easily enough by simply concatenating strings, and using a loop to append the V.PropertyID/V.Value conditions. I can also make a stored procedure in MS SQL, which is easy enough. However, I want to accomplish this using LINQ. If anyone could direct me, I would be most appreciative.
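
    One common pattern, sketched here in C# (the same composition works in VB.NET; the entity and navigation-property names below are assumptions about a LINQ to SQL or Entity Framework model generated from these tables): build the query step by step, appending a Where clause only for each filter that was actually supplied, and one extra Where per PropertyID/Value pair.

      IQueryable<PurchasedInventoryObject> query = db.PurchasedInventoryObjects;

      if (purchasedInventoryItemId.HasValue)
          query = query.Where(o => o.PurchasedInventoryConjunction.PurchasedInventoryItemID == purchasedInventoryItemId.Value);
      if (purchasedInventorySectionId.HasValue)
          query = query.Where(o => o.PurchasedInventoryConjunction.PurchasedInventorySectionID == purchasedInventorySectionId.Value);
      if (isActive.HasValue)
          query = query.Where(o => o.IsActive == isActive.Value);

      // Repeatable property filters: one Where per (PropertyID, Value) pair.
      foreach (var filter in propertyFilters)              // e.g. IEnumerable<KeyValuePair<int, string>>
      {
          int propertyId = filter.Key;                     // copy into locals so each lambda
          string propertyValue = filter.Value;             // captures its own pair
          query = query.Where(o => o.PurchasedInventoryPropertyMaps
              .Any(m => m.PropertyValue.PropertyID == propertyId && m.PropertyValue.Value == propertyValue));
      }

      var results = query.ToList();                        // SQL is generated and executed here

    Because each Where call only adds to the expression tree, the provider generates a single SQL statement containing just the conditions that were supplied, which is essentially the dynamic-SQL behaviour without the string concatenation.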

    Read the article

  • Embedded SQL in OO languages like Java

    - by Steve De Caux
    One of the things that annoys me when working with SQL in OO languages is having to define SQL statements in strings. When I used to work on IBM mainframes, the languages used an SQL preprocessor to parse SQL statements out of the native code, so the statements could be written in cleartext SQL without the obfuscation of strings. For instance, in COBOL there is an EXEC SQL ... END-EXEC syntax construct that allows pure SQL statements to be embedded in the COBOL code:

      <pure COBOL code, including assignment of a value to local variable HOSTVARIABLE>
      EXEC SQL
          SELECT COL_A, COL_B, COL_C
          INTO :COLA, :COLB, :COLC
          FROM TAB_A
          WHERE COL_D = :HOSTVARIABLE
      END-EXEC
      <more COBOL code; variables COLA, COLB, COLC have been set>

    This makes the SQL statement really easy to read and check for errors. Between the EXEC SQL ... END-EXEC tokens there are no constraints on indentation, line breaking, etc., so you can format the SQL statement according to taste. Note that this example is for a single-row select; when a multiple-row result set is expected, the coding is different (but still very easy to read). So, taking Java as an example: what made the "old COBOL" approach undesirable? Not only SQL, but system calls could be made much more readable with that approach. Let's call it the embedded foreign language preprocessor approach. Would an embedded foreign language preprocessor for SQL be useful to implement? Would you see a benefit in being able to write native SQL statements inside Java code? Edit: I'm really asking if you think SQL in OO languages is a throwback, and if not, then what could be done to make it better.

    Read the article

  • SQL Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution for managing schema on a SQL Server Compact 3.5 database. I know of several ways of managing schema on SQL Server Express/Standard/Enterprise, but Compact Edition doesn't support the tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but he never worked with Compact Edition... It looks like I'll be on my own for this. What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.
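
    A sketch of the kind of upgrade runner such a tool might wrap (all names here are hypothetical; the only assumptions are a SchemaVersion table inside the .sdf and a set of numbered upgrade scripts shipped with each release):

      // Requires a reference to System.Data.SqlServerCe.
      using (var conn = new SqlCeConnection("Data Source=|DataDirectory|\\app.sdf"))
      {
          conn.Open();

          int currentVersion;
          using (var cmd = new SqlCeCommand("SELECT COALESCE(MAX(Version), 0) FROM SchemaVersion", conn))
              currentVersion = Convert.ToInt32(cmd.ExecuteScalar());

          foreach (var script in upgradeScripts.Where(s => s.Version > currentVersion)
                                               .OrderBy(s => s.Version))
          {
              // SQL Server Compact doesn't run batched scripts, so execute one statement at a time.
              foreach (string statement in script.Statements)
                  using (var cmd = new SqlCeCommand(statement, conn))
                      cmd.ExecuteNonQuery();

              using (var cmd = new SqlCeCommand("INSERT INTO SchemaVersion (Version) VALUES (@v)", conn))
              {
                  cmd.Parameters.AddWithValue("@v", script.Version);
                  cmd.ExecuteNonQuery();
              }
          }
      }

    The installer then only has to drop the new script files next to the binaries; the application upgrades its own .sdf the first time it starts.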

    Read the article

  • Why is using OPENQUERY on a local server bad?

    - by Ziplin
    I'm writing a script that is supposed to run around a bunch of servers and select a bunch of data out of them, including the local server. The SQL needed to SELECT the data I need is pretty complicated, so I'm writing sort of an ad-hoc view, and using an OPENQUERY statement to get the data, so ultimately I end up looping over a statement like this: exec('INSERT INTO tabl SELECT * FROM OPENQUERY(@Server, @AdHocView)') However, I've heard that using OPENQUERY on the local server is frowned upon. Could someone elaborate as to why?

    Read the article

  • Why use SQL database?

    - by martinthenext
    I'm not quite sure stackoverflow is the place for such a general question, but let's give it a try. Faced with the need to store application data somewhere, I've always used MySQL or SQLite, just because it's always done like that. Since it seems like the whole world is using these databases (most software products, frameworks, etc.), it is rather hard for a beginning developer like me to ask the question: why? OK, say we have some object-oriented logic in our application, and objects are related to each other somehow. We need to map this logic to the storage logic, so we need relations between database objects too. This leads us to using a relational database, and I'm OK with that; to put it simply, our database rows sometimes will need to have references to other tables' rows. But why use the SQL language for interacting with such a database? An SQL query is a text message. I can understand this is cool for actually understanding what it does, but isn't it silly to use text table and column names for a part of the application that no one ever sees after deployment? If you had to write a data storage from scratch, you would never have used this kind of solution. Personally, I would have used some 'compiled db query' bytecode that would be assembled once inside a client application and passed to the database. And it surely would name tables and columns by ID numbers, not ASCII strings. In the case of changes in table structure, those byte queries could be recompiled according to the new DB schema, stored in XML or something like that. What are the problems with my idea? Is there any reason for me not to write it myself, and to use an SQL database instead?

    Read the article

  • SQL exception when transferring project from USB to C:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually, I work at school, directly on my USB drive. But when I copy the folder to my hard drive at home, a SQL exception is unhandled whenever I try to write to the database. It is unhandled at the conn.Open(); line. Here's the unhandled exception:

      Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name. Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string says |DataDirectory|, so it should work on any drive... Here's my connection string:

      string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                       "Initial Catalog=PatientMonitoringDatabase; " +
                       "Integrated Security=True";

    Someone told me to connect to localhost with SQL Server Management Studio Express and remove/detach the existing PatientMonitoringDatabase database, because whether it's a persistent database or only active within a running application, you can't have two databases with the same name attached to a SQL Server instance at the same time. So I did that, and now it gives me:

      Directory lookup for the file "C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf" failed with the operating system error 5 (Access is denied.). Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'

    I checked the files' properties, and I have allow for everyone. Does anyone know what's going on here?
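
    A sketch of one thing to try (an assumption, not a guaranteed fix): run the .mdf under a SQL Server Express user instance and drop the fixed Initial Catalog name, so the file is attached under the current Windows account and cannot collide with a copy that was attached earlier under a different path.

      // Hypothetical connection string; User Instance requires SQL Server Express.
      string connStr = "Data Source=.\\SQLEXPRESS;" +
                       "AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf;" +
                       "Integrated Security=True;" +
                       "User Instance=True";

    The "Access is denied" error also suggests checking that the Windows account running the application (or the SQL Server service account, when User Instance is not used) has NTFS read/write permission on both the .mdf and the .ldf files.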

    Read the article

  • Validating method arguments with Data Annotation attributes

    - by schemer
    The "Silverlight Business Application" template bundled with VS2010 / Silverlight 4 uses DataAnnotations on method arguments in its domain service class, which are invoked automagically:

      public CreateUserStatus CreateUser(RegistrationData user,
          [Required(ErrorMessageResourceName = "ValidationErrorRequiredField", ErrorMessageResourceType = typeof(ValidationErrorResources))]
          [RegularExpression("^.*[^a-zA-Z0-9].*$", ErrorMessageResourceName = "ValidationErrorBadPasswordStrength", ErrorMessageResourceType = typeof(ValidationErrorResources))]
          [StringLength(50, MinimumLength = 7, ErrorMessageResourceName = "ValidationErrorBadPasswordLength", ErrorMessageResourceType = typeof(ValidationErrorResources))]
          string password)
      {
          /* do something */
      }

    If I need to implement this in my POCO class methods, how do I get the framework to invoke the validations, OR how do I invoke the validation on all the arguments imperatively (using Validator or otherwise)?
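
    For the imperative route, a sketch using System.ComponentModel.DataAnnotations.Validator (assumes .NET 4; the attribute list simply mirrors the parameter-level attributes, since Validator itself only discovers attributes on types and properties, not on method parameters):

      // Validate a whole object against the attributes on its properties.
      Validator.ValidateObject(user, new ValidationContext(user, null, null), true);

      // Validate a single argument against an explicit list of attributes.
      var passwordRules = new ValidationAttribute[]
      {
          new RequiredAttribute(),
          new RegularExpressionAttribute("^.*[^a-zA-Z0-9].*$"),
          new StringLengthAttribute(50) { MinimumLength = 7 }
      };
      Validator.ValidateValue(password,
          new ValidationContext(this, null, null) { MemberName = "password" },
          passwordRules);

    Both calls throw a ValidationException on the first failure; TryValidateObject / TryValidateValue return false and fill a collection of ValidationResult instead, which is usually friendlier for reporting every broken rule at once.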

    Read the article

  • Detecting a Lightweight Core Data Migration

    - by hadronzoo
    I'm using Core Data's automatic lightweight migration successfully. However, when a particular entity gets created during a migration, I'd like to populate it with some data. Of course I could check if the entity is empty every time the application starts, but this seems inefficient when Core Data has a migration framework. Is it possible to detect when a lightweight migration occurs (possibly using KVO or notifications), or does this require implementing standard migrations? I've tried using the NSPersistentStoreCoordinatorStoresDidChangeNotification, but it doesn't fire when migrations occur.

    Read the article

  • T-SQL Where statement - OR

    - by Zyphrax
    Sorry for the vague title, but I really didn't know what title to give this problem. I have the following result set:

      ID  Data  Culture
      --  ----  -------
      1   A     nl-NL
      2   B     nl-NL
      3   C     nl-NL
      4   A     en-GB
      5   B     en-GB
      6   A     en
      7   B     en
      8   C     en
      9   D     en
      10  A     nl

    I would like to mimic the ASP.NET way of resource selection. For the user Culture (nl-NL):

      SELECT Data FROM Tbl WHERE Culture = 'nl-NL'

    Or, when there are no results for the specific culture, try the parent Culture (nl):

      SELECT Data FROM Tbl WHERE Culture = 'nl'

    Or, when there are no results for the parent culture, try the default Culture (en):

      SELECT Data FROM Tbl WHERE Culture = 'en'

    How can I combine these SELECT statements into one T-SQL statement? I'm using LINQ, so a LINQ expression would be even better. An OR won't work, because I don't want a mix of cultures. An ORDER BY on its own won't help, because it returns multiple records per culture.
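
    A sketch of the fallback in LINQ (assuming an IQueryable<Tbl> named rows and the culture chain known on the client): probe each culture in order and keep the first one that returns anything, so results never mix.

      // Culture fallback chain: specific -> parent -> default.
      string[] fallback = { "nl-NL", "nl", "en" };

      List<string> data = new List<string>();
      foreach (string culture in fallback)
      {
          string c = culture;                                   // avoid capturing the loop variable
          data = rows.Where(r => r.Culture == c)
                     .Select(r => r.Data)
                     .ToList();
          if (data.Count > 0)
              break;                                            // first culture with results wins
      }

    This issues at most three small queries. A single-round-trip alternative is to rank the cultures (3 for the exact match, 2 for the parent, 1 for the default), find the best rank present, and return only rows with that rank.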

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to are usually on the smaller side, at most a couple hundred of megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know and what are some good resources to learn from? Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?) I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with? Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale. Any other suggestions on things to learn would be great!

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load. My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and the row is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL filestream as a solution to some of our problems, we're trying to narrow down where these issues are originating from. We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue. Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (Cross-posted from Stack Overflow.)
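
    One avenue worth a test (a sketch under the assumption that the deletes can be broken up; the table and column names here are hypothetical): delete in small batches from the client, so each statement commits quickly instead of removing every large row in one long-running transaction.

      using (var conn = new SqlConnection(connectionString))
      {
          conn.Open();
          int deleted;
          do
          {
              // DELETE TOP (n) keeps each transaction short and limits lock escalation.
              using (var cmd = new SqlCommand(
                  "DELETE TOP (100) FROM FileData WHERE FolderId = @folderId", conn))
              {
                  cmd.Parameters.AddWithValue("@folderId", folderId);
                  cmd.CommandTimeout = 300;
                  deleted = cmd.ExecuteNonQuery();
              }
          } while (deleted > 0);
      }

    If the timeouts persist even with tiny batches, that points back at the large-value (LOB) pages themselves, which would make the FILESTREAM investigation mentioned above more attractive.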

    Read the article

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like: select * from AuditLog where UserId = 992 to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it. In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.

    Read the article

  • SQL query recursion for a web-like structure

    - by MickeyD
    I have a table here, named "Foo". The data is set up something like this:

      ID  TableReference  DataId0  DataId1  DataId2
      --  --------------  -------  -------  -------
      1   Prize           3        4        5
      2   Prize           4        5        NULL
      3   Cash            1        NULL     NULL
      4   Prize           8        NULL     12
      5   Foo             2        3        NULL
      6   Cash            8        1        10
      7   Foo             5        1        2
      Etc.

    The data is horribly set up, I know, but I didn't set it up that way. :) I'm only dealing with the after-effects. I'm trying to come up with a way to essentially "flatten" the table; that is, to display all the data to the point where the table "Foo" does not reference itself. I'm trying to figure out a SQL query that will get me there. Usually when I deal with recursion, I have (or can establish) parent IDs and set it up that way, but for this table there are seemingly multiple child and parent IDs creating a web-like structure instead of a hierarchy. So I'm at a loss where to even begin to write a SQL query for something like this. Note: there is no infinite looping (where one Foo points to another Foo, which points back to the original Foo) from what I've found. Using T-SQL. Thanks for any assistance, if at all possible.

    Read the article

  • SQL - Updating records based on most recent date

    - by Remnant
    I am having difficulty updating records within a database based on the most recent date and am looking for some guidance. By the way, I am new to SQL. As background, I have a Windows Forms application with SQL Express and am using ADO.NET to interact with the database. The application is designed to enable the user to track employee attendance on various courses that must be attended on a periodic basis (e.g. every 6 months, every year, etc.). For example, they can pull back data to see the last time employees attended a given course and also update attendance dates if an employee has recently completed a course. I have three data tables:

      EmployeeDetailsTable - simple list of employee names, email addresses etc., each with a unique ID
      CourseDetailsTable - simple list of courses, each with a unique ID (e.g. 1, 2, 3, etc.)
      AttendanceRecordsTable - has the columns { EmployeeID, CourseID, AttendanceDate, Comments }

    For any given course, an employee will have an attendance history, i.e. if the course needs to be attended each year then they will have one record for as many years as they have been at the company. What I want to be able to do is update the 'Comments' field for a given employee and given course based on the most recent attendance date. What is the 'correct' SQL syntax for this? I have tried many things (like below) but cannot get it to work:

      UPDATE AttendanceRecordsTable
      SET Comments = @Comments
      WHERE AttendanceRecordsTable.EmployeeID =
            (SELECT EmployeeDetailsTable.EmployeeID FROM EmployeeDetailsTable
             WHERE (EmployeeDetailsTable.LastName = @ParameterLastName
                    AND EmployeeDetailsTable.FirstName = @ParameterFirstName)
             AND AttendanceRecordsTable.CourseID =
                 (SELECT CourseDetailsTable.CourseID FROM CourseDetailsTable
                  WHERE CourseDetailsTable.CourseName = @CourseName))
      GROUP BY MAX(AttendanceRecordsTable.LastDate)

    After much googling, I discovered that MAX is an aggregate function and so I need to use GROUP BY. I have also tried using the HAVING keyword, but without success. Can anybody point me in the right direction? What is the 'conventional' syntax to update a database record based on the most recent date?
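
    A sketch of the update the question seems to be after, wrapped in the ADO.NET call the application already uses (table and column names are taken from the question; AttendanceDate is assumed to be the date column, since the attempt above refers to a LastDate column that is not in the stated schema):

      const string sql = @"
          UPDATE AttendanceRecordsTable
          SET Comments = @Comments
          WHERE EmployeeID = (SELECT EmployeeID FROM EmployeeDetailsTable
                              WHERE LastName = @LastName AND FirstName = @FirstName)
            AND CourseID   = (SELECT CourseID FROM CourseDetailsTable
                              WHERE CourseName = @CourseName)
            AND AttendanceDate = (SELECT MAX(a2.AttendanceDate)
                                  FROM AttendanceRecordsTable a2
                                  WHERE a2.EmployeeID = AttendanceRecordsTable.EmployeeID
                                    AND a2.CourseID = AttendanceRecordsTable.CourseID)";

      using (var conn = new SqlConnection(connectionString))
      using (var cmd = new SqlCommand(sql, conn))
      {
          cmd.Parameters.AddWithValue("@Comments", comments);
          cmd.Parameters.AddWithValue("@LastName", lastName);
          cmd.Parameters.AddWithValue("@FirstName", firstName);
          cmd.Parameters.AddWithValue("@CourseName", courseName);
          conn.Open();
          cmd.ExecuteNonQuery();
      }

    No GROUP BY is needed: the correlated subquery finds the latest AttendanceDate for that employee/course pair, and only the row carrying that date gets updated.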

    Read the article
