Search Results

Search found 113039 results on 4522 pages for 'database sql server'.


  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
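    One way to quantify how much of the backup's elapsed time is actually spent in that wait is the standard wait-stats DMV. A minimal diagnostic sketch (assumes VIEW SERVER STATE permission; snapshot before and after a backup run and compare the deltas):

        -- Cumulative waits since the last service restart / stats clear
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD', 'ASYNC_IO_COMPLETION')
        ORDER BY wait_time_ms DESC;

    High BACKUPBUFFER alongside BACKUPIO usually points at the destination (or the network path to it) not draining the backup buffers fast enough, which would be consistent with the 7MB/s write rate you're seeing.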

    Read the article

  • Implementing database redundancy with sharded tables

    - by ensnare
    We're looking to implement load balancing by horizontally sharding our tables across a cluster of servers. What are some options for providing live redundancy should a server fail? Would it be effective to do two INSERTs instead of one: one to the target shard, and another to a secondary shard that could be accessed should the primary shard not respond? Or is there a better way? Thanks.

    Read the article

  • Help to translate SQL query to Relational Algebra

    - by Mestika
    Hi everyone, I'm having some difficulty translating some queries into relational algebra. I have a good book on database design with a chapter on relational algebra, but I still seem to have trouble constructing the right expressions. The queries I have the most difficulty with are these:

        SELECT COUNT( cs.student_id ) AS counter
        FROM course c, course_student cs
        WHERE c.id = cs.course_id
          AND c.course_name = 'Introduction to Database Design'

        SELECT COUNT( cs.student_id )
        FROM Course c
        INNER JOIN course_student cs ON c.id = cs.course_id
        WHERE c.course_name = 'Introduction to Database Design'

    and

        SELECT COUNT( * )
        FROM student
        JOIN grade ON student.f_name = "Andreas"
          AND student.l_name = "Pedersen"
          AND student.id = grade.student_id

    I know the notation can be a bit hard to paste into an HTML forum, so feel free to use common names or the Greek letters. Thanks in advance, Mestika
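    For orientation, here is one way the first two (equivalent) queries could be written in extended relational algebra, using the common gamma operator for grouping/aggregation — a sketch only, and the exact notation should follow whatever the textbook uses:

        \gamma_{\,\mathrm{COUNT}(student\_id)}\Bigl(\sigma_{\,course\_name = 'Introduction\ to\ Database\ Design'}\bigl(course \bowtie_{\,id = course\_id} course\_student\bigr)\Bigr)

    The third query follows the same pattern: select the "Andreas Pedersen" rows from student, join to grade on id = student_id, then apply COUNT(*) with gamma.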

    Read the article

  • Migrating From SQL Server 7 To 2005, What should I get excited about?

    - by Jon P
    The company I work for has decided to join the 21st century and upgrade our main database cluster from SQL Server 7 to SQL Server 2005. As a web developer, what new whiz-bang features of SQL Server 2005 should I get excited about or get to know? Currently I'm mainly writing CRUD-style queries, pretty much exclusively using stored procedures, for a mixed ASP.NET and Classic ASP environment.
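    Two of the T-SQL additions since version 7 that tend to matter most for CRUD-heavy web work are ranking functions with common table expressions (server-side paging) and structured error handling, both introduced in SQL Server 2005. A minimal sketch against a hypothetical Orders table:

        -- Server-side paging with a CTE and ROW_NUMBER()
        WITH NumberedOrders AS (
            SELECT OrderID, CustomerID, OrderDate,
                   ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS RowNum
            FROM dbo.Orders
        )
        SELECT OrderID, CustomerID, OrderDate
        FROM NumberedOrders
        WHERE RowNum BETWEEN 21 AND 40;

        -- Structured error handling inside a stored procedure
        BEGIN TRY
            UPDATE dbo.Orders SET CustomerID = 42 WHERE OrderID = 1;
        END TRY
        BEGIN CATCH
            SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
        END CATCH;

    Other 2005 additions frequently mentioned in this context are the XML data type, TOP with a variable, and the OUTPUT clause on INSERT/UPDATE/DELETE.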

    Read the article

  • How to connect from ruby to MS Sql Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration:

        Linux 64-bit
        ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]
        important gems: dbd-odbc (0.2.4), dbi (0.4.1), ActiveRecord SQL Server adapter (as a plugin), ruby-odbc 0.9996 (installed without any options)
        unixODBC is installed
        FreeTDS is installed

        cat /etc/odbcinst.ini
        [FreeTDS]
        Description = TDS driver (Sybase/MS SQL)
        Driver = /usr/lib/libtdsodbc.so
        Setup = /usr/lib/odbc/libtdsS.so
        CPTimeout =
        CPReuse =
        FileUsage = 1

    DSN:

        DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;
    or
        DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;

    I receive the following error:

        >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns})
        DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
        from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
        from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find the appropriate ODBC driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so, which is empty in the default Debian freetds dev package, but I solved that by removing the broken package and installing from source. Will appreciate any thoughts! Thanks & Regards

    Note: I'm able to connect using the same steps on Mac 10.5.

    Read the article

  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a customized data import executable in .NET 3.5 which uses SqlBulkCopy to do faster inserts on large amounts of data. The app basically takes an input file, massages the data and bulk uploads it into a SQL Server 2000 database. It was written by a consultant who built it against a SQL 2008 database environment. Could that environment difference be causing this? SQL 2000 does have the bcp utility, which is what BulkCopy is based on. So, when we ran this, it triggered a deadlock error. Error details:

        Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    I've tried numerous ways to resolve it, like temporarily setting the connection string variable MultipleActiveResultSets=true, which wasn't ideal, and it still gave a deadlock error. I also made sure it wasn't a connection timeout problem. Here's the function. Any advice?

        /// <summary>
        /// Bulks the insert.
        /// </summary>
        public void BulkInsert(string destinationTableName, DataTable dataTable)
        {
            SqlBulkCopy bulkCopy;

            if (this.Transaction != null)
            {
                bulkCopy = new SqlBulkCopy
                (
                    this.Connection,
                    SqlBulkCopyOptions.TableLock,
                    this.Transaction
                );
            }
            else
            {
                bulkCopy = new SqlBulkCopy
                (
                    this.Connection.ConnectionString,
                    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction
                );
            }

            bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID");
            bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID");
            bulkCopy.ColumnMappings.Add("AltCode", "AltCode");
            bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription");
            bulkCopy.ColumnMappings.Add("Fee", "Fee");
            bulkCopy.ColumnMappings.Add("Discount", "Discount");
            bulkCopy.ColumnMappings.Add("Comment", "Comment");
            bulkCopy.ColumnMappings.Add("Description", "Description");
            bulkCopy.BatchSize = dataTable.Rows.Count;
            bulkCopy.DestinationTableName = destinationTableName;
            bulkCopy.WriteToServer(dataTable);
            bulkCopy = null;
        }
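    If it would help to see what the bulk load is actually deadlocking against on the SQL Server 2000 side, a minimal diagnostic sketch using the era-appropriate trace flags (1204 writes deadlock-participant detail, 3605 routes it to the error log; enable temporarily, reproduce the error, read the SQL Server error log, then switch them off):

        -- Enable deadlock reporting server-wide
        DBCC TRACEON (1204, 3605, -1);
        GO
        -- ... reproduce the SqlBulkCopy deadlock, then review the error log ...
        DBCC TRACEOFF (1204, 3605, -1);
        GO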

    Read the article

  • Dynamic SQL To Dynamic LINQ in VB.NET with MS SQL Server 2008

    - by user337501
    I dread asking this question, because from what I've read so far I understand I'm going to have to cram a lot of new things into my head. In spite of all the similar questions (and the wide variety of answers), I thought I'd ask, as nothing I've read is tailored closely enough to what I need. I need to represent the following query using LINQ:

        DECLARE @PurchasedInventoryItemID Int = 2
        DECLARE @PurchasedInventorySectionID Int = 0
        DECLARE @PurchasedInventoryItem_PurchasingCategoryID Int = 3
        DECLARE @PurchasedInventorySection_PurchasingCategoryID Int = 0
        DECLARE @IsActive Bit = 1
        DECLARE @PropertyID Int = 2
        DECLARE @PropertyValue nvarchar(1000) = 'Granny Smith'
        --Property1, Property2, Property3 ...

        SELECT O.PurchasedInventoryObjectID, O.PurchasedInventoryObjectName, O.PurchasedInventoryConjunctionID,
               O.Summary, O.Count, O.PropertyCount, O.IsActive
        FROM tblPurchasedInventoryObject AS O
        INNER JOIN tblPurchasedInventoryConjunction AS C ON C.PurchasedInventoryConjunctionID = O.PurchasedInventoryConjunctionID
        INNER JOIN tblPurchasedInventoryItem AS I ON I.PurchasedInventoryItemID = C.PurchasedInventoryItemID
        INNER JOIN tblPurchasedInventorySection AS S ON S.PurchasedInventorySectionID = C.PurchasedInventorySectionID
        INNER JOIN tblPurchasedInventoryPropertyMap AS M ON M.PurchasedInventoryObjectID = O.PurchasedInventoryObjectID
        INNER JOIN tblPropertyValue AS V ON V.PropertyValueID = M.PropertyValueID
        WHERE I.PurchasedInventoryItemID = @PurchasedInventoryItemID
          AND S.PurchasedInventorySectionID = @PurchasedInventorySectionID
          AND I.PurchasingCategoryID = @PurchasedInventoryItem_PurchasingCategoryID
          AND S.PurchasingCategoryID = @PurchasedInventorySection_PurchasingCategoryID
          AND O.IsActive = @IsActive
          AND V.PropertyID = @PropertyID
          AND V.Value = @PropertyValue

    Now, I know that a query in .NET doesn't look like this; this is my test in the SQL design studio, and naturally VB.NET variables will be used in place of the SQL local variables. My problem is this: all of the conditions after WHERE are optional, in that a query might use one, some, all, or none of them. V.PropertyID and V.Value can also appear any number of times. In VB.NET I can build this query easily enough by concatenating strings and using a loop to append the V.PropertyID/V.Value conditions. I could also write a stored procedure in MS SQL, which is easy enough. However, I want to accomplish this using LINQ. If anyone could direct me, I would be most appreciative.

    Read the article

  • Consolidate data from many different databases into one with minimum latency

    - by NTDLS
    I have 12 databases totaling roughly 1.0 TB, each on a different physical server running SQL 2005 Enterprise, all with exactly the same schema. I need to offload this data into a separate single database so that we can use it for other purposes (reporting, web services, etc.) with a maximum of 1 hour latency. It should also be noted that these servers are all in the same rack, connected by gigabit connections, and that the inserts to the databases are minimal (avg. 2,500 records/hour). The current method is very flaky: the data is currently being replicated (SQL Server transactional replication) from each of the 12 servers to a database on another server (yes, 12 different employee tables from 12 different servers into a single employee table on a different server). Every table has a primary key and the rows are unique across all tables (there is a FacilityID in each table). What are my options? There has to be a simple way to do this.
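    One option worth weighing against replication is to not copy the data at all and instead expose the 12 tables through a UNION ALL view over linked servers (a distributed partitioned view), which gives zero-latency reads at the cost of cross-server query traffic. A minimal sketch with hypothetical linked server and database names — whether the query performance is acceptable for reporting would need testing:

        -- On the consolidation server, with one linked server defined per facility
        CREATE VIEW dbo.Employee_All
        AS
        SELECT * FROM FACILITY01.FacilityDB.dbo.Employee
        UNION ALL
        SELECT * FROM FACILITY02.FacilityDB.dbo.Employee
        UNION ALL
        SELECT * FROM FACILITY03.FacilityDB.dbo.Employee;
        -- ... one branch per server, 12 in total
        GO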

    Read the article

  • Sql Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution for managing schema on a SQL Server Compact 3.5 database. I know of several ways of managing schema on SQL Express/Standard/Enterprise, but Compact Edition doesn't support the tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the clients. I was looking for a decent method of publishing this without having to just hand the client a script file and say "Run this in SSMSE" — most clients are not capable of handling such a beast. A buddy of mine shared a partial script for handling the SQL Server piece of my task, but he has never worked with Compact Edition, so it looks like I'll be on my own for this. What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.

    Read the article

  • SQL Azure and VS 2010B2 or SSMSE 2008

    - by Vulgrin
    Ok, I see that people have asked this question before, but I'm seeing some conflicting statements. Can I, or can I not, connect directly to my SQL Azure database from SSMSE 2008? I see posts from before November that the SSMS 2008 RC would be able to connect directly - so I don't understand why the newest SSMSE cannot connect. Is it just a problem with the Express version of SSMS? Where can I find the "non-Express" version, if there is one? I can connect to the database via the cancel and connect method - however, you don't get object explorer that way. I see that there are add-ons for VS.Net to allow you to explore the database but I wanted to do it with the base apps if possible. Thanks

    Read the article

  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases; IIS and SQL are on different machines. Now, in the production environment, which is an MS Win 2003 x64 web farm + geoclustered SQL 2008, it does NOT work. All I get is this exception:

        System.IndexOutOfRangeException: Index was outside the bounds of the array.
        at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager3.TryCreateKeyFromValues(Object[] values, MultiKey& k)
        at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache2.Find(Object[] keyValues)
        at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
        at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
        at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
        at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
        at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs

    My question is: what the hell is going on? They have precisely the same DBML, the databases have exactly the same structure (when I bring the production DB down to TEST and mount it, everything works just fine), and the binaries on the web server are the same. I seriously do not know what to do. Has anyone found that LINQ works in one environment and not in another? I am really lost here. I really hope you can help me :)

    Read the article

  • Embedded SQL in OO languages like Java

    - by Steve De Caux
    One of the things that annoys me when working with SQL in OO languages is having to define SQL statements in strings. When I used to work on IBM mainframes, the languages used an SQL preprocessor to parse SQL statements out of the native code, so the statements could be written as clear-text SQL without the obfuscation of strings. For instance, in COBOL there is an EXEC SQL .... END-EXEC syntax construct that allows pure SQL statements to be embedded in the COBOL code:

        <pure cobol code, including assignment of value to local variable HOSTVARIABLE>
        EXEC SQL
            SELECT COL_A, COL_B, COL_C
            INTO :COLA, :COLB, :COLC
            FROM TAB_A
            WHERE COL_D = :HOSTVARIABLE
        END-EXEC
        <more cobol code, variables COLA, COLB, COLC have been set>

    ...this makes the SQL statement really easy to read and check for errors. Between the EXEC SQL .... END-EXEC tokens there are no constraints on indentation, line breaking etc., so you can format the SQL statement according to taste. Note that this example is for a single-row select; when a multiple-row result set is expected, the coding is different (but still very easy to read).

    So, taking Java as an example: what made the "old COBOL" approach undesirable? Not only SQL, but system calls could be made much more readable with that approach. Let's call it the embedded foreign language preprocessor approach. Would an embedded foreign language preprocessor for SQL be useful to implement? Would you see a benefit in being able to write native SQL statements inside Java code?

    Edit: I'm really asking whether you think SQL in OO languages is a throwback, and if not, what could be done to make it better.

    Read the article

  • T-Sql SPROC - Parse C# Datatable to XML in Database (SQL Server 2005)

    - by Goober
    Scenario: I've got an application written in C# that needs to dump some data every minute to a database. Because I'm not the one who wrote the spec, I have been told to store the data as XML in the SQL Server database and NOT TO USE the "bulk upload" feature. Essentially I just want a single stored procedure that takes XML (which I would produce from my DataTable) and inserts it into the database, and to call it every minute.

    Current situation: I've heard about the use of "sp_xml_preparedocument", but I'm struggling to understand most of the examples I've seen (my XML knowledge is far narrower than my C# ability).

    Question: I would really appreciate someone either pointing me in the direction of a worthwhile tutorial or helping explain things.

    EDIT - Using SQL Server 2005.
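    For orientation, a minimal sketch of the kind of procedure being described, using a hypothetical table and XML layout: sp_xml_preparedocument parses the XML text into an in-memory document, OPENXML shreds the chosen elements into rows, and sp_xml_removedocument frees the handle afterwards.

        CREATE PROCEDURE dbo.InsertReadings
            @xml NVARCHAR(MAX)   -- e.g. '<Readings><Reading SensorID="1" Value="42.5" /></Readings>'
        AS
        BEGIN
            DECLARE @hDoc INT;
            EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml;

            INSERT INTO dbo.Readings (SensorID, Value)
            SELECT SensorID, Value
            FROM OPENXML(@hDoc, '/Readings/Reading', 1)   -- 1 = attribute-centric mapping
                 WITH (SensorID INT, Value DECIMAL(18, 4));

            EXEC sp_xml_removedocument @hDoc;
        END

    The C# side would then pass the serialized DataTable XML as the @xml parameter on each one-minute tick.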

    Read the article

  • mySQL and general database normalization question

    - by Sinan
    I have a question about normalization. Suppose I have an application dealing with songs. First I thought about doing it like this:

        Songs table:      id | song_title | album_id | publisher_id | artist_id
        Albums table:     id | album_title | etc...
        Publishers table: id | publisher_name | etc...
        Artists table:    id | artist_name | etc...

    Then, as I thought about normalization, it seemed I should get rid of album_id, publisher_id and artist_id in the songs table and put them in intermediate tables like this:

        Table song_album:     song_id, album_id
        Table song_publisher: song_id, publisher_id
        Table song_artist:    song_id, artist_id

    Now I can't decide which is the better way. I'm not an expert on database design, so if someone would point out the right direction, it would be awesome. Are there any performance issues between the two approaches? Thanks
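    For concreteness, a minimal sketch of the first design expressed as plain foreign keys, assuming each song belongs to exactly one album, publisher and artist; the intermediate (junction) tables only become necessary where a relationship is genuinely many-to-many, e.g. if one song can have several artists:

        CREATE TABLE songs (
            id           INT PRIMARY KEY,
            song_title   VARCHAR(255) NOT NULL,
            album_id     INT NOT NULL,
            publisher_id INT NOT NULL,
            artist_id    INT NOT NULL,
            FOREIGN KEY (album_id)     REFERENCES albums (id),
            FOREIGN KEY (publisher_id) REFERENCES publishers (id),
            FOREIGN KEY (artist_id)    REFERENCES artists (id)
        );

        -- Only needed if a song can have more than one artist (many-to-many)
        CREATE TABLE song_artist (
            song_id   INT NOT NULL,
            artist_id INT NOT NULL,
            PRIMARY KEY (song_id, artist_id),
            FOREIGN KEY (song_id)   REFERENCES songs (id),
            FOREIGN KEY (artist_id) REFERENCES artists (id)
        );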

    Read the article

  • Database design suggestions for a configurable product eshop

    - by solomongaby
    Hello, I am building an e-shop that will have configurable products. The configurable parts will need to have different prices and stock levels from the main product. What database design would be best in this case? I started with something like this:

        Features:         id, name
        Feature options:  id, id_feature, value
        Products:         id, name, price
        Product features: id, id_product, id_feature, value (the value from the feature options, saved here for ease of searching), configurable (yes, no)

    The problem is that now I am stuck on how to save the configurable product features. I was thinking of saving their values as JSON, but that would make saving a price modification for a certain option difficult. How would you go about this? Thank you.
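    One common way to keep per-option prices and stock queryable (rather than serializing them as JSON) is an explicit table that links a product's configurable feature options to their own price adjustment and stock level. A sketch with hypothetical table and column names:

        -- One row per selectable option of a configurable product,
        -- carrying its own price delta and stock count
        CREATE TABLE product_feature_options (
            id                INT PRIMARY KEY,
            id_product        INT NOT NULL,
            id_feature_option INT NOT NULL,
            price_modifier    DECIMAL(10, 2) NOT NULL DEFAULT 0,  -- added to the base product price
            stock             INT NOT NULL DEFAULT 0,
            FOREIGN KEY (id_product)        REFERENCES products (id),
            FOREIGN KEY (id_feature_option) REFERENCES feature_options (id)
        );

    A price change for one option is then a single UPDATE of price_modifier, and availability checks can join straight from the cart to this table.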

    Read the article

  • What open source database platform is most easily transferred from my personal machine into a window

    - by Tom
    I would like eventual interaction with MS Dynamics SL and/or MindTouch Core (running on WMware) for eventual intranet and/or internet display. I guess I am asking for front and back end recommendations for a database I am constructing, but since this is my first major project I would greatly appreciate any help and advice. I would also love an opportunity to learn a new language so the code base could be in any language. I do have a few more related questions for discussion; What is the viability of using Google hosting to provide the service to the public for free? Should I implement plone or another CMS if I have a large amount of output? Is there a structuring questionnaire or standards publication I could reference? Does UML diagramming provide additional options for portability? Thank you.

    Read the article

  • Restore partitioned database into multiple filegroups

    - by Renju
    Does anyone have a query to restore a partitioned database that has multiple filegroups? In the restore options in SSMS I need to manually edit the path for every filegroup in the "Restore As" column, which is a bit tedious as the database has more than 150 filegroups. For example:

        USE master
        GO
        -- First determine the number and names of the files in the backup.
        RESTORE FILELISTONLY
        FROM MyNwind_1
        -- Restore the files for MyNwind.
        RESTORE DATABASE MyNwind
        FROM MyNwind_1
        WITH NORECOVERY,
           MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
           MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'
        GO
        -- Apply the first transaction log backup.
        RESTORE LOG MyNwind
        FROM MyNwind_log1
        WITH NORECOVERY
        GO
        -- Apply the last transaction log backup.
        RESTORE LOG MyNwind
        FROM MyNwind_log2
        WITH RECOVERY
        GO

    Here I need to specify a MOVE clause for every file, which is a tedious task when there are more than 100 filegroups:

        MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
        MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'

    I need to move the files to a path I provide as a parameter. Please help.

    Regards
    Renju
    http://blog.renjucool.com
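    One way to avoid hand-writing 150 MOVE clauses is to generate them. A minimal sketch, assuming SQL Server 2005 or later and that the source database is still attached on an instance you can query — it builds the MOVE list from sys.master_files, keeps the original file names, redirects them to a new folder, and returns the statement as text so it can be reviewed before running (MyNwind and MyNwind_1 are the names from the example above):

        DECLARE @DbName NVARCHAR(128);
        DECLARE @NewPath NVARCHAR(260);
        DECLARE @MoveList NVARCHAR(MAX);

        SET @DbName = N'MyNwind';
        SET @NewPath = N'D:\MyData\';
        SET @MoveList = N'';

        SELECT @MoveList = @MoveList
            + N'    MOVE N''' + name + N''' TO N''' + @NewPath
            + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name)) - 1)
            + N''',' + CHAR(13) + CHAR(10)
        FROM sys.master_files
        WHERE database_id = DB_ID(@DbName);

        SELECT N'RESTORE DATABASE ' + @DbName + N' FROM MyNwind_1 WITH NORECOVERY,'
            + CHAR(13) + CHAR(10)
            + LEFT(@MoveList, LEN(@MoveList) - 3) AS RestoreCommand;  -- strip trailing comma and line break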

    Read the article

  • What is wrong with this database query?

    - by outsyncof
    I have the following tables in a database (I'll only list the important attributes):

        Person(ssn, countryofbirth)
        Parents(ssn, fatherbirthcountry)
        Employment(ssn, companyID)
        Company(companyID, name)

    My task is this: given fatherbirthcountry as input, output the names of companies where persons work whose countryofbirth matches the fatherbirthcountry input. Pretending that the fatherbirthcountry is Mexico, I do this:

        SELECT name
        FROM Company
        WHERE companyid = (SELECT companyid
                           FROM Employment
                           WHERE ssn = (SELECT ssn
                                        FROM Person
                                        WHERE countryofbirth = 'Mexico'));

    but it is giving me an error:

        Scalar subquery is only allowed to return a single row.

    Am I completely off track? Can anybody please help?
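    For comparison, the usual fix for that error is to let each subquery return multiple rows by using IN instead of =. A sketch against the tables as listed (an INNER JOIN formulation would work just as well):

        SELECT DISTINCT c.name
        FROM Company c
        WHERE c.companyID IN (SELECT e.companyID
                              FROM Employment e
                              WHERE e.ssn IN (SELECT p.ssn
                                              FROM Person p
                                              WHERE p.countryofbirth = 'Mexico'));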

    Read the article

  • VB6 ADODB Fails with SQL Compact: Multiple-Step operation generated errors

    - by Belliez
    Hi, I am converting an old application to use a SQL Compact database (it works OK with SQL Server 2005 and 2008), and the following code gives an error when attempting to execute a simple SELECT command:

        Private Const mSqlProvider As String = "Provider=Microsoft.SQLSERVER.CE.OLEDB.3.5;"
        Private Const mSqlHost As String = "Data Source=C:\database.sdf;"

        Private mCmd As ADODB.Command ' For executing SQL
        Private mDbConnection As ADODB.Connection

        Private Sub Command1_Click()
            Dim DbConnectionString As String

            DbConnectionString = mSqlProvider & _
                                 mSqlHost

            Set mDbConnection = New ADODB.Connection
            mDbConnection.CursorLocation = adUseClient
            Call mDbConnection.Open(DbConnectionString)

            If mDbConnection.State = adStateOpen Then
                Debug.Print (" Database is open")
                ' Initialise the command object
                Set mCmd = New ADODB.Command
                mCmd.ActiveConnection = mDbConnection
            End If

            mCmd.CommandText = "select * from myTable"
            mCmd.CommandType = adCmdText
            mCmd.Execute ' FAILS HERE!
        End Sub

    I have referenced the Microsoft ActiveX Data Objects 6.0 Library in the project. The error I get is:

        Run-time error -2147217887 (80040e21)
        Multiple-step operation generated errors. Check each status value.

    Just wondering if anyone has any suggestions? Thanks

    Read the article

  • Swap unique indexed column values in database.

    - by Ramesh Soni
    I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values in this column for two rows. How could this be done? Two hacks I know of are:

        1. Delete both rows and re-insert them.
        2. Update the rows with some other value, swap them, and then update back to the actual values.

    But I don't want to go with either of these, as they do not seem like the appropriate solution to the problem. Could anyone help me out?
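    In SQL Server, one alternative to those hacks is to swap both rows in a single UPDATE statement, since unique constraints there are checked at the end of the statement rather than row by row. A sketch against a hypothetical table (the behaviour on other engines, e.g. MySQL with a unique index, may differ, so test on your platform):

        -- Swap the unique 'code' values of the rows with id 1 and id 2
        UPDATE dbo.MyTable
        SET code = CASE id
                       WHEN 1 THEN (SELECT code FROM dbo.MyTable WHERE id = 2)
                       WHEN 2 THEN (SELECT code FROM dbo.MyTable WHERE id = 1)
                   END
        WHERE id IN (1, 2);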

    Read the article

  • Classic ASP Site Throwing Date Conversion Error After Moving Servers?

    - by leen3o
    I am moving an old store from a Win2003 IIS6 server to a Win2008 IIS7 server, and have moved everything across, including the database. The front end seems to work just fine, but when I log in it has to pull in data based on date ranges, and now out of nowhere I am getting this error:

        The conversion of a varchar data type to a datetime data type resulted in an out-of-range value

    Any ideas why this would happen on the new server and NOT on the old one? No code has changed and the DB is a backup of the one from the old server.
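    One thing worth ruling out first: this particular error after a server move is often a regional-settings difference rather than a code or data problem — if the new login or server has a different default language, varchar dates such as '25/03/2010' stop parsing. A small diagnostic sketch to compare the two servers, plus a hedged session-level workaround if they differ:

        -- Run on both servers under the login the ASP site uses
        SELECT @@LANGUAGE  AS current_language,
               @@DATEFIRST AS date_first;
        DBCC USEROPTIONS;   -- check the 'dateformat' row (mdy vs dmy)

        -- Possible workaround for the session, if the formats differ
        SET DATEFORMAT dmy;  -- or mdy, to match what the varchar dates in the old code assume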

    Read the article

  • MySQL or SQL Server

    - by user203708
    I'm creating an application that I want to run on either MySQL or SQL Server (not both at the same time). I've created two PHP classes, DatabaseMySQL and DatabaseSQLSVR, and I'd like my application to know which database class to use based on a constant set at install time:

        define('DB_TYPE', 'mysql'); // or "sqlsrv"

    I'm trying to think of the best way to handle this. My thought is to do an if/else wherever I instantiate the database:

        $db = (DB_TYPE == "mysql") ? new DatabaseMySQL : new DatabaseSQLSVR;

    I know there has to be a better way of doing this, though. Suppose I want to add a third database type later; I'll have to go and redo all my code. Yuk!! Any help would be much appreciated. Thanks

    Read the article
