Search Results

Search found 49224 results on 1969 pages for 'database mirroring sql se'.


  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in Linq to Sql is not great. I've tried numerous ways, beginning with what would be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException that is widely discussed in blogs. I've then gone down the path of treating the geography column as varbinary(max), since geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue which does not seem to have happened to many other people:

        System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'.

    This error is thrown from an ObjectMaterializer when iterating through a query. It seems to only occur when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins):

        System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()

    My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error (can't cast byte[] to SqlGeography); that I would understand, but this I don't. I do have some properties on the partial LINQ to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.
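
    A minimal T-SQL sketch of the varbinary(max) workaround being described, assuming a hypothetical table Places with a geography column Geo (the table and column names are not from the question); SQL Server converts the UDT to and from its binary serialization with an explicit cast:

        -- Read the geography column as its binary serialization (what the varbinary(max) mapping sees)
        SELECT Id, CAST(Geo AS varbinary(max)) AS GeoBytes
        FROM Places;

        -- Convert the bytes back to geography on the server side
        DECLARE @b varbinary(max);
        SELECT TOP (1) @b = CAST(Geo AS varbinary(max)) FROM Places;

        DECLARE @g geography = CAST(@b AS geography);
        SELECT @g.ToString() AS WellKnownText;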

    Read the article

  • How do I search a NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question. I have a SQL table with two columns, ID (int) and Value (ntext). The Value rows hold all sorts of XML strings:

        ID  Value
        --  ----------------------------------------------------
        1   <ROOT><Type current="TypeA"/></ROOT>
        2   <XML><Name current="MyName"/></XML>
        3   <TYPE><Colour current="Yellow"/></TYPE>
        4   <TYPE><Colour current="Yellow" Size="Large"/></TYPE>
        5   <TYPE><Colour current="Blue" Size="Large"/></TYPE>
        6   <XML><Name current="Yellow"/></XML>

    How do I: A. List the rows where <TYPE><Colour current="Yellow" ...>, bearing in mind that there is also an entry <XML><Name current="Yellow"/></XML>; and B. Modify the rows that contain <TYPE><Colour current="Yellow" ...> to read <TYPE><Colour current="Purple" ...>? Thanks for your help.
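
    A hedged T-SQL sketch of one way to do both, assuming the table is called MyTable (the question doesn't name it): ntext can't be passed to REPLACE directly, so the value is cast to nvarchar(max) first, and the search pattern includes the element name so it does not match the <Name current="Yellow"/> row:

        -- A. List rows whose XML contains <Colour current="Yellow" ...>
        SELECT ID, Value
        FROM MyTable
        WHERE CAST(Value AS nvarchar(max)) LIKE '%<Colour current="Yellow"%';

        -- B. Rewrite Yellow to Purple in just those rows
        UPDATE MyTable
        SET Value = CAST(REPLACE(CAST(Value AS nvarchar(max)),
                                 'Colour current="Yellow"',
                                 'Colour current="Purple"') AS ntext)
        WHERE CAST(Value AS nvarchar(max)) LIKE '%<Colour current="Yellow"%';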

    Read the article

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems to be keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally it works fine. Now, however, it can't re-add the constraint because the database says that it already exists, even though the drop worked fine! Here are the statements I'm working with:

        ALTER TABLE individual DROP CONSTRAINT INDIVIDUAL_EMP_FK

        ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK
            FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE

    After the constraint is dropped, I've made sure that the object really is gone by using the following queries:

        SELECT OBJECT_ID('INDIVIDUAL_EMP_FK')
        SELECT * FROM sys.foreign_keys WHERE name LIKE 'individual%'

    Both return no results (or NULL), but when I try to add the constraint again, I get: The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK". Trying to drop it gets me a message that it doesn't exist. Any ideas?
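
    Worth noting: that particular message on ADD CONSTRAINT usually means existing rows violate the key being added, rather than that a constraint with the same name still exists. A hedged check, assuming EMPLOYEE's key column is also named EMPLOYEE_ID (the question doesn't say):

        -- Rows in INDIVIDUAL whose EMPLOYEE_ID has no matching EMPLOYEE row
        SELECT i.EMPLOYEE_ID
        FROM INDIVIDUAL AS i
        LEFT JOIN EMPLOYEE AS e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
        WHERE i.EMPLOYEE_ID IS NOT NULL
          AND e.EMPLOYEE_ID IS NULL;

        -- If those rows are expected, the constraint can be re-added without validating existing data
        -- (this leaves it marked as not trusted)
        ALTER TABLE INDIVIDUAL WITH NOCHECK
            ADD CONSTRAINT INDIVIDUAL_EMP_FK FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE;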

    Read the article

  • Storing n-grams in a database in fewer than n tables

    - by kurige
    If I were writing a piece of software that attempted to predict what word a user was going to type next, using the two previous words the user had typed, I would create two tables. Like so:

        == 1-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I"      | "like"     | 15
        "I"      | "hate"     | 20

        == 2-gram table ==
        Token    | NextWord   | Frequency
        ---------+------------+-----------
        "I like" | "apples"   | 8
        "I like" | "tomatoes" | 12
        "I hate" | "tomatoes" | 20
        "I hate" | "apples"   | 2

    Following this example implementation, the user types "I" and the software, using the above database, predicts that the next word the user is going to type is "hate". If the user does type "hate", the software then predicts that the next word will be "tomatoes". However, this implementation would require a table for each additional n-gram that I choose to take into account. If I decided I wanted to take the 5 or 6 preceding words into account when predicting the next word, I would need 5-6 tables, and an exponential increase in space per n-gram. What would be the best way to represent this in only one or two tables, with no upper limit on the number of n-grams I can support?
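
    A minimal sketch of one way to collapse the per-n tables into a single table (the table and column names here are illustrative, not from the question): store the whole context string, keep its word count as a column, and back off from the longest known context to the shortest when predicting:

        CREATE TABLE NGram (
            Context   nvarchar(300) NOT NULL,  -- e.g. 'I', 'I like', 'I really like'
            N         int           NOT NULL,  -- number of words in Context
            NextWord  nvarchar(100) NOT NULL,
            Frequency int           NOT NULL,
            PRIMARY KEY (Context, NextWord)
        );

        -- Predict: try the longest known context first, then shorter ones
        SELECT TOP (1) NextWord
        FROM NGram
        WHERE Context IN ('really do like', 'do like', 'like')
        ORDER BY N DESC, Frequency DESC;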

    Read the article

  • hosting site on one web server and accessing a database on another

    - by tombull89
    I'm fairly sure this is the right place for this question, but if not please move it to the right site. I have a number of sites on a 1and1 package (yeah, I know...) and I also have a subdomain that belongs to a college tutor. I have been playing around with PHP and MySQL databases on the subdomain site and would like to know if it is possible to run a database-driven site (e.g. a blog) on one of my 1and1 sites. I could upgrade my package, but if I'm only going to gain database functionality I'm not sure I want to do it. Also, as 1and1 don't use cPanel, I'm wondering how the databases would be managed... but I'll worry about that when (if) the time comes. Cheers!
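
    The database side of this usually comes down to whether the server holding the data accepts remote connections at all; many shared-hosting MySQL instances only allow local access. A hedged sketch of the grant that would be needed on the database server (the host, database, user and password below are all made up):

        -- On the server that hosts the MySQL database: let the web server's host connect remotely
        CREATE USER 'bloguser'@'web.example-host.com' IDENTIFIED BY 'a-strong-password';
        GRANT SELECT, INSERT, UPDATE, DELETE ON blogdb.* TO 'bloguser'@'web.example-host.com';
        FLUSH PRIVILEGES;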

    Read the article

  • Connect Android to a database

    - by danny
    I am doing a school project where we need to create an Android application which needs to connect to a database. The application needs to retrieve and store information for people's profiles in the database. Unfortunately we are a little bit stuck at this point, because there are numerous ways to link the application to the database, such as HTTP requests through Apache or SOAP/REST services, and it's really hard to find good instructions or tutorials on the subject. Maybe that's because I'm using the wrong search terms on Google. So if anyone can point me to relevant links to good online tutorials or how-tos, those are very welcome.

    Read the article

  • Writing datatable to database file, one record at a time in C#

    - by Kevin
    Hi! I want to write a C# program that will read a row from a datatable (named loadDT) and update a database file (named Forecasts.mdb). My datatable looks like this (each day's value is a number representing the kilowatt-usage forecast):

        Hour  Day1  Day2  Day3  Day4  Day5  Day6  Day7
        1     519   520   524   498   501   476   451

    My database file looks like this:

        Day  Hour  KWForecast
        1    1     519
        2    1     520
        3    1     524

    ... and so on. Basically, I want to be able to read one row from the datatable and then extrapolate that out to my database file, one record at a time. Each row from the datatable will result in seven records written to the database file. Any ideas on how to go about this? I can connect to my database, the connection string works, and I can update and delete from the database. I just can't wrap my head around how to do this one record at a time. Thanks in advance for any help and advice.
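
    The reshaping being described is essentially an unpivot of one wide row into seven narrow rows. The actual write happens from C# against the .mdb, but as a sketch of the shape of the operation, this is the equivalent in T-SQL, assuming a hypothetical staging table LoadDT with the same columns as the datatable:

        -- Turn one (Hour, Day1..Day7) row into seven (Day, Hour, KWForecast) rows
        INSERT INTO Forecasts ([Day], [Hour], KWForecast)
        SELECT CAST(REPLACE(DayCol, 'Day', '') AS int) AS [Day],
               [Hour],
               KW AS KWForecast
        FROM LoadDT
        UNPIVOT (KW FOR DayCol IN (Day1, Day2, Day3, Day4, Day5, Day6, Day7)) AS u;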

    Read the article

  • Postgres pg_dump dumps database in a different order every time

    - by behrk2
    Hello, I am writing a PHP script (which also uses Linux bash commands) which will run through test cases by doing the following (I am using a PostgreSQL database, 8.4.2):

    1.) Create a DB
    2.) Modify the DB
    3.) Store a database dump of the DB (pg_dump)
    4.) Do regression testing by repeating steps 1.) and 2.), then take another database dump and compare it (diff) with the original database dump from step 3.)

    However, I am finding that pg_dump will not always dump the database in the same way. It will dump things in a different order every time. Therefore, when I diff the two database dumps, the comparison reports the two files as different when they are actually the same, just in a different order. Is there a different way I can go about doing the pg_dump? Thanks!

    Read the article

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases inside an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1]
        TO DISK = N'J:\SQLBACKUPS\dbname1.bak'
        WITH NOFORMAT, INIT, NAME = N'dbname1-Full Database Backup',
             SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO

        DECLARE @backupSetId AS int
        SELECT @backupSetId = position FROM msdb..backupset
        WHERE database_name = N'dbname1'
          AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset
                               WHERE database_name = N'dbname1')
        IF @backupSetId IS NULL
        BEGIN
            RAISERROR(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        END
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak'
        WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like to add to the script the functionality of taking each database and substituting its name into the script above. Basically, a script that will create and verify a backup of each database on the engine. I am looking for something like this (where sp_backup is a call to the script above):

        For each database in database-list
            sp_backup(database)
        End For

    Any ideas?
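
    One way to sketch that loop in T-SQL itself is a cursor over sys.databases that builds each BACKUP statement dynamically; the backup path and the exclusion list below are assumptions to adapt:

        DECLARE @name sysname, @sql nvarchar(max);

        DECLARE db_cursor CURSOR FAST_FORWARD FOR
            SELECT name FROM sys.databases
            WHERE name NOT IN (N'master', N'model', N'msdb', N'tempdb')  -- skip system databases
              AND state_desc = N'ONLINE';

        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Build and run the same BACKUP statement as above for the current database
            SET @sql = N'BACKUP DATABASE ' + QUOTENAME(@name) +
                       N' TO DISK = N''J:\SQLBACKUPS\' + @name + N'.bak''' +
                       N' WITH NOFORMAT, INIT, NAME = N''' + @name + N'-Full Database Backup'',' +
                       N' SKIP, NOREWIND, NOUNLOAD, STATS = 10;';
            EXEC sp_executesql @sql;
            -- The RESTORE VERIFYONLY step can be built and executed the same way
            FETCH NEXT FROM db_cursor INTO @name;
        END

        CLOSE db_cursor;
        DEALLOCATE db_cursor;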

    Read the article

  • Managing multiple customer databases in ASP.NET MVC application

    - by Robert Harvey
    I am building an application that requires a separate SQL Server database for each customer. To achieve this, I need to be able to create a new customer folder, put a copy of a prototype database in the folder, change the name of the database, and attach it as a new "database instance" to SQL Server. The prototype database contains all of the required table, field and index definitions, but no data records. I will be using SMO to manage attaching, detaching and renaming the databases. In the process of creating the prototype database, I tried attaching a copy of the database (a companion .MDF/.LDF pair) to SQL Server using SQL Server Management Studio, and discovered that SSMS expects the database to reside in c:\program files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabaseName.MDF. Is this a "feature" of SQL Server? Is there a way to manage individual databases in separate directories, or am I going to have to put all of the customer databases in the same directory? (I was hoping for a little better control than this.) NOTE: I am currently using SQL Server Express, but for testing purposes only. The production database will be SQL Server 2008, Enterprise edition, so "User Instances" are not an option.
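
    A hedged sketch of the attach statement that the SMO attach operation corresponds to: CREATE DATABASE ... FOR ATTACH accepts explicit file paths, so the files can live in a per-customer folder (the folder and database names below are made up, and the SQL Server service account needs rights on that folder):

        -- Attach an MDF/LDF pair that lives in a per-customer directory rather than the default DATA folder
        CREATE DATABASE CustomerA_Db
        ON (FILENAME = N'D:\Customers\CustomerA\MyDatabaseName.MDF'),
           (FILENAME = N'D:\Customers\CustomerA\MyDatabaseName_log.LDF')
        FOR ATTACH;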

    Read the article

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it in my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
            `id` int(11) NOT NULL auto_increment,
            `name` varchar(80) collate utf8_unicode_ci NOT NULL,
            PRIMARY KEY (`id`),
            UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48 and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what is different in my configuration between before the reinstall and now, and why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
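
    For context: perror reports errno 121 as "Duplicate key on write or update", and with InnoDB a 1005/121 on CREATE TABLE is commonly associated with a constraint or table name that already exists in the InnoDB data dictionary. A hedged pair of diagnostics that can help narrow it down:

        -- List foreign-key constraint names across all schemas to spot a clash
        SELECT CONSTRAINT_SCHEMA, TABLE_NAME, CONSTRAINT_NAME
        FROM information_schema.TABLE_CONSTRAINTS
        WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
        ORDER BY CONSTRAINT_NAME;

        -- The LATEST FOREIGN KEY ERROR section often names the exact offender
        SHOW ENGINE INNODB STATUS;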

    Read the article

  • Why won't C# accept a (seemingly) perfectly good Sql Server CE Query?

    - by VoidKing
    By "perfectly good SQL query", I mean that if I execute the following query inside WebMatrix, it works to perfection:

        SELECT page AS location,
               (len(page) - len(replace(UPPER(page), UPPER('o'), ''))) / len('o') AS occurences,
               'pageSettings' AS tableName
        FROM PageSettings
        WHERE page LIKE '%o%'
        UNION
        SELECT pageTitle AS location,
               (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER('o'), ''))) / len('o') AS occurences,
               'ExternalSecondaryPages' AS tableName
        FROM ExternalSecondaryPages
        WHERE pageTitle LIKE '%o%'
        UNION
        SELECT eventTitle AS location,
               (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER('o'), ''))) / len('o') AS occurences,
               'MainStreetEvents' AS tableName
        FROM MainStreetEvents
        WHERE eventTitle LIKE '%o%'

    Here I am using 'o' as a static search string to search upon. No problem, but not exactly very dynamic. Now, when I write this query as a string in C#, as I think it should be (and even as I have done before), I get a server-side error indicating that the string was not in the correct format (a screenshot of the error is in the original post). And (although I am only testing the output, should I get it to quit erring), here is the actual C# (i.e. the .cshtml) page that queries the database:

        @{
            Layout = "~/Layouts/_secondaryMainLayout.cshtml";
            var db = Database.Open("Content");
            string searchText = Request.Unvalidated["searchText"];
            string selectQueryString = "SELECT page AS location, (len(page) - len(replace(UPPER(page), UPPER(@0), ''))) / len(@0) AS occurences, 'pageSettings' AS tableName FROM PageSettings WHERE page LIKE '%' + @0 + '%' ";
            selectQueryString += "UNION ";
            selectQueryString += "SELECT pageTitle AS location, (len(pageTitle) - len(replace(UPPER(pageTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'ExternalSecondaryPages' AS tableName FROM ExternalSecondaryPages WHERE pageTitle LIKE '%' + @0 + '%' ";
            selectQueryString += "UNION ";
            selectQueryString += "SELECT eventTitle AS location, (len(eventTitle) - len(replace(UPPER(eventTitle), UPPER(@0), ''))) / len(@0) AS occurences, 'MainStreetEvents' AS tableName FROM MainStreetEvents WHERE eventTitle LIKE '%' + @0 + '%'";

            @:beginning <br/>
            foreach (var row in db.Query(selectQueryString, searchText))
            {
                @:entry
                @:@row.location &nbsp;
                @:@row.occurences &nbsp;
                @:@row.tableName
                <br/>
            }
        }

    Since it is erring on the foreach (var row in db.Query(selectQueryString, searchText)) line, that heavily suggests something is wrong with my query. However, the syntax looks right to me here, and it executes to perfection if I query the database directly (mind you, un-parameterized). Logically, I would assume I have erred somewhere in the syntax involved in parameterizing this query, but my double and triple checking (as well as my past experience doing this) insists that everything looks fine. Have I messed up the syntax involved in parameterizing this query, or is something else at play here that I am overlooking? I can tell you for sure, as it has been previously tested, that the value I am getting from the query string is indeed what I would expect it to be, but as there really isn't much else on the .cshtml page yet, that is about all I can tell you.

    Read the article

  • How to structure a database with questions and answers?

    - by Andreas Johannessen
    Hi, I am going to make a simple application that uses a database, and I could use some guidance on how to structure it. It will be a quiz-style question program. What I have in mind is: one table with the questions, one table with the difficulty levels, and one table with the question categories. However, what do I do with the answers? Having them as separate columns in the question table sounds like bad practice (and where would I mark the correct answer?). Each question will have 5 answers, and only one of them is correct.
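
    A minimal T-SQL-flavoured sketch of the usual shape for this (all names here are illustrative): answers live in their own table, one row per answer, with a flag marking the correct one:

        CREATE TABLE Difficulty (Id int PRIMARY KEY, Name nvarchar(50) NOT NULL);
        CREATE TABLE Category   (Id int PRIMARY KEY, Name nvarchar(50) NOT NULL);

        CREATE TABLE Question (
            Id           int IDENTITY PRIMARY KEY,
            QuestionText nvarchar(500) NOT NULL,
            DifficultyId int NOT NULL REFERENCES Difficulty(Id),
            CategoryId   int NOT NULL REFERENCES Category(Id)
        );

        -- Five rows per question; exactly one of them has IsCorrect = 1
        CREATE TABLE Answer (
            Id         int IDENTITY PRIMARY KEY,
            QuestionId int NOT NULL REFERENCES Question(Id),
            AnswerText nvarchar(500) NOT NULL,
            IsCorrect  bit NOT NULL DEFAULT 0
        );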

    Read the article

  • Connect to Oracle database on a different server from PHP

    - by macha
    Hello, I have a database engine sitting on a remote server, while my web server is local. I have worked mostly with a client-server architecture where the same server runs both the web server and the database engine. Now I need to connect to an Oracle database that is on a different server. Can anybody give me any suggestions? I believe odbc_connect might not work; do I use the OCI8 driver? How would I connect to my database server? Also, I will have a very high number of database calls going back and forth, so is it better to use persistent connections or should I still open individual connections per call?

    Read the article

  • Login failed for user 'XXX' on the mirrored SQL Server

    - by hp17
    Hello, we have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:

    1) "Login failed for user 'userid'"

    2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"

    We are running SQL 2005 and have a principal and a mirror db (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror and see the failed login messages there. The principal db is running fine and the other web apps are working great. This will happen for maybe 10 minutes, then the app pool recycles and it starts hitting the principal db again. Is there a configuration I have incorrect? My theory is that our principal db is forwarding the request to the mirror, but that should never happen. Any help??
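
    Two things commonly checked in this situation: the connection string names the mirror explicitly via the Failover Partner keyword rather than relying on any server-side forwarding, and SQL logins on the mirror carry the same SID as on the principal, since a mismatched SID leaves the database user unmapped and produces exactly this "Login failed" pattern after a failover. A hedged sketch of the SID check and fix (the login name and SID value below are placeholders):

        -- On the principal: capture the SID of the application's SQL login
        SELECT name, sid
        FROM sys.server_principals
        WHERE name = N'userid';

        -- On the mirror: create the login with the same password and the SID copied from above
        CREATE LOGIN [userid]
        WITH PASSWORD = N'same-password',
             SID = 0x3F2504E04F8911D39A0C0305E82C3301;  -- replace with the sid returned on the principal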

    Read the article

  • SQL 2008 Replication

    - by sevenlamp
    I have two databases replicated in SQL 2008. Over the last two days, the distribution database on the publisher server has changed to Suspect mode and the replication process is down. I read a lot on Google and tried to repair it, but without success. During my repair attempts, the distribution database has changed to Emergency mode. Can anyone kindly suggest any solutions?
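
    For reference, the sequence usually quoted for inspecting and repairing a suspect database looks like the sketch below; REPAIR_ALLOW_DATA_LOSS can discard data, so restoring from a backup, or disabling and reconfiguring distribution so the distribution database is rebuilt, is generally the safer route:

        ALTER DATABASE distribution SET EMERGENCY;
        DBCC CHECKDB (N'distribution');                          -- see what is actually corrupt

        ALTER DATABASE distribution SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DBCC CHECKDB (N'distribution', REPAIR_ALLOW_DATA_LOSS);  -- last resort: may lose data
        ALTER DATABASE distribution SET MULTI_USER;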

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi, we have a full-text catalog that is populated via a job every 15 minutes like so:

        ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION

    We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens, the job continues to run but informs us that the request to start a full-text index population was ignored because a population is currently active. Looking in the full-text log files, we see the following error each time these problems occur:

        2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e (The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required.

    The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.
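
    One small guard that is sometimes added to a job like this while the underlying hang is investigated: ask the catalog whether a population is already running before starting another one, so requests don't pile up (a sketch, using the catalog and table names from the post):

        -- PopulateStatus 0 = idle; only kick off the incremental population when nothing is already running
        IF FULLTEXTCATALOGPROPERTY(N'XXXFullTextCatalog', 'PopulateStatus') = 0
        BEGIN
            ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION;
        END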

    Read the article

  • Is adding a bit mask to all tables in a database useful?

    - by Tom
    A colleague is adding a bit mask to all our database tables. In theory this is so we can track certain properties of each row across the entire system, for example:

    - Is the row shipped with the system or added by the client once they've started using the system?
    - Has the row been deleted from the table (soft delete)?
    - Is the row a default value within a set of rows?

    Is this a good idea? Are there other uses where this approach would be beneficial? My preference is to give each property its own dedicated column, since these properties are obviously important and dedicated columns make what is happening clearer to fellow developers.
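
    A small illustrative contrast (table and column names are made up): dedicated flag columns keep the intent visible in every query and can be used by ordinary indexes, whereas a packed bitmask hides the meaning behind magic numbers and a bitwise AND on the column is not sargable:

        -- Dedicated columns: self-describing and index-friendly
        ALTER TABLE Orders ADD
            IsSystemSupplied bit NOT NULL DEFAULT 0,
            IsDeleted        bit NOT NULL DEFAULT 0,
            IsDefaultRow     bit NOT NULL DEFAULT 0;

        SELECT * FROM Orders WHERE IsDeleted = 0;

        -- Bitmask equivalent: the meaning of 2 exists only by convention
        SELECT * FROM Orders WHERE Flags & 2 = 0;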

    Read the article

  • Certificates in SQL Server 2008

    - by Brandi
    I need to implement SSL for transmissions between my application and SQL Server 2008. I am using Windows 7, SQL Server 2008, SQL Server Management Studio, and my application is written in C#. I was trying to follow the MSDN page on creating certificates and the section under 'Encrypt for a specific client', but I got hopelessly confused. I need some baby steps to get further down the road to implementing encryption successfully. First, I don't understand MMC. I see a lot of certificates in there... are these certificates that I should be using for my own encryption, or are they being used for things that already exist? Another thing: I assume all these certificates are files located on my local computer, so why is there a folder called 'Personal'? Second, to avoid the above issue, I did a little experiment with a self-signed certificate. As shown in the MSDN link above, I used SQL executed in SSMS to create a self-signed certificate. Then I used the following connection string to connect:

        Data Source=myServer;Initial Catalog=myDatabase;User ID=myUser;Password=myPassword;Encrypt=True;TrustServerCertificate=True

    It connected and worked. Then I deleted the certificate I'd just created and it still worked. Obviously it was never doing anything, but why not? How would I tell if it's actually "working"? I think I may be missing an intermediate step of (somehow?) getting the file off of SSMS and onto the client. I don't know what I'm doing in the least bit, so any help, advice, comments or references you can give me are much appreciated. Thank you in advance. :)
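
    On the "how would I tell if it's actually working" point: the server exposes the encryption state of every current connection, so a quick check from SSMS (or from a query issued over the application's own connection) is a sketch like this:

        -- encrypt_option is TRUE when the connection is using SSL
        SELECT session_id, encrypt_option, auth_scheme, client_net_address
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;   -- drop the WHERE clause to see every connection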

    Read the article

  • SQL - an error occurred during the pre-login handshake

    - by Rivka
    Until yesterday evening, I was able to connect to my server from my local machine. Now I get the following error:

        A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.) (.Net SqlClient Data Provider)

    Note that I can log on to the actual server with no problem. Yesterday I installed IIS on my machine and set up a site using my IP address; I don't know if that has anything to do with it. I did come across this article and followed the steps, but it didn't seem to help: http://www.escapekeys.com/blog/index.cfm/2011/1/26/Microsoft-SQL-Server-Error-64-A-connection-was-successfully-established-with-the-server. I also went through the following article, changed TCP/IP settings and restarted, but nothing: http://blog.sqlauthority.com/2009/05/21/sql-server-fix-error-provider-named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server-microsoft-sql-server-error/. I started trying suggestions from the comments too, but stopped when I realized I might be messing things up more. So, why is this happening, and how can I fix it?

    Read the article

  • Big table or multiple separate tables? (database design question)

    - by Khou
    This is a database design question. I want to build an invoice web application. An invoice can have many items, and each user can have an inventory list of product items that they can store and choose to add as invoice items. My questions are:

    1. Should I store the product inventory for all users of my application in one single table, or create a separate product inventory table for each user?
    2. Is this even possible? One table is easier, but what if this single table grows too big, will I have a problem? (The primary key is an INT.)
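
    A minimal sketch of the usual single shared table, scoped per user (all names below are invented); an index on the user column keeps per-user lookups cheap as the table grows, and an int identity key tops out around 2.1 billion rows, after which bigint is the usual escape hatch:

        CREATE TABLE ProductInventory (
            Id     int IDENTITY(1,1) PRIMARY KEY,     -- switch to bigint if 2^31 rows is a realistic worry
            UserId int            NOT NULL REFERENCES Users(Id),
            Name   nvarchar(200)  NOT NULL,
            Price  decimal(10, 2) NOT NULL
        );

        CREATE INDEX IX_ProductInventory_UserId ON ProductInventory (UserId);

        -- One user's inventory, ready to be offered as invoice items
        SELECT Id, Name, Price
        FROM ProductInventory
        WHERE UserId = 42;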

    Read the article

  • Get Rails to save a record to the database in a non-UTC time

    - by Shaun
    Is there a way to get Rails to save records to the database without it automagically converting the timestamp into UTC before saving? The problem is that I have a few models that pull data from a legacy database that saves everything in Mountain Time and occasionally I have to have my Rails app write to that database. The problem is that every time it does, it converts the time I give it from Mountain Time to UTC, which is 6-7 hours ahead (depending on DST)! Needless to say, this really messes with reporting on that database. If I could get around doing this, I would. Unfortunately, I can't do anything about the fact that this other database uses a different timezone, nor can I really get away from the need for this app to save to that database occasionally. If I could just get Rails to stop trying to help me, it'd be great.

    Read the article
