Search Results

Search found 95301 results on 3813 pages for 'server client'.


  • Deployment of a .NET application making use of SQL Server 2008

    - by Victor John Saliba
    I have searched the internet thoroughly for this type of issue; there were responses, but I haven't found a concrete solution yet. I have an application which uses SQL Server 2008 R2 and connects to a database file which I have set up. The application executes successfully: it connects to the database and retrieves, inserts and updates data. However, when I come to create a deployment project, i.e. a setup project, I fail to transfer my database files to other computers and make database connections there. I have checked the SQL Server 2008 prerequisite in the publish settings of the application and have also included the database files. Can anyone suggest the best way to handle this type of setup? Thanks. (One possible approach is sketched below.)

    Read the article
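
    A minimal sketch of one way to handle this, assuming the setup project copies an .mdf/.ldf pair to the target machine and a script (run via sqlcmd or a setup custom action) then attaches the files to the local SQL Server instance; the database name and paths below are hypothetical:

      -- Attach the shipped database files on the target machine.
      -- MyAppDb and the file paths are placeholders for your own names.
      CREATE DATABASE MyAppDb
      ON (FILENAME = 'C:\Program Files\MyApp\Data\MyAppDb.mdf'),
         (FILENAME = 'C:\Program Files\MyApp\Data\MyAppDb_log.ldf')
      FOR ATTACH;

    The application's connection string on the target machine would then point at the attached database on the local instance rather than at the original development file.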

  • How to get SQL Profiler to monitor trigger execution

    - by firedfly
    I have a trace set up for SQL Server Profiler to monitor SQL that is executed on a database. I recently discovered that trigger execution is not included in the trace. After looking through the available events for a trace, I do not see any that look like they would include trigger execution. Does anyone know how to set up a trace to monitor the execution of triggers? (A possible alternative is sketched below.)

    Read the article
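
    Statements running inside triggers are generally reported in Profiler under the Stored Procedures event classes (SP:Starting/SP:StmtStarting and their Completed counterparts) rather than under a dedicated trigger event. A hedged alternative, assuming SQL Server 2008 or later, is to read the trigger execution statistics DMV directly; the database name below is hypothetical:

      -- sys.dm_exec_trigger_stats reports execution counts for trigger plans in the cache.
      SELECT
          OBJECT_NAME(ts.object_id, ts.database_id) AS trigger_name,
          ts.execution_count,
          ts.last_execution_time,
          ts.total_elapsed_time
      FROM sys.dm_exec_trigger_stats AS ts
      WHERE ts.database_id = DB_ID('YourDatabase')   -- hypothetical database name
      ORDER BY ts.last_execution_time DESC;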

  • SQL Server 2008, not enough disk space

    - by snorlaks
    Hello, I'm executing a SQL query on my database. I have SQL Server 2008 installed on my D: drive, which has 55 GB of free space. I also have a C: drive, which right now has something like 150 MB free. While executing that query on quite a big table (16 GB) I get an error: "An error occurred while executing batch. Error message is: Not enough disk space." I would like to know whether I can make SQL Server use the D: drive instead of C:, or whether there is some other problem with what I'm doing. Thanks for the help. (One common fix is sketched below.)

    Read the article
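
    A common cause of this error is tempdb (or the database's own files) sitting on the nearly full C: drive. A minimal sketch of moving tempdb to D:, assuming the default logical file names and a hypothetical D:\SQLData folder that already exists; the instance must be restarted for the change to take effect:

      -- Check where the database files currently live:
      SELECT name, physical_name FROM sys.master_files;

      -- Point tempdb's files at the larger drive (paths are hypothetical):
      ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\templog.ldf');
      -- Restart the SQL Server service; tempdb is recreated at the new location.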

  • Max tcp/ip connections on Windows Server 2008

    - by zendar
    I have a .NET service that listens on a single port over TCP. Clients connect and then transmit data for some time (from a few minutes to several hours). Is there any limit on the number of connections on Windows Server 2008? I have not hit one yet, as there are currently at most 50 users. The plan is to have thousands of users, so I'd like to know whether there will be problems in the future. Edit: As Cloud answered, it seems that there are some limits in some editions of Windows Server 2008. Is there any reference on those limits? I tried Google, but it returns articles about limits on half-open TCP connections.

    Read the article

  • hash password in SQL Server (asp.net)

    - by ile
    Is this how a hashed password stored in SQL Server should look? This is the function I use to hash the password (I found it in some tutorial):

      public string EncryptPassword(string password)
      {
          // we use codepage 1252 because that is what SQL Server uses
          byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password);
          byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes);
          return Encoding.GetEncoding(1252).GetString(hashBytes);
      }

    EDIT: I tried SHA-1 and now the strings look the way they are supposed to:

      public string EncryptPassword(string password)
      {
          return FormsAuthentication.HashPasswordForStoringInConfigFile(password, "sha1");
      }
      // example output: 39A43BDB7827112409EFED3473F804E9E01DB4A8

    The result from the image above looks like a broken string, but this SHA-1 output looks normal. Will this be secure enough? (A salted server-side alternative is sketched below.)

    Read the article
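
    A minimal T-SQL sketch of salted hashing done on the database side, assuming SQL Server 2008, whose HASHBYTES supports SHA2_256; the table, column and variable names are hypothetical, and storing the raw bytes as VARBINARY avoids the code-page round-tripping shown in the first snippet:

      -- Hypothetical schema: Users(UserName, PasswordSalt UNIQUEIDENTIFIER, PasswordHash VARBINARY(32))
      DECLARE @password NVARCHAR(128);
      SET @password = N'p@ssw0rd';                     -- hypothetical input
      DECLARE @salt UNIQUEIDENTIFIER;
      SET @salt = NEWID();                             -- per-user random salt

      DECLARE @hash VARBINARY(32);
      SET @hash = HASHBYTES('SHA2_256',
                   CAST(@salt AS VARBINARY(16)) + CAST(@password AS VARBINARY(256)));

      -- Store @salt and @hash; to verify a login, recompute with the stored salt and compare.
      SELECT @salt AS PasswordSalt, @hash AS PasswordHash;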

  • How to get the position of a record in a table (SQL Server)

    - by Peter Siegmann
    The following problem: I need to get the position of a record in the table. Let's say I have four records in the table:

      Name: john doe, ID: 1
      Name: jane doe, ID: 2
      Name: Frankie Boy, ID: 4
      Name: Johnny, ID: 9

    Now, "Frankie Boy" is in the third position in the table. But how do I get this information from SQL Server? I could count IDs, but they are not reliable: Frankie has the ID 4 but is in the third position, because the record with the ID 3 was deleted. Is there a way? I am aware of ROW_NUMBER(), but it would be costly, because I would basically need to select the whole set before I can rank the rows. I am using MS SQL Server 2008 R2. (A sketch using ROW_NUMBER() follows below.)

    Read the article
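
    A minimal sketch using ROW_NUMBER() over the ID order, assuming a hypothetical People(ID, Name) table; filtering on the CTE keeps only the row of interest, though SQL Server still has to number the rows in order to find the match:

      WITH Ordered AS
      (
          SELECT ID, Name,
                 ROW_NUMBER() OVER (ORDER BY ID) AS Position
          FROM dbo.People              -- hypothetical table
      )
      SELECT Position
      FROM Ordered
      WHERE Name = 'Frankie Boy';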

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a WHERE clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

      WITH DataCTE as
      (
          SELECT [columns]
          FROM table
          INNER JOIN table2 ON [...]
          INNER JOIN table3 ON [...]
          LEFT JOIN table3 ON [...]
      )
      SELECT [aggregates_columns of each subset]
      FROM DataCTE Main
      LEFT JOIN DataCTE BananasSubset
             ON [...] AND Product = 'Bananas' AND Quality = 100
      LEFT JOIN DataCTE DamagedBananasSubset
             ON [...] AND Product = 'Bananas' AND Quality < 20
      LEFT JOIN DataCTE MangosSubset
             ON [...]
      GROUP BY [

    I have the feeling that SQL Server gets confused and evaluates the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess I am not an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same SELECT query as in the CTE and made it output to a temp table instead. The version referencing the CTE takes 40 seconds; the version referencing the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? (A sketch of the materialise-once pattern follows below.) Thanks, Kharlos

    Read the article
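
    A minimal sketch of the materialise-once pattern described above, with hypothetical table and column names; inside a multi-statement table-valued UDF a table variable would play the role of the temp table, since temp tables are not allowed in functions. Each self join then reads the already materialised rows instead of re-running the CTE's joins:

      DECLARE @From DATETIME, @To DATETIME;
      SET @From = '2012-01-01';                       -- hypothetical date range
      SET @To   = '2012-02-01';

      SELECT s.SaleDate, s.Product, s.Quality, s.Amount
      INTO #Data                                      -- materialised once
      FROM dbo.Sales AS s                             -- hypothetical source table
      WHERE s.SaleDate >= @From AND s.SaleDate < @To;

      SELECT Main.SaleDate,
             SUM(Bananas.Amount) AS BananaTotal,
             SUM(Damaged.Amount) AS DamagedBananaTotal
      FROM #Data AS Main
      LEFT JOIN #Data AS Bananas
             ON Bananas.SaleDate = Main.SaleDate
            AND Bananas.Product = 'Bananas' AND Bananas.Quality = 100
      LEFT JOIN #Data AS Damaged
             ON Damaged.SaleDate = Main.SaleDate
            AND Damaged.Product = 'Bananas' AND Damaged.Quality < 20
      GROUP BY Main.SaleDate;

      DROP TABLE #Data;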

  • SQL Server mirroring connection doesn't work

    - by StNickolas
    I have 2 servers, srv-erp1 and srv-erp3, which I have set up to mirror each other. The setup followed plenty of tutorials and examples. But when I call

      ALTER DATABASE MIRROR_TEST SET PARTNER = 'TCP://srv-erp3:5022'

    the response is: "The server network address "TCP://srv-erp3:5022" can not be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational." I go to cmd on srv-erp3 and run netstat -an: the port is listening. I go to cmd on srv-erp1 and run telnet srv-erp3 5022: it connects fine. All firewalls are turned off. The only configuration difference between the servers is that srv-erp1 is on Windows Server 2003 R2 x64 and srv-erp3 is on Windows 2008 R2 x64. What can be the reason for this problem? (Some common checks are sketched below.) Regards, Dmitry.

    Read the article
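
    Two checks that are commonly suggested for this error, sketched here with hypothetical endpoint and account names: the mirroring endpoint must be STARTED on both partners, and if the instances run under different service accounts (or accounts from different domains), the partner's account needs CONNECT permission on the endpoint:

      -- Run on each server: verify the mirroring endpoint exists, is STARTED and uses the expected port.
      SELECT e.name, e.state_desc, e.role_desc, t.port
      FROM sys.database_mirroring_endpoints AS e
      JOIN sys.tcp_endpoints AS t ON t.endpoint_id = e.endpoint_id;

      -- If needed, start the endpoint and grant the partner's service account access
      -- (endpoint and account names below are placeholders):
      ALTER ENDPOINT Mirroring STATE = STARTED;
      GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\SqlServiceAccount];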

  • Composite keys as Foreign Key?

    - by paulio
    I have the following table:

      TABLE: Accounts
        ID (int, PK, Identity)
        AccountType (int, PK)
        Username (varchar)
        Password (varchar)

    I have created a composite key out of the ID and AccountType columns so that people can have the same username/password but different AccountTypes. Does this mean that for each foreign table I try to link to, I'll have to create two columns? (A sketch follows below.) I'm using SQL Server 2008.

    Read the article
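
    Yes: a foreign key has to reference all the columns of the key it points to, so each referencing table needs both columns. A minimal sketch, with a hypothetical Orders table standing in for the referencing side:

      CREATE TABLE Accounts
      (
          ID          INT IDENTITY(1,1) NOT NULL,
          AccountType INT NOT NULL,
          Username    VARCHAR(50) NOT NULL,
          Password    VARCHAR(50) NOT NULL,
          CONSTRAINT PK_Accounts PRIMARY KEY (ID, AccountType)
      );

      CREATE TABLE Orders                              -- hypothetical referencing table
      (
          OrderID     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
          AccountID   INT NOT NULL,
          AccountType INT NOT NULL,
          CONSTRAINT FK_Orders_Accounts
              FOREIGN KEY (AccountID, AccountType)
              REFERENCES Accounts (ID, AccountType)
      );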

  • Help to convert PostgreSQL dates into SQL Server dates

    - by Earlz
    Hello, I'm doing some data conversion from PostgreSQL to Microsoft SQL Server. So far it has all gone well and I almost have the entire database dump script running. There is only one thing that is now messed up: dates. The dates are dumped in a string format. These are two example formats I've seen so far: '2008-01-14 12:00:00' and the more precise '2010-04-09 12:23:45.26525'. I would like a regex (or set of regexes) that I could run to replace these with SQL Server compatible dates. Does anyone know how I can do that? (A note on datetime2 follows below.)

    Read the article
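
    A hedged observation rather than a regex: SQL Server 2008's datetime2 type accepts up to seven fractional-second digits and parses the 'YYYY-MM-DD hh:mm:ss[.fffffff]' form directly, so if the target columns are datetime2 the dumped literals may load as-is (the classic datetime type is limited to three fractional digits). A quick check:

      -- Both example literals from the dump parse without modification:
      SELECT CAST('2008-01-14 12:00:00' AS datetime2)        AS whole_seconds,
             CAST('2010-04-09 12:23:45.26525' AS datetime2)  AS fractional_seconds;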

  • T-SQL: from rows to columns but not an actual pivot

    - by Matte
    Is there a T-SQL (SQL Server 2008 R2) query to transform TABLE_1 into the expected resultset? (A sketch using conditional aggregation follows below.)

      TABLE_1
      +----------+-------------------------+---------+------+
      | IdDevice | Timestamp               | M300    | M400 |
      +----------+-------------------------+---------+------+
      | 3        | 2012-12-05 16:29:51.000 | 2357,69 | 520  |
      | 6        | 2012-12-05 16:29:51.000 | 1694,81 | 470  |
      | 1        | 2012-12-05 16:29:51.000 | 2046,33 | 111  |
      +----------+-------------------------+---------+------+

      Expected resultset
      +-------------------------+---------+--------+---------+--------+---------+--------+
      | Timestamp               | 3_M300  | 3_M400 | 6_M300  | 6_M400 | 1_M300  | 1_M400 |
      +-------------------------+---------+--------+---------+--------+---------+--------+
      | 2012-12-05 16:29:51.000 | 2357,69 | 520    | 1694,81 | 470    | 2046,33 | 111    |
      +-------------------------+---------+--------+---------+--------+---------+--------+

    Read the article
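
    A minimal sketch using conditional aggregation (MAX over CASE), which works when the device IDs are known up front; for an unknown set of IDs a dynamic PIVOT would be needed instead:

      SELECT
          t.[Timestamp],
          MAX(CASE WHEN t.IdDevice = 3 THEN t.M300 END) AS [3_M300],
          MAX(CASE WHEN t.IdDevice = 3 THEN t.M400 END) AS [3_M400],
          MAX(CASE WHEN t.IdDevice = 6 THEN t.M300 END) AS [6_M300],
          MAX(CASE WHEN t.IdDevice = 6 THEN t.M400 END) AS [6_M400],
          MAX(CASE WHEN t.IdDevice = 1 THEN t.M300 END) AS [1_M300],
          MAX(CASE WHEN t.IdDevice = 1 THEN t.M400 END) AS [1_M400]
      FROM TABLE_1 AS t
      GROUP BY t.[Timestamp];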

  • Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing

    - by Paresh
    I am getting the following error from the application with SQL Server 2005: "Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0". How can I find the place where this error is raised? How can I find the unbalanced transaction, i.e. the stored procedure where a transaction is not committed or rolled back? (A defensive pattern is sketched below.)

    Read the article
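
    The message means a procedure exited with a different @@TRANCOUNT than it entered with; here a nested call appears to have rolled back (or failed to commit) a transaction the caller had opened. A hedged sketch of the usual TRY/CATCH pattern for SQL Server 2005 follows (the procedure name is hypothetical). Note that a full ROLLBACK inside a procedure called within an outer transaction is itself enough to trigger this exact error, which is why savepoints (SAVE TRANSACTION) are sometimes used instead:

      CREATE PROCEDURE dbo.usp_Example          -- hypothetical procedure
      AS
      BEGIN
          SET NOCOUNT ON;
          BEGIN TRY
              BEGIN TRANSACTION;
              -- ... the actual work goes here ...
              COMMIT TRANSACTION;
          END TRY
          BEGIN CATCH
              IF @@TRANCOUNT > 0
                  ROLLBACK TRANSACTION;
              -- Re-raise so the caller still sees the original error.
              DECLARE @msg NVARCHAR(2048);
              SELECT @msg = ERROR_MESSAGE();
              RAISERROR(@msg, 16, 1);
          END CATCH;
      END;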

  • After Delete Trigger Fires Only After Delete?

    - by Brandi
    I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation... I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it. By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete. So my question is 'how can this be the case?' Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is AFTER delete, shouldn't the records be gone? All the trigger looks like is this: ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger] GO The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.

    Read the article
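
    An AFTER trigger still runs inside the same transaction as the DELETE, so an unhandled error (or an explicit rollback) in the trigger undoes the delete itself. A small repro sketch with hypothetical names:

      CREATE TABLE dbo.Demo (ID INT PRIMARY KEY);
      INSERT INTO dbo.Demo VALUES (1);
      GO
      CREATE TRIGGER trg_Demo_AfterDelete ON dbo.Demo
      AFTER DELETE
      AS
      BEGIN
          RAISERROR('failing on purpose', 16, 1);
          ROLLBACK TRANSACTION;            -- aborts the batch and undoes the DELETE
      END;
      GO
      DELETE FROM dbo.Demo WHERE ID = 1;   -- reports an error; the batch is aborted
      GO
      SELECT * FROM dbo.Demo;              -- the row with ID = 1 is still there
      GO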

  • SQL Server and iPad app interaction

    - by Phanindar
    I have to write an app for the iPad that would take data from SQL Server and display it on the iPad. I looked this up on the Internet and found that I have to write a web service, using ASP.NET, to expose the data from SQL Server. I previously did an app on Android that would take data from my Dropbox account and display it to the user; I made use of the available Dropbox API. I was wondering whether anything like that exists for SQL Server? Also, I have to code in Obj-C for the iPad, so how will I write the ASP.NET code? I have more doubts. Thanks in advance.

    Read the article

  • SQL Server replication algorithm

    - by reggie
    Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID + rowid pairs that have changed)? I am building my own "replication" system and was planning on using dates to know what to replicate. Then I started wondering what would happen if the clock on the computer got off for some reason. The obvious choice is to keep a log of the changes as you go and, once you replicate those changes, remove them from the log. But that's a lot of extra work compared to just checking dates. I figure that if SQL Server replication works by just checking dates, then that should be good enough for me. Any wisdom here? (A change-tracking sketch follows below.) Thanks

    Read the article
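
    For what it's worth, SQL Server's built-in mechanisms do not rely on row datetimes: transactional replication reads changes from the transaction log, and merge replication maintains tracking tables keyed on a rowguid column. For a home-grown approach on SQL Server 2008, the built-in Change Tracking feature provides a log of changes without trusting the clock; a minimal sketch with hypothetical database and table names:

      ALTER DATABASE MyDb SET CHANGE_TRACKING = ON
          (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

      ALTER TABLE dbo.Customers ENABLE CHANGE_TRACKING;

      -- Rows changed since the version saved at the end of the previous sync:
      DECLARE @last_sync BIGINT;
      SET @last_sync = 0;                               -- would normally be persisted per subscriber
      SELECT ct.CustomerID, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
      FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync) AS ct;

      -- Save this as the new sync point:
      SELECT CHANGE_TRACKING_CURRENT_VERSION();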

  • Multiple ports listed in SQL Server connection string

    - by BBlake
    I have a legacy VB6 app where the server name, database name, username, etc. are defined in an INI file, but the port number for the connection string (the default 1433) is hard-coded in the app. It's being moved to a new SQL Server back end that runs on a different port number. I'm trying to avoid having to alter and recompile the application, which entails significant retesting, documentation, etc. I tried altering the INI file so that for the new server I put in: SERVERNAME\INSTANCE,NEWPORTNUMBER. This effectively builds the connection with Data Source = SERVERNAME\INSTANCE,NEWPORTNUMBER,1433; and it appears to work correctly, as the app connects to the database when I run it. It looks to me as though the ,1433 portion is being ignored. Is this a valid assumption, or will this cause me some problem I'm not seeing here?

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, we are designing a new version of our existing product on a new schema. It is an internal web application with possibly 100 concurrent users (max) and will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database, for performance reasons, across 2 separate databases. The database could grow anywhere from 50-100 GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data, etc.]

    Option 1: single main database [this is my preferred option]. The plan would be to have all the tables in a single database and possibly to use filegroups and partitioning to separate the data, if required, across multiple disks [using schemas if appropriate]. This should deal with the performance concerns. (A partitioning sketch follows below.) One of the comments on this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed.

    Option 2: split the database into 2 separate databases. DB1 would hold Customers, Accounts, Customer resources, etc. DB2 would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables, etc.]; these tables would typically contain a lot of data [and DB2 could reside on a separate server if required]. This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only, transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder to enforce referential integrity.]

    Points for consideration: as we are at the design stage we can at least make proper use of indexes to deal with performance issues, which is why Option 1 is attractive to me, and it is the more standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary, the question is: 1 DB or 2? Thanks in advance, Liam

    Read the article
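
    A minimal sketch of the filegroup/partitioning idea from Option 1, with hypothetical names, sizes and date boundaries; note that table partitioning requires Enterprise Edition in SQL Server 2008, and the unique key of a partitioned table generally has to include the partitioning column:

      ALTER DATABASE MainDb ADD FILEGROUP FG_Tracking2012;
      ALTER DATABASE MainDb ADD FILE
          (NAME = MainDb_Tracking2012, FILENAME = 'E:\SQLData\MainDb_Tracking2012.ndf', SIZE = 10GB)
          TO FILEGROUP FG_Tracking2012;

      CREATE PARTITION FUNCTION pf_TrackingByYear (DATETIME)
          AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

      CREATE PARTITION SCHEME ps_TrackingByYear
          AS PARTITION pf_TrackingByYear TO ([PRIMARY], FG_Tracking2012, FG_Tracking2012);

      CREATE TABLE dbo.VehicleTracking
      (
          TrackingID BIGINT IDENTITY(1,1) NOT NULL,
          RecordedAt DATETIME NOT NULL,
          -- ... other tracking columns ...
          CONSTRAINT PK_VehicleTracking PRIMARY KEY (TrackingID, RecordedAt)
      ) ON ps_TrackingByYear (RecordedAt);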
