Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Login failed for user 'XXX' on the mirrored SQL Server

    - by hp17
    We have 4 web servers that host our ASP.NET (3.5) application. Randomly, we get error messages like:

    1) "Login failed for user 'userid'"
    2) "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)"

    We are running SQL Server 2005 and have a principal and a mirror database (synchronous). When these exceptions are thrown, I look at the SQL error logs on the mirror and see the failed login messages there. The principal database is running fine and the other web apps are working great. This goes on for maybe 10 minutes, then the app pool recycles and the application starts hitting the principal database again. Do I have a configuration set incorrectly? My theory is that our principal database is forwarding requests to the mirror, but that should never happen. Any help?
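
    For context, client-side failover with database mirroring is driven by the connection string rather than by the principal forwarding anything; a minimal sketch of the ADO.NET shape (server and database names here are placeholders, not taken from the question):

        Data Source=PrincipalServer;Failover Partner=MirrorServer;Initial Catalog=AppDb;Integrated Security=True;

    ADO.NET also caches the partner name it learns from the server on a per-pool basis, so if a connection string names the mirror first (or a cached partner goes stale), logins can keep landing on the mirror until the app pool recycles, which would match the symptom described.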

    Read the article

  • agent-based simulation: performance issue: Python vs NetLogo & Repast

    - by max
    I'm replicating a small piece of the Sugarscape agent simulation model in Python 3. I found the performance of my code is ~3 times slower than that of NetLogo. Is it likely a problem with my code, or can it be an inherent limitation of Python? Obviously, this is just a fragment of the code, but that's where Python spends two-thirds of the run-time. I hope if I wrote something really inefficient it might show up in this fragment:

        UP = (0, -1)
        RIGHT = (1, 0)
        DOWN = (0, 1)
        LEFT = (-1, 0)
        all_directions = [UP, DOWN, RIGHT, LEFT]

        # point is just a tuple (x, y)
        def look_around(self):
            max_sugar_point = self.point
            max_sugar = self.world.sugar_map[self.point].level
            min_range = 0
            random.shuffle(self.all_directions)
            for r in range(1, self.vision+1):
                for d in self.all_directions:
                    p = ((self.point[0] + r * d[0]) % self.world.surface.length,
                         (self.point[1] + r * d[1]) % self.world.surface.height)
                    if self.world.occupied(p):  # checks if p is in a lookup table (dict)
                        continue
                    if self.world.sugar_map[p].level > max_sugar:
                        max_sugar = self.world.sugar_map[p].level
                        max_sugar_point = p
            if max_sugar_point is not self.point:
                self.move(max_sugar_point)

    Roughly equivalent code in NetLogo (this fragment does a bit more than the Python function above):

        ; -- The SugarScape growth and motion procedures. --
        to M  ; Motion rule (page 25)
          locals [ps p v d]
          set ps (patches at-points neighborhood) with [count turtles-here = 0]
          if (count ps > 0) [
            set v psugar-of max-one-of ps [psugar]            ; v is max sugar w/in vision
            set ps ps with [psugar = v]                       ; ps is legal sites w/ v sugar
            set d distance min-one-of ps [distance myself]    ; d is min dist from me to ps agents
            set p random-one-of ps with [distance myself = d] ; p is one of the min dist patches
            if (psugar >= v and includeMyPatch?) [set p patch-here]
            setxy pxcor-of p pycor-of p     ; jump to p
            set sugar sugar + psugar-of p   ; consume its sugar
            ask p [setpsugar 0]             ; .. setting its sugar to 0
          ]
          set sugar sugar - metabolism      ; eat sugar (metabolism)
          set age age + 1
        end

    On my computer, the Python code takes 15.5 sec to run 1000 steps; on the same laptop, the NetLogo simulation running in Java inside the browser finishes 1000 steps in less than 6 sec.

    EDIT: Just checked Repast, using the Java implementation. It's about the same as NetLogo at 5.4 sec. Recent comparisons between Java and Python suggest no advantage to Java, so I guess it's just my code that's to blame?

    EDIT: I understand MASON is supposed to be even faster than Repast, and yet it still runs Java in the end.

    Read the article

  • How to write Sql or LinqToSql for this scenario?

    - by Mike108
    How to write Sql or LinqToSql for this scenario? A table has the following data:

        Id  UserName  Price  Date               Status
        1   Mike      2      2010-4-25 0:00:00  Success
        2   Mike      3      2010-4-25 0:00:00  Fail
        3   Mike      2      2010-4-25 0:00:00  Success
        4   Lily      5      2010-4-25 0:00:00  Success
        5   Mike      1      2010-4-25 0:00:00  Fail
        6   Lily      5      2010-4-25 0:00:00  Success
        7   Mike      2      2010-4-26 0:00:00  Success
        8   Lily      5      2010-4-26 0:00:00  Fail
        9   Lily      2      2010-4-26 0:00:00  Success
        10  Lily      1      2010-4-26 0:00:00  Fail

    I want to get the summary result from the data; the result should be:

        UserName  Date        TotalPrice  TotalRecord  SuccessRecord  FailRecord
        Mike      2010-04-25  8           4            2              2
        Lily      2010-04-25  10          2            2              0
        Mike      2010-04-26  2           1            1              0
        Lily      2010-04-26  8           3            1              2

    Where:
    - TotalPrice is SUM(Price) grouped by UserName and Date
    - TotalRecord is COUNT(*) grouped by UserName and Date
    - SuccessRecord is COUNT(*) grouped by UserName and Date where Status='Success'
    - FailRecord is COUNT(*) grouped by UserName and Date where Status='Fail'
    - TotalRecord = SuccessRecord + FailRecord

    The SQL Server 2005 database script is:

        /****** Object: Table [dbo].[Pay] Script Date: 04/28/2010 22:23:42 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Pay]') AND type in (N'U'))
        BEGIN
            CREATE TABLE [dbo].[Pay](
                [Id] [int] IDENTITY(1,1) NOT NULL,
                [UserName] [nvarchar](50) COLLATE Chinese_PRC_CI_AS NULL,
                [Price] [int] NULL,
                [Date] [datetime] NULL,
                [Status] [nvarchar](50) COLLATE Chinese_PRC_CI_AS NULL,
                CONSTRAINT [PK_Pay] PRIMARY KEY CLUSTERED ([Id] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
            )
        END
        GO
        SET IDENTITY_INSERT [dbo].[Pay] ON
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (1, N'Mike', 2, CAST(0x00009D6300000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (2, N'Mike', 3, CAST(0x00009D6300000000 AS DateTime), N'Fail')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (3, N'Mike', 2, CAST(0x00009D6300000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (4, N'Lily', 5, CAST(0x00009D6300000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (5, N'Mike', 1, CAST(0x00009D6300000000 AS DateTime), N'Fail')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (6, N'Lily', 5, CAST(0x00009D6300000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (7, N'Mike', 2, CAST(0x00009D6400000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (8, N'Lily', 5, CAST(0x00009D6400000000 AS DateTime), N'Fail')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (9, N'Lily', 2, CAST(0x00009D6400000000 AS DateTime), N'Success')
        INSERT [dbo].[Pay] ([Id], [UserName], [Price], [Date], [Status]) VALUES (10, N'Lily', 1, CAST(0x00009D6400000000 AS DateTime), N'Fail')
        SET IDENTITY_INSERT [dbo].[Pay] OFF
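
    For reference, a sketch of the classic conditional-aggregation answer in T-SQL (untested, but it maps one-to-one onto the requirements above; style 120 in CONVERT yields the yyyy-mm-dd form shown in the desired output):

        SELECT UserName,
               CONVERT(char(10), [Date], 120) AS [Date],
               SUM(Price) AS TotalPrice,
               COUNT(*) AS TotalRecord,
               SUM(CASE WHEN [Status] = 'Success' THEN 1 ELSE 0 END) AS SuccessRecord,
               SUM(CASE WHEN [Status] = 'Fail' THEN 1 ELSE 0 END) AS FailRecord
        FROM dbo.Pay
        GROUP BY UserName, CONVERT(char(10), [Date], 120)

    The same shape translates to LINQ to SQL as a group by new { p.UserName, p.Date } followed by Sum and Count calls over each group.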

    Read the article

  • Please advise on VPS config choice (with SQL Server Express)

    - by tjeuten
    Hi all, I might be interested in getting a VPS hosting plan for some small personal sites and .NET projects. I was thinking of the Softsys Bronze plan, as my current shared hosting plan is with them too. The stuff I want to host has grown beyond the capabilities of a shared hosting plan, and I also want more control over the IIS/ASP.NET configuration; that's why I'm considering VPS. The main config details would be: Hyper-V, 30 GB of disk space, 1 GB of RAM. More info here: http://www.softsyshosting.com/Windows-VPS-HyperV.aspx

    Does anyone have experience with this plan (or something similar from another host), and could you maybe answer these questions:

    1. Bronze has a total disk space of 30 GB. Is the OS part of this quota or not? If so, how much disk space does a base configuration with Windows 2008 take up?
    2. Would you advise Windows 2008 R2 or normal Windows 2008? Or would you advise Windows 2003 with this config?
    3. I'm planning on running a SQL Server Express install too. Would 1 GB of RAM be enough for both Windows 2008 (R2) and SQL Express? The database load will not be very high (a couple of thousand records returned each day). The DB will most likely stay far away from the 4 GB limit, which is why I'd go for SQL Express instead of paying extra licensing costs for a SQL Web install. But I'm more concerned about performance.
    4. Would you recommend Softsys as a VPS host? I've been with them for one year on my shared hosting plan and have no complaints so far.

    Also, as I have no VPS experience, what are the pitfalls I need to be aware of, mainly in terms of performance, but maybe in other areas too?

    Mathieu

    Read the article

  • Slow INFORMATION_SCHEMA query

    - by Thomas
    We have a .NET Windows application that runs the following query on login to get some information about the database:

        SELECT t.TABLE_NAME,
               ISNULL(pk_ccu.COLUMN_NAME,'') PK,
               ISNULL(fk_ccu.COLUMN_NAME,'') FK
        FROM INFORMATION_SCHEMA.TABLES t
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk_tc
               ON pk_tc.TABLE_NAME = t.TABLE_NAME
              AND pk_tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE pk_ccu
               ON pk_ccu.CONSTRAINT_NAME = pk_tc.CONSTRAINT_NAME
        LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS fk_tc
               ON fk_tc.TABLE_NAME = t.TABLE_NAME
              AND fk_tc.CONSTRAINT_TYPE = 'FOREIGN KEY'
        LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE fk_ccu
               ON fk_ccu.CONSTRAINT_NAME = fk_tc.CONSTRAINT_NAME

    Usually this runs in a couple of seconds, but on one server running SQL Server 2000 it is taking over four minutes. I ran it with the execution plan enabled, and the results are huge, but this part caught my eye (it won't let me post an image): http://img35.imageshack.us/i/plank.png/

    I then updated the statistics on all of the tables that were mentioned in the execution plan:

        update statistics sysobjects
        update statistics syscolumns
        update statistics systypes
        update statistics master..spt_values
        update statistics sysreferences

    But that didn't help. The index tuning wizard doesn't help either, because it doesn't let me select system tables. There is nothing else running on this server, so nothing else could be slowing it down. What else can I do to diagnose or fix the problem on that server?
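
    For comparison, a hedged sketch of the same table/constraint listing pulled directly from the SQL 2000 system tables, bypassing the INFORMATION_SCHEMA views (constraints appear in sysobjects with xtype 'PK' or 'F' and reference their table via parent_obj; column-level detail is omitted here for brevity):

        SELECT t.name  AS TABLE_NAME,
               c.name  AS CONSTRAINT_NAME,
               c.xtype AS CONSTRAINT_TYPE  -- 'PK' or 'F'
        FROM sysobjects t
        LEFT JOIN sysobjects c
               ON c.parent_obj = t.id
              AND c.xtype IN ('PK', 'F')
        WHERE t.xtype = 'U'

    If this flat version runs quickly on the slow server, the cost lies in the view expansion rather than the underlying data, which at least narrows the diagnosis.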

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening, and transaction log backups every 2 hours. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues:

    - On one server, the trans log backup occasionally blocks the full backup, and must be manually stopped before the full backup can complete.
    - We sometimes end up with a massively-sized trans log backup file (sometimes larger than the full backup!) that seems to be produced at the same time the full backup is running.

    I found a reference that indicates these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should I have the trans log backup job check to see if the full backup is running before it executes? (And how do I do that...?)
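
    On the last point, a hedged sketch of a guard clause for the log backup job on SQL 2005/2008 (database name and path are placeholders; on SQL 2000 the equivalent check reads the cmd column of master..sysprocesses instead):

        IF NOT EXISTS (SELECT 1 FROM sys.dm_exec_requests
                       WHERE command = 'BACKUP DATABASE')
        BEGIN
            BACKUP LOG [MyDb] TO DISK = N'X:\Backups\MyDb.trn';
        END

    Scheduling the jobs apart is usually the cleaner fix, but a check like this keeps an overlong full backup from colliding with the next log backup slot.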

    Read the article

  • SQL Server 2008 R2 mirroring failing

    - by andriusn
    I have two Windows 2008 R2 (Amazon EC2) instances running SQL Server 2008 R2. I use 9 TB striped disks (9x1TB EBS volumes) for storage. One server runs as principal and the second as mirror. Both were started from the same image, with database and tlog files located on the striped disk. The mirror server has failed 3 times in the last 2 months with these errors:

        EventID 823: The operating system returned error 2 (The system cannot find the file specified.) to SQL Server during a write at offset 0x00000048058a00 in file 'D:\TLogs***_log.ldf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    and

        EventID 1454: Database mirroring will be suspended. Server instance 'xxxxxxxxxx' encountered error 823, state 6, severity 24 when it was acting as a mirroring partner for database '***'. The database mirroring partners might try to recover automatically from the error and resume the mirroring session. For more information, view the error log for additional error messages.

    followed by

        EventID 19019: The MSSQLSERVER service terminated unexpectedly.

    After this, rebooting the instance is necessary to restore mirroring. The first two times I thought it was hardware related (striped disk failure) and relaunched the instance on new hardware, but the issue was back after a few weeks. It only affects mirror instances. Any help would be really appreciated. Thanks.

    Read the article

  • Can Sql Server 2005 Pivot table have nText passed into it?

    - by manemawanna
    Right, bit of a simple question: can I input ntext into a pivot table? (SQL Server 2005)

    What I have is a table which records the answers to a questionnaire, consisting of the following elements, for example:

        UserID  QuestionNumber  Answer
        Mic     1               Yes
        Mic     2               No
        Mic     3               Yes
        Ste     1               Yes
        Ste     2               No
        Ste     3               Yes
        Bob     1               Yes
        Bob     2               No
        Bob     3               Yes

    with the answers being held in ntext. Anyway, what I'd like a pivot table to do is:

        UserID  1    2   3
        Mic     Yes  No  Yes
        Ste     Yes  No  Yes
        Bob     Yes  No  Yes

    I have some test code that creates a pivot table, but at the moment it just shows the number of answers in each column (code below). So I just want to know: is it possible to add ntext to a pivot table? When I've tried, it brings up errors, and someone stated on another site that it wasn't possible, so I would like to check if this is the case or not. For further reference, I don't have the opportunity to change the database, as it's linked to other systems that I haven't created or have access to.

    Here's the SQL code I have at present:

        DECLARE @query NVARCHAR(4000)
        DECLARE @count INT
        DECLARE @concatcolumns NVARCHAR(4000)

        SET @count = 1
        SET @concatcolumns = ''
        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @concatcolumns = (@concatcolumns + ' + ')
            SET @concatcolumns = (@concatcolumns + 'CAST ([' + CAST(@count AS NVARCHAR) + '] AS NVARCHAR)')
            SET @count = (@count + 1)
        END

        DECLARE @columns NVARCHAR(4000)
        SET @count = 1
        SET @columns = ''
        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @columns = (@columns + ',')
            SET @columns = (@columns + '[' + CAST(@count AS NVARCHAR) + '] ')
            SET @count = (@count + 1)
        END

        SET @query = '
        SELECT UserID, ' + @concatcolumns + ' FROM (
            SELECT UserID, QuestionNumber AS qNum
            FROM QuestionnaireAnswers
            WHERE QuestionnaireID = 7
        ) AS t
        PIVOT
        (
            COUNT (qNum)
            FOR qNum IN (' + @columns + ')
        ) AS PivotTable'

        SELECT @query
        EXEC(@query)
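
    For what it's worth, a hedged sketch of the shape that returns the answer text rather than a count: cast the ntext out to nvarchar inside the source query and pivot with MAX() (shown with three hard-coded question columns; NVARCHAR(4000) assumes answers fit in 4000 characters, which Yes/No clearly do):

        SELECT UserID, [1], [2], [3]
        FROM (
            SELECT UserID, QuestionNumber AS qNum,
                   CAST(Answer AS NVARCHAR(4000)) AS AnswerText
            FROM QuestionnaireAnswers
            WHERE QuestionnaireID = 7
        ) AS src
        PIVOT (
            MAX(AnswerText) FOR qNum IN ([1], [2], [3])
        ) AS p

    ntext itself cannot be used in an aggregate, which is why the cast has to happen inside the source query before PIVOT ever sees the column.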

    Read the article

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

        LastName  FirstName  CourseName
        John      Day        Pricing
        John      Day        Marketing
        John      Day        Finance
        Lisa      Smith      Marketing
        Lisa      Smith      Finance
        etc...

    The data shows employees within a business and which courses they have shown a preference to attend. The number of courses per employee will vary (i.e. as above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as much as possible, and want to transform the data using SQL so that it looks like the following:

        LastName  FirstName  Course1    Course2    Course3
        John      Day        Pricing    Marketing  Finance
        Lisa      Smith      Marketing  Finance

    Any thoughts on how this may be achieved?

    Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over using the typical MVC syntax:

        <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data I would end up with lots of conditionals in my view, which I would like to avoid. I have tried transforming the data using C# in my ViewModel but have found it tough going and feel that I could lighten the workload by leveraging SQL before I return the data. Thanks.
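
    One common shape for this is ROW_NUMBER plus conditional aggregation; a sketch only (the table name EmployeeCourses is hypothetical, and the course order within a person is arbitrary unless a real ordering column exists):

        WITH numbered AS (
            SELECT LastName, FirstName, CourseName,
                   ROW_NUMBER() OVER (PARTITION BY LastName, FirstName
                                      ORDER BY CourseName) AS n
            FROM EmployeeCourses
        )
        SELECT LastName, FirstName,
               MAX(CASE WHEN n = 1 THEN CourseName END) AS Course1,
               MAX(CASE WHEN n = 2 THEN CourseName END) AS Course2,
               MAX(CASE WHEN n = 3 THEN CourseName END) AS Course3
        FROM numbered
        GROUP BY LastName, FirstName;

    Employees with fewer than three courses simply get NULLs in the trailing columns, which keeps the foreach in the view free of conditionals; more courses mean more MAX(CASE ...) lines, or dynamic SQL if the maximum isn't known up front.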

    Read the article

  • Why better isolation level means better performance in MS SQL Server

    - by Oleg Zhylin
    When measuring performance of my query I came up with a dependency between isolation level and elapsed time that was surprising to me:

        READUNCOMMITTED - 409024
        READCOMMITTED   - 368021
        REPEATABLEREAD  - 358019
        SERIALIZABLE    - 348019

    The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a stricter isolation level give better performance? This is a development machine and no concurrency whatsoever happens. I would expect READUNCOMMITTED to be the fastest due to less locking overhead.
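
    For anyone reproducing this, a hedged sketch of the kind of A/B harness implied by those numbers (the table name is a placeholder; clearing the plan cache between variants keeps the cached stats per-hint):

        DBCC FREEPROCCACHE;
        SELECT COUNT(*) FROM dbo.BigTable WITH (READUNCOMMITTED);
        SELECT COUNT(*) FROM dbo.BigTable WITH (SERIALIZABLE);

        SELECT st.text, qs.execution_count, qs.total_elapsed_time
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        WHERE st.text LIKE '%BigTable%';

    One thing worth ruling out first: total_elapsed_time is cumulative across executions, so unequal execution_count values between the hinted variants would skew a raw comparison of the totals.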

    Read the article

  • Is this slow WPF TextBlock performance expected?

    - by Ben Schoepke
    Hi, I am doing some benchmarking to determine if I can use WPF for a new product. However, early performance results are disappointing. I made a quick app that uses data binding to display a bunch of random text inside of a list box every 100 ms, and it was eating up ~15% CPU. So I made another quick app that skipped the data binding/data template scheme and does nothing but update 10 TextBlocks that are inside of a ListBox every 100 ms (the actual product wouldn't require 100 ms updates, more like 500 ms max, but this is a stress test). I'm still seeing ~10-15% CPU usage. Why is this so high? Is it because of all the garbage strings?

    Here's the XAML:

        <Grid>
            <ListBox x:Name="numericsListBox">
                <ListBox.Resources>
                    <Style TargetType="TextBlock">
                        <Setter Property="FontSize" Value="48"/>
                        <Setter Property="Width" Value="300"/>
                    </Style>
                </ListBox.Resources>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
                <TextBlock/>
            </ListBox>
        </Grid>

    Here's the code-behind:

        public partial class Window1 : Window
        {
            private int _count = 0;

            public Window1()
            {
                InitializeComponent();
            }

            private void OnLoad(object sender, RoutedEventArgs e)
            {
                var t = new DispatcherTimer(TimeSpan.FromSeconds(0.1),
                    DispatcherPriority.Normal, UpdateNumerics, Dispatcher);
                t.Start();
            }

            private void UpdateNumerics(object sender, EventArgs e)
            {
                ++_count;
                foreach (object textBlock in numericsListBox.Items)
                {
                    var t = textBlock as TextBlock;
                    if (t != null)
                        t.Text = _count.ToString();
                }
            }
        }

    Any ideas for a better way to quickly render text? My computer: XP SP3, 2.26 GHz Core 2 Duo, 4 GB RAM, Intel 4500 HD integrated graphics. And that is an order of magnitude beefier than the hardware I'd need to develop for in the real product.

    Read the article

  • Performance of tokenizing CSS in PHP

    - by Boldewyn
    This is a noob question from someone who hasn't written a parser/lexer ever before. I'm writing a tokenizer/parser for CSS in PHP (please don't repeat with 'OMG, why in PHP?'). The syntax is written down neatly by the W3C here (CSS2.1) and here (CSS3, draft). It's a list of 21 possible tokens that all (but two) cannot be represented as static strings. My current approach is to loop through an array containing the 21 patterns over and over again, do an if (preg_match()) and reduce the source string match by match. In principle this works really well. However, for a 1000-line CSS string this takes something between 2 and 8 seconds, which is too much for my project. Now I'm banging my head over how other parsers tokenize and parse CSS in fractions of a second. OK, C is always faster than PHP, but nonetheless, are there any obvious D'oh!s that I fell into? I made some optimizations, like checking for '@', '#' or '"' as the first char of the remaining string and applying only the relevant regexp then, but this hasn't brought any great performance boost.

    My code (snippet) so far:

        $TOKENS = array(
            'IDENT' => '...regexp...',
            'ATKEYWORD' => '@...regexp...',
            'String' => '"...regexp..."|\'...regexp...\'',
            //...
        );

        $string = '...CSS source string...';
        $stream = array();

        // we reduce $string token by token
        while ($string != '') {
            $string = ltrim($string, " \t\r\n\f"); // unconsumed whitespace at the
                // start is insignificant but doing a trim reduces exec time by 25%
            $matches = array();
            // loop through all possible tokens
            foreach ($TOKENS as $t => $p) {
                // The '&' is used as delimiter, because it isn't used anywhere in
                // the token regexps
                if (preg_match('&^'.$p.'&Su', $string, $matches)) {
                    $stream[] = array($t, $matches[0]);
                    $string = substr($string, strlen($matches[0]));
                    // Yay! We found one that matches!
                    continue 2;
                }
            }
            // if we come here, we have a syntax error and handle it somehow
        }

        // result: an array $stream consisting of arrays with
        // 0 => type of token
        // 1 => token content

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires:

    - ASP.NET MVC / C# / .NET
    - Database or NoSQL data store (must be accessible from C#)

    Here's the challenge to you server gods:

    1. What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules?
    2. In what ways does this solution differ from the "standard" Microsoft deployment scenario?
    3. Where does this solution's performance break down once the app begins to scale?

    I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production. Especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level.

    An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early stage applications.

    An example of b): running the Mono Framework on Linux.

    An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity. None of the Windows-based knowledge really transfers.

    An example of breaking down under scale: SQL Server Express will only scale to 1 GB of memory and 4 GB of disk storage. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • SqlCe odd results why? -- Same SQL, different results in different apps. Issue with

    - by NitroxDM
    When I run this SQL in my mobile app I get zero rows:

        select * from inventory WHERE [ITEMNUM] LIKE 'PUMP%' AND [LOCATION] = 'GARAGE'

    When I run the same SQL in Query Analyzer 3.5 using the same database, I get my expected one row. Why the difference? Here is the code I'm using in the mobile app:

        SqlCeCommand cmd = new SqlCeCommand(Query);
        cmd.Connection = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        cmd.Connection.Open();
        SqlCeDataReader tmpRdr = cmd.ExecuteReader();
        if (tmpRdr.Read())       // note: Read() consumes the first row, so Load() below
            tmpTable.Load(tmpRdr); // only receives the rows that remain after it
        tmpRdr.Close();
        cmd.Connection.Close();
        return tmpTable;

    UPDATE: For the sake of trying, I used the code found in one of the answers found here and it works as expected. So my code looks like this:

        SqlCeConnection conn = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        SqlCeDataAdapter AD = new SqlCeDataAdapter(Query, conn);
        AD.Fill(tmpTable);

    The issue appears to be with the SqlCeDataReader. Hope this helps someone else out!

    Read the article

  • Win7 Credential manager and accessing SQL Server from outside of the domain

    - by David Lively
    My SQL Server is set to use Windows authentication. If I am connected to the domain directly from my Win7 Ultimate x64 machine, SQL Server Management Studio (SSMS) will let me authenticate with Windows authentication. However, if I am connected via the VPN (from a different machine that is not joined to the domain), it won't. If I start SSMS with the following command line:

        C:\Windows\system32>runas /netonly /user:domainname\username "C:\Program Files (x86)\Microsoft SQL...\ssms.exe"

    then connecting to the SQL Server (which is in the domain) with Windows authentication works fine. I'd like to save these credentials so that I don't have to launch SSMS from the command line or modify the shortcut. I know I can use the SysInternals ShellRunAs extension to do this, but I again have to enter my domain username and password each time, and shift+right-click to see that menu option. The Windows Credential Manager seems designed to solve this problem, and works for network shares. However, it doesn't seem to work for SSMS. Any suggestions? I've tried using the /savecred option with runas to create the necessary credentials, but that appears to be incompatible with the /netonly option: running the above command line with the addition of /savecred just displays the runas help screen. Grrr. Argh.
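
    One avenue that might be worth an experiment (a sketch, not verified against SSMS specifically): Credential Manager entries can also be created from the command line with cmdkey, and a domain credential stored against the SQL Server's host name is what satisfies Windows authentication for file shares:

        cmdkey /add:sqlservername /user:domainname\username /pass:yourpassword

    Whether SSMS will consult that stored credential for its database connection the way Explorer does for SMB is exactly the open question here, so treat it as a test rather than a known fix.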

    Read the article

  • Can't add a SQL Server user: The login already has an account under a different user name.

    - by Zian Choy
    Environment: SQL Server 2005 Express, Windows 7

    When I installed SQL Server, I followed the instructions at http://msdn.microsoft.com/en-us/library/aa905868.aspx to set my computer's admin account as the SQL Server admin. However, when I try to access a database on my computer through Visual Studio 2008, I get the following error message:

        Microsoft Visual Studio
        The database 'Parkinsons' does not exist or you do not have permission to see it.
        Would you like to attempt to create it?  [Yes] [No]

    Then, if I go to SQL Server and add a user to that database, I get the following error message:

        Create failed for User 'zian'. (Microsoft.SqlServer.Express.Smo)
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.Express.ConnectionInfo)
        The login already has an account under a different user name. (Microsoft SQL Server, Error: 15063)

    Why doesn't VS piggyback on the dbo account? If the dbo account is unusable, then why won't SQL Server let me make an account so that I can access my own data?
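
    For reference, error 15063 generally means the login is already mapped to a user in that database under some other name (often dbo, when the login owns the database). A hedged sketch for locating the existing mapping, run inside the target database:

        SELECT dp.name AS database_user, sp.name AS server_login
        FROM sys.database_principals dp
        JOIN sys.server_principals sp ON dp.sid = sp.sid;

    Once the existing user is visible it can be renamed rather than recreated, e.g. ALTER USER [olduser] WITH NAME = [zian], where the old user name is whatever the query above reveals.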

    Read the article

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone, I'm now trapped by a performance optimization lab from the book "Computer Systems: A Programmer's Perspective", described as follows: in an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

        Transpose: interchange elements M(i,j) and M(j,i)
        Exchange rows: row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        -------                  -------
        |1|2|3|                  |3|6|9|
        -------                  -------
        |4|5|6|  after rotate is |2|5|8|
        -------                  -------
        |7|8|9|                  |1|4|7|
        -------                  -------

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner loop unrolling. The results:

        Code Version    Speed Up
        original        1x
        unrolled by 2   1.33x
        unrolled by 4   1.33x
        unrolled by 8   1.55x
        unrolled by 16  1.67x
        unrolled by 32  1.61x

    I also got a code snippet from pastebin.com that seems able to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while(--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while(--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while(--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat each block of 32 rows as a data zone and perform the rotating operation zone by zone. The speed up of this version is 1.85x, overwhelming all the loop-unrolled versions.

    Here are the questions:

    1. In the inner-loop-unroll version, why does the incremental speed-up shrink as the unrolling factor increases? In particular, going from 8 to 16 does not help as much as going from 4 to 8 did. Does the result have some relationship with the depth of the CPU pipeline? If yes, could the shrinking gain reflect the pipeline length?
    2. What is the probable reason for the speed-up of the data-zone version? It seems that there is no essential difference from the original naive version.

    EDIT: My test environment is an Intel Centrino Duo processor, and the version of gcc is 4.4. Any advice will be highly appreciated! Kind regards!

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005, and I am having trouble correcting it, so I will try some help from SF's experts. I have a Windows 2003 server (PROD) that ships transaction log backups to two other servers:

        STAND1: Windows 2003 server with SQL Server 2005
        STAND2: Windows 2008 R2 server with SQL Server 2005

    The problem is that log shipping to STAND2 breaks for ~90 minutes at certain times of the day and then recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged in the COPY job:

        *** Error: The specified network name is no longer available

    The copy agent was breaking dozens of times a day, only to the STAND2 server, and after the changes below it "only" breaks about twice a day:

    1. The frequency of the backup job was changed from 5 minutes to 10 minutes.
    2. Instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database.
    3. The backup job no longer runs 24 hours a day, but only for the 14 hours a day when people are working on the database.
    4. I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS.

    Now I don't know what to do. Any help will be much appreciated! Thanks!

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However, after the failover the new principal was in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.

    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, while trying to detach the database, it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason SQL Server put it back into single-user mode after a failover.
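
    On the storage question: the access setting is a per-database option recorded in the server's catalog and exposed through sys.databases, so it survives restarts and can be checked with a query like this (the database name is a placeholder):

        SELECT name, user_access_desc
        FROM sys.databases
        WHERE name = N'MyMirroredDb';

        -- the fix that was applied:
        ALTER DATABASE MyMirroredDb SET MULTI_USER;

    Since each partner keeps its own catalog entry for the database, a single-user flag left behind by the earlier detach attempt on that partner, surfacing only when the database next became principal there, would be consistent with what was observed; that last part is speculation, though.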

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this one on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault. We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the Profiler) rises slowly. Yet I cannot see anything going wrong on the server:

    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - There are no open transactions or blockings to see
    - We do not use cursors in our app

    We have already tried the following things without effect:

    - Dropped the caches by using the statements:

        DBCC FREEPROCCACHE
        DBCC FREESYSTEMCACHE('ALL')
        DBCC DROPCLEANBUFFERS

    - Restarted the app server we are using
    - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?
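
    A hedged diagnostic sketch for the next slowdown: comparing where the 40 GB actually sits can separate buffer-pool pressure from a leaking cache (the column names below are the SQL 2005 ones):

        SELECT TOP 10 [type],
               SUM(single_pages_kb + multi_pages_kb) AS total_kb
        FROM sys.dm_os_memory_clerks
        GROUP BY [type]
        ORDER BY total_kb DESC;

    If some clerk other than the buffer pool climbs steadily over the four weeks, that clerk is the suspect; if the distribution looks the same fast and slow, the degradation more likely lives in plan quality or fragmentation than in memory.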

    Read the article

  • Intermittent SQL Server ODBC Timeout expired

    - by Wili
    We have a bunch of VB6 applications that access two different database servers (both 32-bit Windows 2003, one SQL Server 2000, one SQL Server 2005). About every ten minutes or so, we get a few errors:

        [Microsoft][ODBC SQL Server Driver]Timeout expired
        [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied.
        [Microsoft][ODBC SQL Server Driver]ConnectionRead()

    This is happening on more than a dozen different computers at random times. We also have IP phones that all run through the same network, and those are not having any problems. We can also VNC into a user's computer and reproduce the error they were getting, but VNC still continues to work. Email also works. It just seems to be ODBC connections to SQL Server that fail, and the errors happen for both of our SQL Servers. We have scoured Google but haven't been able to come up with a solution. Is there anything we can try to diagnose the problem? Is there any fix out there?

    Read the article

  • Is it possible to use SqlGeography with Linq to Sql?

    - by cofiem
    I've been having quite a few problems trying to use Microsoft.SqlServer.Types.SqlGeography. I know full well that support for this in Linq to Sql is not great. I've tried numerous ways, beginning with what would be the expected way (database type of geography, CLR type of SqlGeography). This produces the NotSupportedException, which is widely discussed in blogs. I've then gone down the path of treating the geography column as varbinary(max), since geography is a UDT stored as binary. This seems to work fine (with some binary reading and writing extension methods). However, I'm now running into a rather obscure issue which does not seem to have happened to many other people:

        System.InvalidCastException: Unable to cast object of type 'Microsoft.SqlServer.Types.SqlGeography' to type 'System.Byte[]'.

    This error is thrown from an ObjectMaterializer when iterating through a query. It seems to only occur when the tables containing geography columns are included in a query implicitly (i.e. using the EntityRef<> properties to do joins):

        System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext()

    My question: if I'm retrieving the geography column as varbinary(max), I might expect the reverse error: can't cast byte[] to SqlGeography. That I would understand. This I don't. I do have some properties on the partial LINQ to SQL classes that hide the binary conversion... could those be the issue? Any help appreciated, and I know there's probably not enough information.

    Read the article

  • How do I search a NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question.

    I have a SQL table with 2 columns, ID (int) and Value (ntext). The Value rows have all sorts of XML strings in them:

        ID  Value
        --  ------------------
        1   <ROOT><Type current="TypeA"/></ROOT>
        2   <XML><Name current="MyName"/><XML>
        3   <TYPE><Colour current="Yellow"/><TYPE>
        4   <TYPE><Colour current="Yellow" Size="Large"/><TYPE>
        5   <TYPE><Colour current="Blue" Size="Large"/><TYPE>
        6   <XML><Name current="Yellow"/><XML>

    How do I:

    A. List the rows where <TYPE><Colour current="Yellow", bearing in mind that there is an entry <XML><Name current="Yellow"/><XML>
    B. Modify the rows that contain <TYPE><Colour current="Yellow" to be <TYPE><Colour current="Purple"

    Thanks for your help!
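
    A hedged sketch of the usual SQL 2005 workaround (the table name MyTable is a placeholder): ntext supports neither REPLACE nor most other string functions, so the value is cast through nvarchar(max) and written back:

        -- A: list the rows with a yellow Colour under <TYPE>
        SELECT ID, Value
        FROM MyTable
        WHERE CAST(Value AS nvarchar(max)) LIKE '%<TYPE><Colour current="Yellow"%';

        -- B: rewrite Yellow to Purple on those rows
        UPDATE MyTable
        SET Value = CAST(REPLACE(CAST(Value AS nvarchar(max)),
                                 '<Colour current="Yellow"',
                                 '<Colour current="Purple"') AS ntext)
        WHERE CAST(Value AS nvarchar(max)) LIKE '%<TYPE><Colour current="Yellow"%';

    Matching on the '<TYPE><Colour' prefix is what keeps row 6's <Name current="Yellow"/> entry out of the result.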

    Read the article

  • Improving performance on data pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be no fewer than 1000) in an Excel spreadsheet, and in this sheet our project has 150 columns. Now, our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet into our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages, and format the data as per the requirements, which means we need to parse and evaluate the data in each row at runtime. All the formatting and validation rules come from the back-end database, and we hold them in a data table (dtValidateAndFormatConditions) with around 50 conditions. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but I have now reduced it to 20-30 seconds. However, I gained that speed by writing an expression parser of my own, not by changing the algorithm. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = dRow["Condition"];
                string FormatValue = Value = dRow["Value"];
                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, iRowIndex);
                Condition = Parse(Condition);
                dRow["Condition"] = Condition;
                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please know that my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged Linq because, although not shown, I have been using Linq in some parts of my code.)

    Read the article

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24 core, 24 GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: a social-type application with a few types of nodes (profiles) carrying 3-30 text/array properties, and 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize        = 4294907 (32MB)
        ArrayStoreSize        = 1675520 (12MB)
        NodeStoreSize         = 1342170 (10MB)
        PropertyStoreSize     = 1739548 (13MB)
        RelationshipStoreSize = 6395202 (48MB)
        StringStoreSize       = 1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements; a few queries use variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1s-3s for the first run and ~70ms-1s for consecutive runs (depending on the query), and about 5-10 queries run for each impression. Another interesting behavior: when I run a query from the (neo4j) console on my local machine many consecutive times (just pressing ctrl+enter for a few seconds), it has an almost constant execution time, but when I do the same on the server it slows down exponentially, and I guess this is somehow related to my problem.

    Problem: My problem is that Neo4j is very CPU greedy (for a 24 core machine this may not be an issue, but it's obviously overkill for a small project). At first I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were high, and it didn't change anything at all.

    Question: Where should I dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)

    Read the article
