Search Results

Search found 111506 results on 4461 pages for 'sql server driver for node js'.

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may make your query wait for several other queries to run before giving you your results. The purpose of this is to make sure you don’t get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default, databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today’s standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without waiting on pending transactions. To better facilitate reporting, you can create a view that includes these directives:

        CREATE VIEW CategoriesAndProducts AS
        SELECT c.CategoryID, c.CategoryName, p.ProductID, p.ProductName
        FROM dbo.Categories AS c WITH (NOLOCK)
        INNER JOIN dbo.Products AS p WITH (NOLOCK)
            ON c.CategoryID = p.CategoryID

    In some cases queries that were taking minutes end up taking seconds. Much easier than moving the data to a separate database, and it’s still pretty much real time, give or take a few milliseconds. You’ve been warned not to use this for bank balances though. More from Data Stream
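
    For readers who would rather not repeat the hint on every table, a minimal sketch of the session-level equivalent (it affects every table touched by subsequent statements in the session, not just this view):

        -- Same effect as WITH (NOLOCK) on every table for this session:
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        SELECT CategoryName, COUNT(*) AS ProductCount
        FROM dbo.CategoriesAndProducts
        GROUP BY CategoryName;

    Either way you are reading uncommitted data, so the same bank-balance caveat applies.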

  • SQL Server tempdb size seems large, is this normal?

    - by Abe Miessler
    From what I understand, the tempdb system database is used to hold temporary tables, intermediate results and other temporary information. On one of my database instances I have a tempdb that seems very large (30GB). This database has not been modified (as in "last modified date" on the mdf file) in over a week. Is it normal to have tempdb remain that large for that long a period? It seems to me that it should be updating fairly often and returning space that it is using fairly quickly... Am I way off here or is SQL Server doing something weird? FYI: This is a SharePoint 2010 database, not sure if that makes a difference.
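
    A quick way to tell whether that 30GB is space actually in use or just allocated file size, sketched with a standard DMV (available from SQL Server 2005 onward):

        -- Break down tempdb space by consumer; counts are 8KB pages.
        SELECT
            SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
            SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
            SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
            SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
        FROM tempdb.sys.dm_db_file_space_usage;

    Note that tempdb files are never shrunk automatically: once grown to 30GB they stay 30GB until a service restart (tempdb is recreated at every startup) or an explicit DBCC SHRINKFILE. Also, the NTFS "last modified" timestamp is not a reliable activity indicator, since SQL Server holds the files open.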

  • Transferring DHCP using Windows Server Migration Tools - Why is PowerShell crashing on the import of the .mig file?

    - by Mike
    I am migrating DHCP from a Windows Server 2003 R2 DC to a Windows Server 2008 R2 DC. I've followed this video and its predecessor (Installing Windows Server Migration Tools): http://technet.microsoft.com/en-us/video/migrating-dhcp-using-the-windows-server-2008-r2-migration-tools.aspx I went through everything smoothly until the last step. I exported a .mig file with my DHCP configuration on the old 2003 R2 server and transferred it over to my 2008 R2 server. When running the import command, it appears to work for a minute or two and then I get a generic Windows "PowerShell has stopped working" error and I have to close the program. Under the problem details I see the following:
        FileVersionOfSystemManagementAutomation: 6.1.7600.16385
        InnermostExceptionType: System.AccessViolationException
        OutermostExceptionType: System.AccessViolationException
        DeepestPowerShellFrame: unknown
        OS Version: 6.1.7600.2.0.0.272.7
        LocaleID: 1033
    Seems like there may be permissions issues? I am running PowerShell as an admin and am logged in to the server as a domain administrator. Any ideas? Thanks

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I'm frequently visiting here and SO, I've noticed a trend for writing unit tests whenever possible and, as my software gets more complex and sophisticated, I see automated testing as a good idea in aiding debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data wasn't under your control (e.g., that of a 3rd party system): how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky, particularly when it comes to finding data to test against...
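
    One concrete pattern for the first half of this question, sketched with invented table names: load a small set of hand-picked edge-case rows into a scratch schema, run the query under test against it, and diff the output against a hand-computed expected table using EXCEPT (which returns rows present in the first set but not the second):

        -- Arrange: known edge cases (schema and names are illustrative).
        INSERT INTO test.Orders (OrderId, CustomerId, Amount) VALUES
            (1, 10,  0.00),   -- zero amount
            (2, 10, -5.00),   -- refund
            (3, 20, 99.99);   -- ordinary row

        -- Act / assert: query under test vs. expected results.
        SELECT CustomerId, SUM(Amount) AS Total
        FROM test.Orders
        GROUP BY CustomerId
        EXCEPT
        SELECT CustomerId, ExpectedTotal FROM test.ExpectedTotals;
        -- Run the same EXCEPT with the operands swapped;
        -- zero rows in both directions is a pass.

    The same harness works for 3rd-party data: the point is that you fabricate only the dozen rows that exercise the edge cases, not reams of realistic dummy data.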

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:
    - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    - Read from the shard in parallel or sequentially, and cache resultsets
    - Update, Delete a record from the shard
    - Insert records quickly in the shard with a round-robin load
    - Reset the cache
    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/
    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.
    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally. So in essence, the tables need to have the same schema across the databases.

  • SQL Server 2012 and SQLMail - will it still work?

    - by Kharlos Dominguez
    We are considering upgrading our SQL Server, which is currently running 2005. We use SQLMail heavily in the organization, both to send e-mails and to import some into a database. I've read in various places that SQLMail was deprecated and superseded by "Database Mail". I'm confused because this MS page: http://msdn.microsoft.com/en-us/library/bb402904.aspx seems to imply that it would still work? I understand the dangers of SQLMail but we do not have the resources to rewrite the scripts right now and would prefer to do it later on. Does SQLMail still work in 2012, and if not, how easy is it to replace with Database Mail, both for reading and sending e-mails?
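
    For what it's worth: as far as I know, SQL Mail itself was removed in SQL Server 2012, so plan on Database Mail. The sending side is a small change; a sketch (the profile name is illustrative and has to be created first under Database Mail configuration):

        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'DefaultProfile',   -- hypothetical profile
            @recipients   = 'ops@example.com',
            @subject      = 'Nightly load finished',
            @body         = 'All steps completed.';

    Reading e-mail is the harder half: Database Mail is send-only, so scripts that import incoming mail into a database have no drop-in replacement and will need a different mechanism (e.g., an external process polling a mailbox).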

  • How to connect to database on remote server

    - by user137263
    Where there is a VPN to the remote server, and access to the database is via the local network interface, how can one establish a remote link between one's computer (with a programme such as Visual Studio 2010) and SQL Server (e.g. 2008 R2)? Any attempts to create a direct link to the SQL Server are blocked. Whilst the SQL Server can be configured to allow external access, this brings its own host of problems. Any help would be much appreciated.

  • How to use symbolic links in Windows Server 2008 R2 across the network (mklink)

    - by server info
    I have one server (Srv1) which holds data with file shares, and its storage is full. I now have a second server (Srv2) which has a lot more space. I would like to transfer all the data from Srv1 to Srv2 and leave links behind that point to the new destination. I found mklink very useful here, but unfortunately it does not work over the network, as the documentation points out. People rely heavily on the paths, so it would be helpful if someone has a pointer for me on how to handle symbolic links across the network with Windows servers. I am running Windows Server 2008. Thanks for any help

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that needs to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like:
    - Key rotation: we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext.
    - Metadata mapping of which tables and which columns are encrypted: this is simple when it's just a couple of columns (send an email to all devs / document it) but that quickly gets out of hand...
    So, what is the best practice for doing application-level encryption into a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you had specific schema additions I would love it if you could give details ("have a NVARCHAR(MAX) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep: I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
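
    To make the two bullet points concrete, a hypothetical schema sketch (every name here is invented for illustration, not a recommendation):

        -- Each ciphertext records the key version that produced it, so rotation
        -- can walk the table row by row without guessing which key to use.
        CREATE TABLE dbo.Customer (
            CustomerId  INT IDENTITY PRIMARY KEY,
            SsnCipher   VARBINARY(MAX) NOT NULL,  -- AES output
            SsnKeyVer   INT NOT NULL              -- key version used to encrypt
        );

        -- Single source of truth for what is encrypted where, replacing the
        -- "email all devs" approach to metadata mapping.
        CREATE TABLE dbo.EncryptionMap (
            TableName   SYSNAME NOT NULL,
            ColumnName  SYSNAME NOT NULL,
            KeyVersion  INT NOT NULL,
            Algorithm   VARCHAR(32) NOT NULL,
            PRIMARY KEY (TableName, ColumnName)
        );

    Rotation then becomes: add the new key version, re-encrypt rows WHERE SsnKeyVer < @new in batches (decrypt with the old key and encrypt with the new one in the application), and update EncryptionMap last.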

  • Installing SQL Server 2012 on Windows 2012 Server

    - by andyleonard
    In Want to Learn SQL Server 2012? I wrote about obtaining a fully-featured version of SQL Server 2012 (Developer Edition). This post represents one way to install SQL Server 2012 Developer Edition on a Hyper-V virtual machine running the Windows 2012 Server Standard Edition operating system. This is by no means exhaustive. My goal in writing this is to help you get a default instance of SQL Server 2012 up and running. I do not cover setting up the Hyper-V virtual machine. I begin after loading the...(read more)

  • Detach a filter driver from certain drivers?

    - by Protector one
    The driver for my laptop's keyboard has a kernel-mode filter driver from Synaptics (SynTP.sys) attached. Is it possible to detach the SynTP.sys filter driver from my keyboard's driver without detaching it from my touchpad's driver? This Microsoft Support page explains how to completely disable a filter driver, but my touchpad requires SynTP.sys as well. The reason I'm trying to do this is that the Synaptics driver disables my touchpad when I type. (Explained fully in this question: Use touchpad while "typing"?.) Since I don't have a solution to that problem, I figured removing the filter driver from the keyboard could prevent the Synaptics driver from detecting key strokes, thus stopping it from disabling the touchpad.

  • How do I restore a SQL Server database from last night's full backup and the active transaction log file?

    - by Dylan Beattie
    I have been told that it's good practice to keep your SQL Server data files and log files on physically separate disks, because it'll allow you to recover your data to the point of failure if the data drive fails. So... let's say that mydata.mdf is on drive D:, and mydata_log.ldf is on drive E:, and it's 16:45, and drive D: has just died horribly. So - I have last night's full backup (mydata.bak). I have hourly transaction-log backups that will bring the data back up to 16:00... but that means I'll lose 45 minutes worth of updates. I still have mydata_log.ldf on the E: drive, which should contain EVERY transaction that was committed right up to the point where the drive failed. How do I go about recreating the database and restoring data from the backup file and the live transaction log, so I don't lose any updates? Is this possible?
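
    It is possible, and the piece that makes it work is a tail-log backup taken against the surviving .ldf before restoring anything. A sketch of the standard sequence (paths are illustrative):

        -- 1. Back up the tail of the log; NO_TRUNCATE lets this run
        --    even though the data file on D: is gone.
        BACKUP LOG mydata TO DISK = 'E:\backup\mydata_tail.trn' WITH NO_TRUNCATE;

        -- 2. Restore last night's full backup without recovering.
        RESTORE DATABASE mydata FROM DISK = 'E:\backup\mydata.bak'
            WITH NORECOVERY, REPLACE;

        -- 3. Restore the hourly log backups in order, still WITH NORECOVERY.
        RESTORE LOG mydata FROM DISK = 'E:\backup\mydata_1600.trn' WITH NORECOVERY;

        -- 4. Finish with the tail-log backup and bring the database online.
        RESTORE LOG mydata FROM DISK = 'E:\backup\mydata_tail.trn' WITH RECOVERY;

    Done this way, you recover right up to the moment D: died and lose nothing.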

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which is serving either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content to memory. It's my understanding that varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data and planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:
    1. Throw varnish in front of nginx and let varnish cache static pages, no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
    2. Ignore using varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).
    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chances of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).

  • jquerymobile - include .js and .html

    - by Girija
    Hi, I have described my problem in the following lines; kindly clarify it for me. In my application I am using more than one HTML page to display content, and each page has its own .js file. When I call an HTML page, its .js file is also included. In each .js file I use $('div').live('pageshow', function(){}), and I call the next HTML file from the .js (using $.mobile.changePage("htmlpage")). My problem: consider two HTML files, where second.html is called from within one.js. When I show second.html, one.js runs again, so I get the alert "one.js" and then "second.js". Please help me; thanks in advance. Please point out my mistake. I have attached the code.

    one.html:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Page Title</title>
            <link rel="stylesheet" href="jquery.mobile-1.0a2.min.css" />
            <script src="jquery-1.4.3.min.js"></script>
            <script src="jquery.mobile-1.0a2.min.js"></script>
            <script src="Scripts/one.js"></script>
        </head>
        <body>
            <div data-role="page">
            </div>
        </body>
        </html>

    Second.html:

        <!DOCTYPE html>
        <html>
        <head>
            <title>Sample</title>
            <link rel="stylesheet" href="../jquery.mobile-1.0a2.min.css" />
            <script src="../jquery-1.4.3.min.js"></script>
            <script src="../jquery.mobile-1.0a2.min.js"></script>
            <script type="text/javascript" src="Scripts/second.js"></script>
        </head>
        <body>
            <div data-role="page">
                <div data-role="button" id="link">Second</div>
            </div><!-- /page -->
        </body>
        </html>

    one.js:

        $('div').live('pageshow', function() {
            alert("one.js");
            // AJAX call; on success, show second.html
            $.mobile.changePage("second.html");
        });

    second.js:

        $('div').live('pageshow', function() {
            alert('second.js');
            // AJAX call; on success, show third.html
            $.mobile.changePage("third.html");
        });

    Note: when I show fourth.html, all of one.js, second.js, third.js and fourth.js run again, but I need fourth.js alone. I tried $(document).ready(function(){}); but then the .js did not get called at all.

  • Import IIS log into SQL Server 2008 error

    - by Vivek Chandraprakash
    I'm trying to import IIS logs into SQL Server 2008. I get the error below:
        Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "cs(User-Agent)" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
    I tried changing the column width of the user agent to varchar(8000) and nvarchar(4000), with no luck. Please help. -Vivek
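
    One thing worth checking (a hedged suggestion; the table name below is illustrative): the error says the truncation happens during data conversion inside the data flow, so widening only the destination column may not be enough. In the wizard, the flat-file source column defaults to a narrow width; increase the source column's OutputColumnWidth on the source's advanced/column settings page as well, and make the destination at least as wide, e.g.:

        -- Destination wide enough for real-world User-Agent strings.
        ALTER TABLE dbo.IisLog
            ALTER COLUMN [cs(User-Agent)] NVARCHAR(MAX) NULL;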

  • Memory Leak Issue in Weblogic, SUN, Apache and Oracle classes

    - by Amit
    Hi All, please find below a description of the memory leak issues. Statistics show major growth in the perm area (static classes). Flows were run for 8 hours; a heap dump was taken after 2 hours and again at the end, and growth in the perm area was identified. Statistics from our last run show 240MB of growth in 6 hours, about 40MB of growth per hour; a 2GB heap can hold 3-4 days of this, so the heap will be full in 3-4 days. The heap dumps show growth in the areas listed below.

    JMS connection/session area

    Apache:
    org.apache.xml.dtm.DTM[] org.apache.xml.dtm.ref.ExpandedNameTable$ExtendedType org.jdom.AttributeList org.jdom.Content[] org.jdom.ContentList org.jdom.Element

    SUN:
    * ConstantPoolCacheKlass * ConstantPoolKlass * ConstMethodKlass * MethodDataKlass * MethodKlass * SymbolKlass byte[] char[] com.sun.org.apache.xml.internal.dtm.DTM[] com.sun.org.apache.xml.internal.dtm.ref.ExtendedType java.beans.PropertyDescriptor java.lang.Class java.lang.Long java.lang.ref.WeakReference java.lang.ref.SoftReference java.lang.String java.text.Format[] java.util.concurrent.ConcurrentHashMap$Segment java.util.LinkedList$Entry

    Weblogic:
    com.bea.console.cvo.ConsoleValueObject$PropertyInfo com.bea.jsptools.tree.TreeNode com.bea.netuix.servlets.controls.content.StrutsContent com.bea.netuix.servlets.controls.layout.FlowLayout com.bea.netuix.servlets.controls.layout.GridLayout com.bea.netuix.servlets.controls.layout.Placeholder com.bea.netuix.servlets.controls.page.Book com.bea.netuix.servlets.controls.window.Window[] com.bea.netuix.servlets.controls.window.WindowMode javax.management.modelmbean.ModelMBeanAttributeInfo weblogic.apache.xerces.parsers.SecurityConfiguration weblogic.apache.xerces.util.AugmentationsImpl weblogic.apache.xerces.util.AugmentationsImpl$SmallContainer weblogic.apache.xerces.util.SymbolTable$Entry weblogic.apache.xerces.util.XMLAttributesImpl$Attribute weblogic.apache.xerces.xni.QName weblogic.apache.xerces.xni.QName[] weblogic.ejb.container.cache.CacheKey weblogic.ejb20.manager.SimpleKey weblogic.jdbc.common.internal.ConnectionEnv weblogic.jdbc.common.internal.StatementCacheKey weblogic.jms.common.Item weblogic.jms.common.JMSID weblogic.jms.frontend.FEConnection weblogic.logging.MessageLogger$1 weblogic.logging.WLLogRecord weblogic.rjvm.BubblingAbbrever$BubblingAbbreverEntry weblogic.rjvm.ClassTableEntry weblogic.rjvm.JVMID weblogic.rmi.cluster.ClusterableRemoteRef weblogic.rmi.internal.CollocatedRemoteRef weblogic.rmi.internal.PhantomRef weblogic.rmi.spi.ServiceContext[] weblogic.security.acl.internal.AuthenticatedSubject weblogic.security.acl.internal.AuthenticatedSubject$SealableSet weblogic.servlet.internal.ServletRuntimeMBeanImpl weblogic.transaction.internal.XidImpl weblogic.utils.collections.ConcurrentHashMap$Entry

    Oracle (XA transaction):
    oracle.jdbc.driver.Binder[] oracle.jdbc.driver.OracleDatabaseMetaData oracle.jdbc.driver.T4C7Ocommoncall oracle.jdbc.driver.T4C7Oversion oracle.jdbc.driver.T4C8Oall oracle.jdbc.driver.T4C8Oclose oracle.jdbc.driver.T4C8TTIBfile oracle.jdbc.driver.T4C8TTIBlob oracle.jdbc.driver.T4C8TTIClob oracle.jdbc.driver.T4C8TTIdty oracle.jdbc.driver.T4C8TTILobd oracle.jdbc.driver.T4C8TTIpro oracle.jdbc.driver.T4C8TTIrxh oracle.jdbc.driver.T4C8TTIuds oracle.jdbc.driver.T4CCallableStatement oracle.jdbc.driver.T4CClobAccessor oracle.jdbc.driver.T4CConnection oracle.jdbc.driver.T4CMAREngine oracle.jdbc.driver.T4CNumberAccessor oracle.jdbc.driver.T4CPreparedStatement oracle.jdbc.driver.T4CTTIdcb oracle.jdbc.driver.T4CTTIk2rpc oracle.jdbc.driver.T4CTTIoac oracle.jdbc.driver.T4CTTIoac[] oracle.jdbc.driver.T4CTTIoauthenticate oracle.jdbc.driver.T4CTTIokeyval oracle.jdbc.driver.T4CTTIoscid oracle.jdbc.driver.T4CTTIoses oracle.jdbc.driver.T4CTTIOtxen oracle.jdbc.driver.T4CTTIOtxse oracle.jdbc.driver.T4CTTIsto oracle.jdbc.driver.T4CXAConnection oracle.jdbc.driver.T4CXAResource oracle.jdbc.oracore.OracleTypeADT[] oracle.jdbc.xa.OracleXAResource$XidListEntry oracle.net.ano.Ano oracle.net.ns.ClientProfile oracle.net.ns.NetInputStream oracle.net.ns.NetOutputStream oracle.net.ns.SessionAtts oracle.net.nt.ConnOption oracle.net.nt.ConnStrategy oracle.net.resolver.AddrResolution oracle.sql.CharacterSet1Byte

    We are using BEA Weblogic 9.2 MP3, JDK 1.5.12, and Oracle 10.2.0.4 (for Oracle we found one patch which needs to be applied to avoid the XA transaction memory leaks). But we are stuck on resolving the SUN, BEA Weblogic and Apache leaks. Please suggest... regards, Amit J.

  • SQL Server deadlocks between select/update or multiple selects

    - by RobW
    All of the documentation on SQL Server deadlocks talks about the scenario in which operation 1 locks resource A then attempts to access resource B, while operation 2 locks resource B and attempts to access resource A. However, I quite often see deadlocks between a select and an update, or even between multiple selects, in some of our busy applications. I find some of the finer points of the deadlock trace output pretty impenetrable, but I would really just like to understand what can cause a deadlock between two single operations. Surely if a select has a read lock, the update should just wait before obtaining an exclusive lock, and vice versa? This is happening on SQL Server 2005, not that I think that makes a difference.
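
    For context, the classic single-statement case: even one SELECT and one UPDATE each touch multiple resources. A SELECT driven by a nonclustered index does a lookup into the clustered index, while an UPDATE of the same row modifies the clustered index first and the nonclustered index second; two resources, acquired in opposite orders, which is the same A/B pattern the documentation describes. To see exactly which resources are involved, one standard option (available from SQL Server 2005 on) is trace flag 1222, which writes readable deadlock detail to the error log:

        -- Server-wide: log participants, resources and statements
        -- for every deadlock to the SQL Server error log.
        DBCC TRACEON (1222, -1);

    The resource-list section of that output names the specific indexes, which usually makes the lock order visible.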

  • SQL Server: Profiling statements inside a User-Defined Function

    - by Craig Walker
    I'm trying to use SQL Server Profiler (2005) to track down some application performance problems. One of the calls being made is to a table-valued user-defined function. This function wraps a select that joins several tables together. In SQL Server Profiler, the call to the UDF is logged. However, the select that underlies the UDF isn't being logged at all. Because of this, I'm not getting useful data on which tables & indexes are being hit. I'd like to feed this info into the Database Tuning Advisor for some indexing advice. Is there any way (short of unwrapping the queries themselves) to log the tables called by UDFs in Profiler?
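
    Two things that may help, offered as suggestions rather than a definitive answer: Profiler only shows statements executing inside modules if the trace includes the SP:StmtCompleted event (the plain TSQL events won't surface the select inside the UDF), so check that event is selected. Alternatively, the statement should be visible in the plan cache, which you can query directly, for example:

        -- Statements (including those inside UDFs) by cumulative reads.
        SELECT TOP 20
            qs.execution_count,
            qs.total_logical_reads,
            SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                (CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;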

  • SSMS cannot connect to default SQL Server instance without specifying port number

    - by Oliver
    I have multiple SQL Server 2005 instances on a box. From SSMS on my desktop I can connect to that box's named instances with no problem. After some recent network configuration changes, when I want to connect to the default instance from SSMS on my desktop, I have to specify the port number. Before the network changes, I did not have to specify the port number of the default instance. If I remote to any other box (including the one in question), and use that box's SSMS to connect to that default instance, success. From my desktop, and only from my desktop, I have to specify the port number. Is it a SQL Server configuration that I've missed? Is it possible something in my PC's configuration is getting in the way? Where would I look, or what could I pass on to the network folks to help them resolve this? Any help is appreciated.

  • How do I insert a unique key into a table?

    - by Ben McCormack
    I want to insert data into a table where I don't know the next unique key that I need. I'm not sure how to format my INSERT query so that the value of the Key field is 1 greater than the maximum value for the key in the table. I know this is a hack, but I'm just running a quick test against a database and need to make sure I always send over a unique key. Here's the SQL I have so far:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest]
            ([KeyFld]
            ,[Int1])
        VALUES
            ((SELECT Max([KeyFld]) FROM [dbo].[aDataTypesTest]) + 1
            ,1)

    which errors out with:

        Msg 1046, Level 15, State 1, Line 5
        Subqueries are not allowed in this context. Only scalar expressions are allowed.

    I'm not able to modify the underlying database table. What do I need to do to ensure a unique insert in my INSERT SQL code?
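
    For reference, a sketch of one way around that error: SQL Server rejects subqueries inside VALUES, but INSERT ... SELECT is allowed. The locking hints serialize concurrent inserts, which is acceptable for this quick-test scenario:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest] ([KeyFld], [Int1])
        SELECT ISNULL(MAX([KeyFld]), 0) + 1, 1   -- ISNULL covers an empty table
        FROM [CMS2000].[dbo].[aDataTypesTest] WITH (TABLOCKX, HOLDLOCK);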

  • Replacing XML reserved characters in SQL Server 2005

    - by Barn
    I'm working on a system that takes relational data from a SQL Server DB and uses SSIS to produce an XML extract using SQL Server 2005's FOR XML PATH command and a schema. The problem lies with replacing the XML reserved characters: FOR XML PATH only replaces <, >, and &, not ' and ", so I need a way of replacing these myself. I've tried pre-processing the fields in the database to replace XML reserved characters with their entitised equivalents (e.g. & becomes &amp;), but once these fields are used to construct XML using FOR XML, the leading & is itself replaced, so I end up with &amp;amp; where I should have &amp;. What I've tried so far is altering the element's contents after the XML has been constructed, using XQuery inside SQL Server like so:

        DECLARE @data VARCHAR(MAX)
        SET @data = CONVERT(VARCHAR(MAX), [my xml column].query('data(/root/node_i_want)'))
        SELECT @data = [function to replace quotes etc](@data)
        SET [my xml column].modify('replace value of (/root/node_i_want)[1] with sql:variable("@data")')

    but I get the same problem. Essentially: is there something wrong in what I'm doing above, or is there a way to tell FOR XML to entitise other characters, or something like that? Basically anything short of having to write a program to change the XML after it has been assembled in large batches and saved to files!
