Search Results

Search found 40829 results on 1634 pages for 'sql reporting services'.

Page 115/1634 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Windows service works after Local System account is used before 'This account'

    - by Gauls
    Hi, all of a sudden my Windows service app does not start after installation ("Some services stop automatically if they have no work to do") when it runs under a custom user account. If I change the logon setting to use the Local System account and click Start, it works fine. Then, when I go back, change it to use 'This account' (a local custom user under the Users group) and click Start again, it also works fine. Why didn't it work in the first place? Thanks

    Read the article

  • Difference between Software Services & IT Consulting.

    - by Rohit
    I have been looking at the sites of IT companies, but I am confused by the terms they use for their offerings. Some write "Software Services, IT Consulting", some write "Technology, Consulting", and some write "Product Engineering, Application Development". Can someone clarify the difference between: (1) Software Services and IT Consulting? (2) Technology and Consulting?

    Read the article

  • Windows XP "automatic" services not starting

    - by Mala
    Hi, I have a fresh install of WinXP. The main problem is that every time I start it up, I have to go into Administrative Tools and start the needed services, such as DCOM, RTP, DHCP, etc. The only services that start automatically are: Plug and Play, Remote Procedure Call (RPC), Windows Audio, and Workstation. All of the rest have to be started manually, in spite of the fact that they're listed as "Automatic". Why won't they start on their own like they should? Thanks, Mala

    Read the article

  • Solaris: Is it OK to disable font services?

    - by cjavapro
    Is it OK to disable these services?

        # svcs -l '*font*'
        fmri         svc:/application/font/stfsloader:default
        name         Standard Type Services Framework (STSF) Font Server loader
        enabled      true
        state        online
        next_state   none
        state_time   Sun May 30 17:58:14 2010
        restarter    svc:/network/inetd:default

        fmri         svc:/application/font/fc-cache:default
        name         FontConfig Cache Builder
        enabled      true
        state        online
        next_state   none
        state_time   Sun May 30 17:58:15 2010
        logfile      /var/svc/log/application-font-fc-cache:default.log
        restarter    svc:/system/svc/restarter:default
        dependency   require_all/none svc:/system/filesystem/local (online)
        dependency   require_all/refresh file://localhost/etc/fonts/fonts.conf (online)
        dependency   require_all/none file://localhost/usr/bin/fc-cache (online)
        #

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database that is running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any queries on that table that take more than a couple of seconds, any SQL actions performed by our game will fail with the error: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not be running them whilst our game is up? I'm using MySQL Workbench to run the queries. Here's an example.... SELECT * FROM BlueBoxEngineDB.Transfer WHERE Amount = 1000 AND FromUserId = 4 AND Status='Complete'; As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated - Thanks!
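    One hedged mitigation, assuming the table and column names shown in the example query, is a composite index so the ad-hoc lookup can seek rather than scan the 2GB table while the game's own statements keep running. This is a sketch, not a recommendation specific to this schema:

        -- Sketch only: column names are taken from the example query; check selectivity
        -- first, since every extra index adds write overhead to the game's traffic.
        CREATE INDEX ix_transfer_fromuser_status_amount
            ON BlueBoxEngineDB.Transfer (FromUserId, Status, Amount);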

    Read the article

  • Collation errors in business

    - by Rob Farley
    At the PASS Summit last month, I did a set (Lightning Talk) about collation, and in particular, the difference between the “English” spoken by people from the US, Australia and the UK. One of the examples I gave was that in the US drivers might stop for gas, whereas in Australia, they just open the window a little. This is what’s known as a paraprosdokian, where you suddenly realise you misunderstood the first part of the sentence, based on what was said in the second. My current favourite is Emo Philips’ line “I like to play chess with old men in the park, but it can be hard to find thirty-two of them.” Essentially, this is a collation error, one that good comedians can get mileage from. Unfortunately, collation is at its worst when we have a computer comparing two things in different collations. They might look the same, and sound the same, but if one of the things is in SQL English, and the other one is in Windows English, the poor database server (with no sense of humour) will get suspicious of developers (who all have senses of humour, obviously), and declare a collation error, worried that it might not realise some nuance of the language. One example is the common scenario of a case-sensitive collation and a case-insensitive one. One may think that “Rob” and “rob” are the same, but the other might not. Clearly one of them is my name, and the other is a verb which means to steal (people called “Nick” have the same problem, of course), but I have no idea whether “Rob” and “rob” should be considered the same or not – it depends on the collation. I told a lie before – collation isn’t at its worst in the computer world, because the computer has the sense to complain about the collation issue. People don’t. People will say something, with their own understanding of what they mean. Other people will listen, and apply their own collation to it. I remember when someone was asking me about a situation which had annoyed me. They asked if I was ‘pissed’, and I said yes. I meant that I was annoyed, but they were asking if I’d been drinking. It took a moment for us to realise the misunderstanding. In business, the problem is escalated. A business user may explain something in a particular way, using terminology that they understand, but using words that mean something else to a technical person. I remember a situation with a checkbox on a form (back in VB6 days from memory). It was used to indicate that something was approved, and indicated whether a particular database field should store True or False – nothing more. However, the client understood it to mean that an entire workflow system would be implemented, with different users having permission to approve items and more. The project manager I’d just taken over from clearly hadn’t appreciated that, and I faced a situation of explaining the misunderstanding to the client. Lots of fun... Collation errors aren’t just a database setting that you can ignore. You need to remember that Americans speak a different type of English to Aussies and Poms, and techies speak a different language to their clients.
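    As a small illustration of the case-sensitivity point (a sketch added here, not part of the original post), the same pair of values compares differently under two collations; mixing collations in a comparison is what raises SQL Server's "cannot resolve the collation conflict" error:

        -- 'Rob' and 'rob' match under a case-insensitive collation, but not a case-sensitive one.
        SELECT CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CI_AS THEN 'same' ELSE 'different' END AS CaseInsensitive,
               CASE WHEN 'Rob' = 'rob' COLLATE Latin1_General_CS_AS THEN 'same' ELSE 'different' END AS CaseSensitive;
        -- Returns: same, different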

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley
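    A minimal sketch of the kind of hybrid option described, assuming the backup-to-URL and backup-encryption syntax that shipped around SQL Server 2014; the database, storage account, credential and certificate names below are all placeholders:

        -- Back up to cloud storage, encrypted so storage administrators cannot read it.
        BACKUP DATABASE SalesDB
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
        WITH CREDENTIAL = 'AzureBackupCredential',
             COMPRESSION,
             ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);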

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end, as I've only recently moved to an SQL Server back end. I'm wondering how many of those queries I should save as stored procedures/views on SQL Server.

    About the system: the number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easily.

    I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous?

    Why my question isn't a duplicate: it seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try to explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There are two ways that people could have answered this: (1) I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) mean you should move most of them to the server anyway (OR: no, it doesn't outweigh the benefit, keep them in Access). (2) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of them. OK sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of Access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.
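    As a hedged illustration of the kind of move being considered (all object names here are hypothetical), an Access saved query can become a server-side stored procedure that Access then calls as a pass-through query, so the filtering happens on the server instead of pulling rows across the network:

        -- Sketch: a parameterised server-side version of a former Access saved query.
        CREATE PROCEDURE dbo.usp_OrdersByCustomer
            @CustomerID int
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT OrderID, OrderDate, TotalDue
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
        END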

    Read the article

  • Install SQL Server Developer Edition 32-bit (or Enterprise Edition) on Windows 7 Home Premium 64-bit

    - by ali62b
    Is there any workaround to successfully install SQL Server 2008 32-bit on Windows 7 Home Premium 64-bit? In case it's relevant: I first installed VS 2008 SP1 on my machine, and when I click on the install.exe file to install SQL Server 2008 (Developer Edition) I get an error related to the .NET Framework version that is already installed on my PC. (I get the same error trying to install Enterprise Edition.)

    Read the article

  • How do I get SQL Profiler to show statements with column names like 'password'?

    - by Kev
    I'm profiling a database just now and need to see the UPDATE and INSERT statements being executed on a particular table. However, because the table has a 'Password' column, SQL Profiler is being understandably cautious and replacing the TextData column with: -- 'password' was found in the text of this event. -- The text has been replaced with this comment for security reasons. How do I prevent it from doing this? I need to see the SQL statement being executed.
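    Profiler's masking of events containing 'password' does not appear to have a switch to turn it off, so one hedged workaround (a sketch, not a way to disable the masking) is to read the text of recently executed statements that are still in the plan cache instead; this needs VIEW SERVER STATE permission, and the table name below is a placeholder:

        -- Look for recent ad-hoc statements touching the table of interest.
        SELECT TOP (50) st.text, qs.last_execution_time
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%YourTable%'
        ORDER BY qs.last_execution_time DESC;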

    Read the article

  • Alias a linked server in SQL Server Management Studio?

    - by absentmindeduk
    Hoping someone can help - is there a way in SQL Server Management Studio 2008 R2 to alias a linked SQL server? I have a server, added by IP address, for which I do not have the login credentials - however, as the connection is already set up, I can log in OK. The issue is that this is a dev environment prior to a live deployment, and the IP I have as a linked server needs to be 'accessible' to my stored procs under a different name, e.g. 'myserver' rather than 192.168.xxx.xxx... Any help much appreciated.
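    A hedged sketch of one approach: add a second linked server entry under the friendly name, pointing its data source at the same IP, so the stored procs can reference 'myserver'. The provider and login details below are placeholders, the IP stays elided as in the question, and this does assume the remote login details are available to map:

        EXEC sp_addlinkedserver
            @server     = N'myserver',        -- the alias the stored procs will use
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'192.168.xxx.xxx'; -- the existing server's IP address

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'myserver',
            @useself     = N'False',
            @rmtuser     = N'remote_login',   -- placeholder credentials
            @rmtpassword = N'********';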

    Read the article

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table that appears there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is being crunched by a series of scripts and SPSS, and I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, already done, externally determined and handed down from on high. It's print-oriented, not webby. Thanks in advance.

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes. EDIT: Let me be clear, I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly. I don't want a tool that targets business people. I want a tool that understands what a star and snowflake schema is. I want to be able to tell it what the fact tables are and what the dimension tables are, and then have it create a UI on top of them. This is an easier problem to solve for the tool vendor because I am spoon-feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this fact. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days before suddenly starting to work again. The errors range from not being able to view the tree root in a browser, down to being able to enter parameters on a report but crashing before the report can generate. Looking at the logs, there is one error and one warning which seem to correspond somewhat.

    ERROR:
    Event Type: Error
    Event Source: Report Server (SQL2K5)
    Event Category: Management
    Event ID: 107
    Date: 2/13/2009
    Time: 11:17:19 AM
    User: N/A
    Computer: ********
    Description: Report Server (SQL2K5) cannot connect to the report server database. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    WARNING (always comes before the previous error):
    Event code: 3005
    Event message: An unhandled exception has occurred.
    Event time: 2/13/2009 11:06:48 AM
    Event time (UTC): 2/13/2009 5:06:48 PM
    Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772
    Event sequence: 550
    Event occurrence: 5
    Event detail code: 0
    Process information: Process ID: 5368, Process name: w3wp.exe, Account name: NT AUTHORITY\NETWORK SERVICE
    Exception information: Exception type: ReportServerException, Exception message: For more information about this error navigate to the report server on the local server machine, or enable remote errors.

    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Does anyone have any ideas on what could be causing these issues?

    Edit: Tried both suggestions below several months ago to no avail. The issue hasn't arisen since; maybe something out of my control has changed....

    Read the article

  • Integrated Security on Reporting Services XML Datasource

    - by Nathan
    Hey all, I am working on setting up my report server to use a web service as an XML datasource. I seem to be having authentication issues between the web service and the report when I choose to use Integrated Security. Here's what I have:

    1) I have a website with an exposed service. This website is configured to run ONLY on Integrated Security; all other modes are turned off AND 'Enable anonymous access' is turned off under directory security.
    2) Within the Web.config of the website, I have the authentication mode set to Windows.
    3) I have the report datasource set to be an XML data source. I have the correct URL to the service and have it set to Windows Integrated Security.

    Since I am making a hop from the browser to the Reporting Server to the web service, I wonder if I am having an issue with Kerberos, but I am not sure. When I try to access the service, I get a 401 error. Here are the IIS log entries that are generated:

    2011-01-07 14:52:12 W3SVC IP_ADDY POST /URL.asmx - 80 - IP_ADDY - 401 1 0
    2011-01-07 14:52:12 W3SVC IP_ADDY POST /URL.asmx - 80 - IP_ADDY - 401 1 5

    Has anyone worked out this issue before? Thanks!

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task however we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method so I turned to Visual Studio’s Find/Replace… feature Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be change. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 database, I get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail as sod on sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn’t look like there’s a join here either, but look at the plan.
That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley and the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would have the SalesOrderID column as part of it as the CIX key still). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
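    As a footnote added here (a sketch, not part of the original post's demos), the composite index the post describes as the best fit for those last two queries could be created like this, with SalesOrderID implicitly carried along as the clustered index key:

        -- Sketch against AdventureWorks2012; verify it suits the wider workload before adding it.
        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID_CustomerID
            ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);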

    Read the article

  • SQL SERVER: Configure Management Data Collection in Quick Steps (T-SQL Tuesday #005)

    This article was written as a response to T-SQL Tuesday #005: Reporting. The three most important components of any computer and server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection. Management Data Collection generates the [...]

    Read the article

  • Yet Another SQL Strategy for Versioned Data

    There is a popular design for a database that requires a built-in audit-trail of amendments and additions, where data is never deleted, but merely superseded by a later version. Whilst this is conceptually simple, it has always made for complicated SQL for reporting the latest version of data. Alex joins the debate on the best way of doing this with an example using an indexed view and the filtered index.
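    For context (a generic sketch with hypothetical table and column names, not the indexed-view approach the article itself demonstrates), the usual query for reporting only the latest version of each row looks something like this:

        WITH Ranked AS
        (
            SELECT CustomerID, Name, ValidFrom,
                   ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY ValidFrom DESC) AS rn
            FROM dbo.CustomerVersion
        )
        SELECT CustomerID, Name, ValidFrom
        FROM Ranked
        WHERE rn = 1;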

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        ------
        ID (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to the database having that logic, as this prevents any other type of client from inserting duplicate data. My other issue is related to LINQ To SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    And then I iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great - I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail - even when there isn't a row in the table that would cause a unique violation. Can anyone explain this? P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected - indicating that it's something to do with the LINQ To SQL DataContext. Thanks.
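    One database-level option worth knowing about (a sketch, and a behavioural change to test carefully): if the unique index is created with IGNORE_DUP_KEY, SQL Server discards duplicate rows with a warning instead of raising an error, so the client never sees an exception at all:

        -- The index name here is a placeholder; the existing unique index would need to be
        -- dropped and recreated with this option.
        CREATE UNIQUE INDEX UX_Users_Username
            ON dbo.Users (Username)
            WITH (IGNORE_DUP_KEY = ON);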

    Read the article

  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    OK, this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block to the function:

        CREATE FUNCTION u_TryCastInt
        (
            @Value as VARCHAR(MAX)
        )
        RETURNS Int
        AS
        BEGIN
            DECLARE @Output AS Int
            BEGIN TRY
                SET @Output = CONVERT(Int, @Value)
            END TRY
            BEGIN CATCH
                SET @Output = 0
            END CATCH
            RETURN @Output
        END

    Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using TRY statements within a scalar function, which got me thinking: is error handling in a function even possible? The goal here is to make a robust version of the Convert or Cast functions that allows a SELECT statement to carry through despite conversion errors. For example, take the following:

        CREATE TABLE tblTest
        (
            f1 VARCHAR(50)
        )
        GO
        INSERT INTO tblTest(f1) VALUES('1')
        INSERT INTO tblTest(f1) VALUES('2')
        INSERT INTO tblTest(f1) VALUES('3')
        INSERT INTO tblTest(f1) VALUES('f')
        INSERT INTO tblTest(f1) VALUES('5')
        INSERT INTO tblTest(f1) VALUES('1.1')

        SELECT CONVERT(int,f1) AS f1_num FROM tblTest

        DROP TABLE tblTest

    It never reaches the point of dropping the table because execution stops when it tries to convert 'f' to an integer. I want to be able to do something like this:

        SELECT u_TryCastInt(f1) AS f1_num FROM tblTest

        f1_num
        __________
        1
        2
        3
        0
        5
        0

    Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try and expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
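    TRY/CATCH isn't permitted inside user-defined functions, so as a hedged sketch (deliberately conservative, not a full answer) the conversion can be guarded by validating the string first. This simple version handles only non-negative integers of up to nine digits and returns 0 for everything else, which also keeps it portable to SQL Server 2000 if varchar(max) is swapped for varchar(8000):

        CREATE FUNCTION dbo.u_TryCastInt
        (
            @Value VARCHAR(MAX)
        )
        RETURNS Int
        AS
        BEGIN
            RETURN CASE
                       WHEN @Value NOT LIKE '%[^0-9]%'        -- digits only
                            AND LEN(@Value) BETWEEN 1 AND 9   -- avoid int overflow
                       THEN CONVERT(Int, @Value)
                       ELSE 0
                   END
        END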

    Read the article
