Search Results

Search found 113039 results on 4522 pages for 'database sql server'.


  • SQL Server Agent 2005 job runs but no output

    - by alimack
    Essentially I have a job which runs in BIDS and as a standalone package, but while it runs under the SQL Server Agent it doesn't complete properly (no error messages, though). The job steps are: 1) Delete all rows from the table; 2) Use a Foreach Loop to fill up the table from Excel spreadsheets; 3) Clean up the table. I've tried this [MS page][1] (steps 1 & 2); I didn't see any need to start changing from server-side security. Also [this page][2] on SQLServerCentral.com; no resolution. How can I get error logging or a fix? Note: I've reposted this from Server Fault as it's one of those questions that's not pure admin or programming. I have logged in as the proxy account I'm running this under, and the job runs standalone but complains that the Excel tables are empty.
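
    One way to surface whatever the Agent run is swallowing is to redirect the step's output to a log file; a minimal sketch, with hypothetical job, step and path names:

      USE msdb;
      GO
      -- Write everything the step prints to a file so failures stop being silent
      EXEC dbo.sp_update_jobstep
           @job_name         = N'LoadExcelData',
           @step_id          = 1,
           @output_file_name = N'C:\Logs\LoadExcelData_step1.log';

    If the package reads Excel, it is also worth checking that the step runs in 32-bit mode, since the Jet provider for Excel is 32-bit only.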

    Read the article

  • Selecting a sequence NEXTVAL for multiple rows

    - by stringpoet
    I am building a SQL Server job to pull data from SQL Server into an Oracle database through a linked server. The table I need to populate has a sequence for the name ID, which is my primary key. I'm having trouble figuring out a way to do this simply, without some lengthy code. Here's what I have so far for the SELECT portion (some actual names obfuscated): SELECT (SELECT NEXTVAL FROM OPENQUERY(MYSERVER, 'SELECT ORCL.NAME_SEQNO.NEXTVAL FROM DUAL')), psn.BirthDate, psn.FirstName, psn.MiddleName, psn.LastName, c.REGION_CODE FROM Person psn LEFT JOIN MYSERVER..ORCL.COUNTRY c ON c.COUNTRY_CODE = psn.Country MYSERVER is the linked Oracle server; ORCL is obviously the schema. Person is a local table on the SQL Server database where the query is being executed. When I run this query, I get the exact same value for the NEXTVAL in every record. What I need is for it to generate a new value for each returned record. I found this similar question, with its answers, but am unsure how to apply it to my case (if even possible): Query several NEXTVAL from sequence in one statement
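
    The behaviour follows from how OPENQUERY works: the pass-through query is executed once, so every row reuses the single value it fetched. One workaround is to push the whole INSERT to Oracle so NEXTVAL fires per row; a sketch, with hypothetical staging and target table names (EXEC ... AT requires RPC OUT enabled on the linked server):

      -- Stage the SQL Server rows on the Oracle side first
      INSERT INTO MYSERVER..ORCL.PERSON_STAGE (BIRTH_DATE, FIRST_NAME, MIDDLE_NAME, LAST_NAME)
      SELECT psn.BirthDate, psn.FirstName, psn.MiddleName, psn.LastName
      FROM Person psn;

      -- Then let Oracle evaluate NEXTVAL once per row in a pass-through statement
      EXEC ('INSERT INTO ORCL.PERSON_TARGET (NAME_ID, BIRTH_DATE, FIRST_NAME, MIDDLE_NAME, LAST_NAME)
             SELECT ORCL.NAME_SEQNO.NEXTVAL, BIRTH_DATE, FIRST_NAME, MIDDLE_NAME, LAST_NAME
             FROM ORCL.PERSON_STAGE') AT MYSERVER;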

    Read the article

  • Data validation between SQL replicated tables

    - by Vikram
    Hi All, I am trying to figure out a way to validate the data in my publisher and subscriber in SQL 2005 replication. I thought of using sp_publication_validation, but it needs db_owner permission and we are not allowed to have that in our company. So I did a bit more reading and found two other SPs that I think will work for me. The first one is sp_article_validation, which I plan to run on the publisher. For each article that I call this SP on, it's going to give the row count and a checksum. With that info, I intend to call sp_table_validation on the subscriber, passing the row count and checksum from the previous SP, thereby validating both tables. What do you guys think? Is this a proven way to validate data in replication? There is very little documentation on these SPs. Here are the links: sp_table_validation - http://msdn.microsoft.com/en-us/library/aa239370(v=sql.80).aspx sp_article_validation - http://msdn.microsoft.com/en-us/library/ms177511(v=SQL.90).aspx Thanks!
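
    For what it's worth, the call pattern would look roughly like this; publication, article and figures are hypothetical, and the parameter details are in the linked docs:

      -- On the publisher: report row count and checksum for one article
      EXEC sp_article_validation
           @publication   = N'MyPublication',
           @article       = N'MyArticle',
           @rowcount_only = 2;   -- 2 = row count and binary checksum

      -- On the subscriber: compare against the figures reported above
      EXEC sp_table_validation
           @table             = N'dbo.MyArticle',
           @expected_rowcount = 12345,
           @expected_checksum = 987654321,
           @rowcount_only     = 2;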

    Read the article

  • NetBackup also works for Oracle Database 11gR2 backups in an Exadata V2 environment

    - by Fekete Zoltán
    Oracle 11gR2 databases can also be backed up with the Veritas NetBackup software on Oracle Enterprise Linux (using RMAN), in 64-bit environments. In the documentation we have to look for the Red Hat-related information, since according to http://seer.entsupport.symantec.com/docs/337048.htm, "Oracle Enterprise Linux (OEL)" is "Supported based on NetBackup Red Hat Enterprise Linux 4.x/5.x Client, Server, and Oracle Agent support. BMR is not supported." NetBackup compatibility lists: http://seer.entsupport.symantec.com/docs/303344.htm - NetBackup 7 is compatible with Oracle Exadata V2: http://seer.entsupport.symantec.com/docs/340295.htm - For NetBackup 6.x versions the following patch has to be installed: NB_6.5.5_ET1940073_1_347227.zip is a NetBackup 6.5.5 EEB (Emergency Engineering Binary) for Oracle Clients. http://seer.entsupport.symantec.com/docs/347227.htm and http://support.veritas.com/docs/279048.

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases. It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are implemented. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance. Database settings to enforce on each user database: Auto Close and Auto Shrink properties of the database set to False; Auto Create Statistics and Auto Update Statistics set to True; Compatibility Level of all user databases set to 100; Page Verify set to CHECKSUM; Recovery Model of all user databases set to Full; Restrict Access set to MULTI_USER. Configure a Policy to Verify Database Settings 1. Connect to a SQL Server 2008 instance using SQL Server Management Studio. 2. In Object Explorer, click Management > Policy Management and you will be able to see Policies, Conditions & Facets as child nodes. 3. Right-click Policies and then select New Policy… from the drop-down list, as shown in the snippet below, to open the Create New Policy popup window. 4. In the Create New Policy popup window you need to provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open the Create New Condition window. 5. In the Create New Condition popup window you need to provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list choose the Database Options facet, as shown in the snippet below. Under Expression, select the Field value @AutoClose, choose the Operator ' = ', and finally choose the Value False. Now that you have successfully added the first field you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all of the fields of the Database Options facet shown above, click OK to save the changes and return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
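
    For reference, here is what that checklist looks like when applied directly with T-SQL to one database (database name hypothetical); the policy automates checking and enforcing the equivalent facet values across all databases:

      ALTER DATABASE [ProdDB] SET AUTO_CLOSE OFF;
      ALTER DATABASE [ProdDB] SET AUTO_SHRINK OFF;
      ALTER DATABASE [ProdDB] SET AUTO_CREATE_STATISTICS ON;
      ALTER DATABASE [ProdDB] SET AUTO_UPDATE_STATISTICS ON;
      ALTER DATABASE [ProdDB] SET COMPATIBILITY_LEVEL = 100;
      ALTER DATABASE [ProdDB] SET PAGE_VERIFY CHECKSUM;
      ALTER DATABASE [ProdDB] SET RECOVERY FULL;
      ALTER DATABASE [ProdDB] SET MULTI_USER;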

    Read the article

  • Databases and the CI server

    - by mlk
    I have a CI server (Hudson) which merrily builds, runs unit tests and deploys to the development environment, but I'd now like to get it running the integration tests. The integration tests will hit a database, and that database will constantly be changed to contain the data relevant to the test in question. This however leads to a problem - how do I make sure the database is not being splatted with data for one test and then that data being overwritten by a second project before the first set of tests completes? I am currently using the "hope" method, which is not working out too badly at the moment, but mostly due to the fact that we only have a small number of integration tests set up on CI. As I see it I have the following options: Test-local (in-memory) databases - I'm not sure if any in-memory databases handle all the scariness of Oracle's triggers and packages etc., and anything less I don't feel would be a worthwhile test. CI executor-local databases - a fair amount of work would be needed to set these up and keep 'em up to date, but definitely an option (most of the work is already done to keep the current CI database up to date). A single "integration test" executor - likely the easiest to implement, but it would mean the integration tests could fall quite far behind. Locking the database (or a set of tables) - see the sketch below. I'm sure I've missed some ways (please add them). How do you run database-based integration tests on the CI server? What issues have you had and what method do you recommend? (Note: While I use Hudson, I'm happy to accept answers for any CI server; the ideas I'm sure will be portable, even if the details are not). Cheers,      Mlk
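
    On the locking option: since the test database here is Oracle, a session-level application lock via DBMS_LOCK can serialize test runs. A minimal sketch (lock name hypothetical; the test user needs EXECUTE on DBMS_LOCK):

      DECLARE
        l_handle VARCHAR2(128);
        l_result INTEGER;
      BEGIN
        DBMS_LOCK.ALLOCATE_UNIQUE('integration_test_db', l_handle);
        l_result := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE, timeout => 600);
        IF l_result <> 0 THEN
          RAISE_APPLICATION_ERROR(-20001, 'Could not acquire integration-test lock');
        END IF;
        -- ... run the test suite ...
        l_result := DBMS_LOCK.RELEASE(l_handle);
      END;
      /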

    Read the article

  • Oracle Database Appliance - How to Sell a Unique Product : Webcast Replay

    - by Cinzia Mascanzoni
    Learn about: ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable. Feedback from early customer wins: what can we learn? Objection handling: overcoming the most common customer questions. Going beyond the database: the ODA ecosystem for applications, backup & more. If you missed the webcasts in April, go to the EMEA VAD Resource Center - Enablement Tab, click here and follow the instructions to access the replay.

    Read the article

  • Looking to create website that can have custom GUI and database per user

    - by riley3131
    I have developed an MS Access database for a company to track data regarding the production of a certain commodity. It has many, many tables, forms, reports, etc. These were all done as the user requested, and resemble the user's previously used system, mostly printed worksheets and Excel workbooks. This has created a central location for all information and has allowed the company to compare data in a new way. I am now looking to do this for other companies, but would like to switch to a web application. Here is my question: what is the best way to create unique solutions for individual companies that can have around 100 users each? I would love to create one site that would serve all parties, but that would ruin the customizable nature of what I am developing. I love the ability to create reports, Excel sheets, PDFs, graphs, etc. with Access, but am tired of relying on my customers' software, servers, etc. I have some experience with WAMP, but I am far better at VBA. I was okay at PHP, and was getting a grasp on JavaScript a few years back. I am also trying to decide whether to go with WAMP or LAMP, if web is the best choice. Also, should I try to set up one site for all users that goes to company-specific pages after log-in, or individual sites for each company? Should I host or use a service?

    Read the article

  • [Japanese-language article on Java EE and WebLogic Server (title garbled in extraction) | WebLogic Channel]

    - by ???02
    [The Japanese text of this article was garbled during extraction and cannot be recovered. The fragments that survive indicate it is a report from the "WebLogic & Java EE" seminar held on 2011-09-06, covering: the Oracle WebLogic Server product line (Standard Edition, Enterprise Edition and WebLogic Suite, the last bundling JRockit Flight Recorder/Mission Control, Oracle Coherence, JRockit Real Time and Oracle Enterprise Manager); integration with Oracle Real Application Clusters via Active GridLink for RAC; the WebLogic Server 10.3.6 and 12.1.1 releases, including Oracle Virtual Assembly Builder, Oracle Exalogic Elastic Cloud support and the InfiniBand Socket Direct Protocol; Java EE 6 features such as the optional web.xml, JSF 2.0 with Facelets and EJB 3.1, with a comparison against Tomcat (Java Servlet, JSP, Expression Language, Struts); and the JRockit Flight Recorder and JRockit Mission Control tooling, including its Eclipse-based UI.]

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) at OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS). Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (OK, SQL Azure database) is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; however, I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data, the credentials you need are: Server mhknbn2kdz.database.windows.net  Database AdventureWorks2012 User sqlfamily Password sqlf@m1ly Type those into SSMS and away you go; the data is provided in four tables [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist]: I figured this would be a good candidate for a Power View report so I fired up Excel 2013 and built such a report to slice'n'dice through the data – here are some screenshots that should give you a flavour of what is available: A view of all the available data Where do all the gymnastics medals go? Which countries do the top ten all-time medal winners come from? You get the idea. There is masses of information here and if you have Excel 2013 handy Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx so just hit the link and download to play to your heart's content. Party on, people! As I said above the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure" which I first announced in March 2013 at AdventureWorks2012 now available for all on SQL Azure. I'll repeat the pertinent parts of that blog post here: I am pleased to announce that as of today … [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use but SQL Azure is of course not free, so before I give you the credentials please lend me your ears (well, eyes) for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected] Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.
    If the community contributes more than we need then there are a number of additional things that could be done: Host additional databases (Northwind, anyone?) Host in more datacentres (this first one is in Western Europe) Make a charitable donation That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution. I'd like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance. @Jamiet
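
    If you just want to confirm what's there once you've connected, a quick catalog query does it (nothing beyond the four tables listed above is assumed):

      SELECT s.name + '.' + t.name AS table_name
      FROM sys.tables AS t
      JOIN sys.schemas AS s ON s.schema_id = t.schema_id
      WHERE s.name = 'olympics'
      ORDER BY t.name;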

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley
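
    The hybrid pattern described here, encrypted backups shipped to cloud storage, looks roughly like this in the backup-to-URL syntax introduced around SQL Server 2014; the credential, certificate and URL names below are hypothetical:

      CREATE CREDENTIAL AzureBackupCred
      WITH IDENTITY = 'mystorageaccount',          -- storage account name
           SECRET   = '<storage access key>';

      -- MyBackupCert must already exist in master for the encryption clause
      BACKUP DATABASE [MyDB]
      TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDB.bak'
      WITH CREDENTIAL = 'AzureBackupCred',
           ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);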

    Read the article

  • SQL Server 2005 Reporting Services (x64) on Windows 2K8 -> CleanCurrentUserName() not found

    - by Steven Pardo
    I have installed SQL Server 2005 three times now on the same box. I cleaned up registry settings, files, you name it. All along, I have been trying to install SQL Server 2005 Database and Reporting Services (x64) on a Windows 2008 server. I have also applied the SP3 patch, installing and restarting the server at every point. I have installed multiple instances (SQLDEV64, SQLQA64, SQLSTAGE64) of the Database and Reporting Services. I started to go through the Reporting Services Configuration Manager, installing the Reporting database along with setting up IIS. When I go to test the website I get the following, and therein lies my question: how can I get around this error? http://localhost/reportserver Reporting Services Error -------------------------------------------------------------------------------- An internal error occurred on the report server. See the error log for more details. (rsInternalError) Method not found: 'Void Microsoft.ReportingServices.Diagnostics.UserUtil.CleanCurrentUserName()'. -------------------------------------------------------------------------------- SQL Server Reporting Services Any help would be greatly appreciated.
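
    A "Method not found" error like this usually points at mismatched binaries, i.e. the Reporting Services components and the shared assemblies not all being at the same patch level. A quick sanity check (not a fix) of what the engine reports:

      SELECT SERVERPROPERTY('ProductVersion') AS product_version,  -- e.g. 9.00.4035
             SERVERPROPERTY('ProductLevel')   AS product_level;    -- e.g. SP3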

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi Friends, I need your suggestions on tracing this issue. We run data load jobs in the early morning, loading data from an Excel file into a SQL Server 2005 DB. When the job runs on the production server, it often takes 2 to 3 hours to complete. We could drill down to one job step which is taking 99% of the total time to finish. While running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, it takes hours on the production server when it runs in the early morning as part of the job. The production server always gets stuck at that very job step. I would like to run a trace on that job step (around 10 stored procs run for each user in a while loop within the job step) and collect the info to figure out the issue. What ways are available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs and not for a certain time period on the production server, as a trace gives lots of information and it becomes very difficult for me (not being a DBA) to analyze that much trace information and figure out the issue. So I want to collect info about the specific SPs only. Let me know what you suggest. Appreciate your time and help. Thanks.
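
    One low-ceremony alternative to a full trace is to bracket each suspect call with timing rows written to a log table; a sketch with hypothetical proc and table names:

      CREATE TABLE dbo.SpTimingLog (
          SpName    sysname  NOT NULL,
          StartTime datetime NOT NULL,
          EndTime   datetime NOT NULL
      );

      -- Inside the job step's loop, around each stored proc call:
      DECLARE @t0 datetime;
      SET @t0 = GETDATE();
      EXEC dbo.usp_LoadUserData @UserId = 42;   -- hypothetical proc
      INSERT dbo.SpTimingLog (SpName, StartTime, EndTime)
      VALUES ('usp_LoadUserData', @t0, GETDATE());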

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don’t store data. I’ve written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it’s the topic for T-SQL Tuesday this month. Hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it’s a great topic. In that last post I discussed the fact that we write queries against tables, but that the engine turns it into a plan against indexes. My point wasn’t simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that’s not the point of this post. So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn’t stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It’s important. Stop thinking about tables, and start thinking about indexes. So let’s think about tables as indexes. This applies even in a world created by someone else, who doesn’t have the best indexes in mind for you. I’m sure you don’t need me to explain the Covering Index bit – the fact that if you don’t have sufficient columns “included” in your index, your query plan will either have to do a Lookup, or else it’ll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven’t seen that before, drop me a line and I’ll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision. So – what I’m going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 database I get the following plan: I’m sure you can see the join. Don’t look in the query, it’s not there. But you should be able to see the join in the plan. It’s an Inner Join, implemented by a Nested Loop. It’s pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn’t call it that if it wasn’t really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other. You wouldn’t have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail as sod on sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join; the first example was just as much a join, even though you didn’t actually ask for one. You need to consider this when you’re thinking about your queries. But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn’t look like there’s a join here either, but look at the plan.
    That’s not some Lookup in action – that’s a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that’s indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley, where the Farleys you have are ordered by FirstName, these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn’t explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable. Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one prefers a Hash Match to a standard lookup even! This isn’t just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn’t try to distinguish between joining two tables and joining two indexes. The Query Optimizer can see (using basic maths) that it’s worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would still have the SalesOrderID column as part of it as the CIX key). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture about how you’d like your queries to run. If you start to think about what data you need, where it’s coming from, and how it’s going to be used, then you will almost certainly write better queries. …and yes, this would include when you’re dealing with regular joins across multiple tables, not just joins within single-table queries.
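
    As a footnote, the "ideal" index the post alludes to for the two-predicate query would look something like this (the name is made up):

      CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPerson_Customer
      ON Sales.SalesOrderHeader (SalesPersonID, CustomerID);
      -- SalesOrderID, the clustered index key, rides along in this index
      -- automatically, so the query above would be fully covered by one seek.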

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?" SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU.
    Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE  display_name LIKE 'IM%'; If we check the session statistics after we execute our query, the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE  display_name IN ('IM scan CUs columns accessed',                        'IM scan segments minmax eligible',                        'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
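
    For context, the population step from the earlier posts in this series boils down to marking the table and letting the background workers load it; a minimal sketch:

      ALTER TABLE lineorder INMEMORY;   -- mark the table for the IM column store

      -- Check population progress via the v$im_segments view:
      SELECT segment_name, populate_status, bytes_not_populated
      FROM   v$im_segments;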

    Read the article

  • Managing Data Growth in SQL Server

    'Help, my database ate my disk drives!' Many DBAs spend most of their time dealing with variations of the problem of database processes consuming too much disk space. This happens because of errors such as incorrect configurations for recovery models, data growth for large objects and queries that overtax TempDB resources. Rodney describes, with some feeling, the errors that can lead to this sort of crisis for the working DBA, and their solution.
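
    Typical first checks for the scenarios described, expressed as T-SQL:

      -- Which databases are in FULL recovery (and so need log backups)?
      SELECT name, recovery_model_desc
      FROM sys.databases;

      -- How full is each transaction log right now?
      DBCC SQLPERF(LOGSPACE);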

    Read the article

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    - by jamiet
    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT and, given the problems that it created yesterday on my current gig, I thought it prudent to write this blog post to inform people of it. I’ve built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]: The two tables have exactly the same definition; both have a single column called [Id] of type integer. CREATE TABLE [dbo].[Table1] (     [Id] INT NOT NULL PRIMARY KEY ) My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results: CREATE PROCEDURE [dbo].[Procedure1] AS     SELECT t1.*     FROM    Table1 t1     INNER JOIN Table2 t2         ON    t1.Id = t2.Id     ORDER BY Id Now if I create those three objects manually and then execute the stored procedure, it works fine: So we know that the code works. Unfortunately, SSDT thinks that there is an error here: The text of that error is: Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id]. It's complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point "OK, just stick a table alias into the ORDER BY predicate and everything will be fine!" Well that's true, but there's a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine – he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario.  Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many) ensuring that none of them upgraded their SSDT installation if they wanted to carry on being productive for the rest of the day. Also bear in mind that as soon as a new drop of SSDT comes out the previous version becomes instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version. Just thought you should know! In the grand scheme of things this isn’t a big deal as the bug can be worked around with a simple code modification (shown below), but forewarned is forearmed, so they say! Last thing to say: if you want to know which version of SSDT you are running, check my blog post Which version of SSDT Database Projects do I have installed? @Jamiet
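
    For completeness, the workaround is exactly the alias trick mentioned above; the only change is in the ORDER BY:

      CREATE PROCEDURE [dbo].[Procedure1]
      AS
          SELECT t1.*
          FROM Table1 t1
          INNER JOIN Table2 t2
              ON t1.Id = t2.Id
          ORDER BY t1.Id;   -- qualified, so SSDT no longer sees an ambiguous reference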

    Read the article

  • Auditing DDL Changes in SQL Server databases

    Even where Source Control isn't being used by developers, it is still possible to automate the process of tracking the changes being made to a database and put those into Source Control, in order to track what changed and when. You can even get an email alert when it happens. With suitable scripting, you can even do it if you don't have direct access to the live database. Grant shows how easy this is with SQL Compare.
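
    The article uses SQL Compare for the heavy lifting, but the underlying idea can be sketched with a plain DDL trigger and an audit table (names hypothetical):

      CREATE TABLE dbo.DDLAudit (
          EventTime datetime DEFAULT GETDATE(),
          LoginName sysname  DEFAULT ORIGINAL_LOGIN(),
          EventData xml
      );
      GO
      CREATE TRIGGER trg_AuditDDL ON DATABASE
      FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
          CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
      AS
          -- EVENTDATA() carries the statement text, object name and more
          INSERT dbo.DDLAudit (EventData) VALUES (EVENTDATA());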

    Read the article

  • [Japanese-language entry (title garbled in extraction): on-demand OTN seminar on Oracle's in-memory database products]

    - by Yusuke.Yamamoto
    [The Japanese text of this summary was garbled during extraction. Surviving fragments: published 2010/11/04; an on-demand OTN seminar covering Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache (Oracle TimesTen IMDB / Oracle IMDB Cache 11g) for accelerating web system performance. Webcast and slides: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/TT11041100.wmv http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/TimesTen_OrD_20101104_print.pdf]

    Read the article

  • .Net Windows Service Throws EventType clr20r3 system.data.sqlclient.sql error

    - by William Edmondson
    I have a .NET/C# 2.0 Windows service. The entry point is wrapped in a try/catch block, yet when I look at the server's application event log I see a number of "EventType clr20r3" errors that are causing the service to die unexpectedly. The catch block has a "catch (Exception ex)". Each SQL command is of type CommandType.StoredProcedure and is executed with a SqlDataReader. These sproc calls function correctly 99% of the time and have all been thoroughly unit tested, profiled, and QA'd. I additionally wrapped these calls in try/catch blocks just to be sure and am still experiencing these unhandled exceptions. This happens only in our production environment and cannot be duplicated in our dev or staging environments (even under heavy load). Why would my error handling not catch this particular error? Is there any way to capture more detail as to the root cause of the problem? Here is an example of the event log: EventType clr20r3, P1 RDC.OrderProcessorService, P2 1.0.0.0, P3 4ae6a0d0, P4 system.data, P5 2.0.0.0, P6 4889deaf, P7 2490, P8 2c, P9 system.data.sqlclient.sql, P10 NIL. Additionally: The Order Processor service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is: database dbname shutdown Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box, afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is invalid shutdown mode for dbname which appears to match with the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB is to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software) but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, deleting the rows from MON$STATEMENTS didn't work even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.

    Read the article

  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    OK... this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block to the function: CREATE FUNCTION u_TryCastInt ( @Value as VARCHAR(MAX) ) RETURNS Int AS BEGIN DECLARE @Output AS Int BEGIN TRY SET @Output = CONVERT(Int, @Value) END TRY BEGIN CATCH SET @Output = 0 END CATCH RETURN @Output END Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using TRY statements within a scalar function, which got me thinking: is error handling in a function possible at all? The goal here is to make a robust version of the Convert or Cast functions to allow a SELECT statement to carry through despite conversion errors. For example, take the following; CREATE TABLE tblTest ( f1 VARCHAR(50) ) GO INSERT INTO tblTest(f1) VALUES('1') INSERT INTO tblTest(f1) VALUES('2') INSERT INTO tblTest(f1) VALUES('3') INSERT INTO tblTest(f1) VALUES('f') INSERT INTO tblTest(f1) VALUES('5') INSERT INTO tblTest(f1) VALUES('1.1') SELECT CONVERT(int,f1) AS f1_num FROM tblTest DROP TABLE tblTest It never reaches the point of dropping the table because the execution gets hung up on trying to convert 'f' to an integer. I want to be able to do something like this; SELECT u_TryCastInt(f1) AS f1_num FROM tblTest f1_num __________ 1 2 3 0 5 0 Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try to expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
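
    Since TRY/CATCH is not allowed inside a scalar UDF, one common workaround is to validate the string before converting rather than catching the failure afterwards. A sketch only; this version accepts unsigned digit strings and would need extending for signs or decimals:

      CREATE FUNCTION dbo.u_TryCastInt (@Value varchar(max))  -- varchar(8000) on SQL 2000
      RETURNS int
      AS
      BEGIN
          DECLARE @Output int;
          SET @Output = 0;
          IF @Value NOT LIKE '%[^0-9]%'          -- digits only (avoids ISNUMERIC quirks)
             AND LEN(@Value) BETWEEN 1 AND 9     -- crude guard against int overflow
              SET @Output = CONVERT(int, @Value);
          RETURN @Output;
      END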

    Read the article

  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing it on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different z-orders. In reality it has nothing to do with z-order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the below, that will give me my solution. I have the following data: ObjectID, PercentOfBar, ZOrder (where smaller is closer) A, 100, 6 B, 50, 5 B, 50, 4 C, 30, 3 C, 70, 6 The result of my query that I want is this, in any order: PercentOfBar, ZOrder 50, 5 20, 4 30, 3 Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a z-order of 6. 66666666666 AAAAAAAAAAA If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering: 4444455555 BBBBBBBBBB As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest z-order is to the right of the lower z-orders. Finally, rectangle C would cover up only portions of rectangle B with its 30% segment that has z-order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above: 3334455555 CCCBBBBBBB Now to make things more complicated, I actually have a 4th column such that this grouping occurs for each key: Input: SomeKey, ObjectID, PercentOfBar, ZOrder (where smaller is closer) X, A, 100, 6 X, B, 50, 5 X, B, 50, 4 X, C, 30, 3 X, C, 70, 6 Y, A, 100, 6 Z, B, 50, 2 Z, B, 50, 6 Z, C, 100, 5 Output: SomeKey, PercentOfBar, ZOrder X, 50, 5 X, 20, 4 X, 30, 3 Y, 100, 6 Z, 50, 2 Z, 50, 5 Notice in the output that the PercentOfBar for each SomeKey adds up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: what would be a query that would produce the results described above?
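
    One way to attack it, as a sketch against a temp table loaded with the 'X' sample rows (and assuming z-orders are unique within each rectangle): give every segment its interval on the bar (within a rectangle, segments run left to right in ascending ZOrder), split the bar at every boundary point, and take the minimum covering ZOrder for each slice:

      CREATE TABLE #bar (SomeKey char(1), ObjectID char(1), PercentOfBar int, ZOrder int);
      INSERT #bar VALUES ('X','A',100,6);
      INSERT #bar VALUES ('X','B', 50,5);
      INSERT #bar VALUES ('X','B', 50,4);
      INSERT #bar VALUES ('X','C', 30,3);
      INSERT #bar VALUES ('X','C', 70,6);

      WITH segs AS (
          -- [StartPos, EndPos) for every segment; running sum done SQL 2005 style
          SELECT b.SomeKey, b.ObjectID, b.ZOrder, b.PercentOfBar,
                 ISNULL((SELECT SUM(p.PercentOfBar) FROM #bar p
                         WHERE p.SomeKey = b.SomeKey AND p.ObjectID = b.ObjectID
                           AND p.ZOrder < b.ZOrder), 0) AS StartPos
          FROM #bar b
      ), pts AS (
          -- every boundary point on the bar, per key
          SELECT SomeKey, StartPos AS Pos FROM segs
          UNION
          SELECT SomeKey, StartPos + PercentOfBar FROM segs
      ), ivls AS (
          -- pair each boundary with the next one to form elementary slices
          SELECT p.SomeKey, p.Pos AS FromPos,
                 (SELECT MIN(p2.Pos) FROM pts p2
                  WHERE p2.SomeKey = p.SomeKey AND p2.Pos > p.Pos) AS ToPos
          FROM pts p
      )
      SELECT i.SomeKey, SUM(i.ToPos - i.FromPos) AS PercentOfBar, v.ZOrder
      FROM ivls i
      CROSS APPLY (SELECT MIN(s.ZOrder) AS ZOrder   -- closest segment covering the slice
                   FROM segs s
                   WHERE s.SomeKey = i.SomeKey
                     AND s.StartPos <= i.FromPos
                     AND s.StartPos + s.PercentOfBar >= i.ToPos) v
      WHERE i.ToPos IS NOT NULL
      GROUP BY i.SomeKey, v.ZOrder;

    For the 'X' rows this returns 30 @ z3, 20 @ z4 and 50 @ z5, matching the expected output.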

    Read the article

  • Database design for summarized data

    - by holden
    I have a new table I'm going to add to a bunch of other summarized data, basically to take some of the load off by calculating weekly averages. My question is whether I would be better off with one model over the other: one model has the day of the week as a column, with an additional column for the price; the other has a series of fields, one per day of the week, each holding a price. I'd like to know which would save me speed and/or headaches, or at least what the trade-off is, i.e.: ID OBJECT_ID MON TUE WED THU FRI SAT SUN SOURCE OR ID OBJECT_ID DAYOFWEEK PRICE SOURCE
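
    The two candidate shapes, as DDL sketches (types and names are assumptions):

      -- (a) wide: one row per object per week; compact, but awkward to aggregate
      CREATE TABLE WeeklyPriceWide (
          ID        int IDENTITY PRIMARY KEY,
          Object_ID int,
          Mon decimal(10,2), Tue decimal(10,2), Wed decimal(10,2),
          Thu decimal(10,2), Fri decimal(10,2), Sat decimal(10,2), Sun decimal(10,2),
          Source    varchar(50)
      );

      -- (b) narrow: one row per object per day; 7x the rows, but trivial to
      -- query, index and average with GROUP BY
      CREATE TABLE WeeklyPriceNarrow (
          ID        int IDENTITY PRIMARY KEY,
          Object_ID int,
          DayOfWeek tinyint,       -- 1-7
          Price     decimal(10,2),
          Source    varchar(50)
      );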

    Read the article
