Search Results

Search found 36816 results on 1473 pages for 'sql pass'.


  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explaining why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool!

    Many of my favourite features in T-SQL do like to use spools though. Consider a simple join that also includes an OVER clause to return a column telling me the number of rows in my data set (a sketch of a query of this shape follows below). I’ll describe what’s going on in a few paragraphs’ time.

    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature.
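    A query of the shape just described might look something like this – only a sketch against the AdventureWorks sample tables, as the post’s actual query may differ:

        SELECT p.Name,
               th.TransactionDate,
               COUNT(*) OVER () AS NumRows  -- the OVER clause that invites a spool
        FROM Production.Product AS p
        JOIN Production.TransactionHistory AS th
            ON th.ProductID = p.ProductID;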
    Eager and Lazy are logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows.

    And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source.

    If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year).

    As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. Consider a query that forces the Query Optimizer to use an index which would be upset if the Name column values got changed: before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done.

    There are plenty of times, though, when you need data repeatedly. Consider the counting query I mentioned above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That spool can produce the same set of rows repeatedly, and this is the behaviour we see in the bottom half of the plan, where a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

    Table v Index

    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than about the underlying structure.

    Recursive CTE

    I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider a query that creates a set of data in a CTE (completely deterministic, by the way), and then joins it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (A sketch of a recursive query of this shape follows at the end of this post. It runs on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012.) The Index Spool operator sucks rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)

    Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query.

    I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators?
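    As promised, here’s a minimal sketch of a recursive CTE of the shape described above – assuming the AdventureWorks sample database mentioned in the post (with its ManagerID column in HumanResources.Employee). The plan for a query like this should show the lazy Index Spool / Table Spool pair discussed:

        WITH Emps AS
        (
            -- anchor: the single row the spool starts with
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL
            UNION ALL
            -- recursive part: rows pulled back off the spool feed the next round
            SELECT e.EmployeeID, e.ManagerID, m.Lvl + 1
            FROM HumanResources.Employee e
            JOIN Emps m ON e.ManagerID = m.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Lvl
        FROM Emps;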
At some point I’ll write a summary post – once I have you should find a comment below pointing at it. @rob_farley

    Read the article

  • Connect to LocalDB using SQL Server Management Studio

    - by Magnus Karlsson
    I was trying to find my database for LocalDB under localhost etc., but with no luck. The following led me to just connect to it – kind of obvious really when you look at your connection string, but... it’s Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview

    After the lengthy introduction it’s time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:

    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it, and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
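    So in the SSMS Connect dialog, the server name to enter is simply the automatic LocalDB instance name from the connection string above:

        (localdb)\v11.0

    And from an application, a connection string along these lines (the file path here is illustrative) attaches a database file directly:

        Data Source=(localdb)\v11.0;Integrated Security=true;AttachDbFileName=C:\Users\Me\Data\MyDb.mdf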

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are:

    - A student looking to work in the database or business intelligence fields
    - A database professional who is between jobs or wants a better one
    - A developer looking to step up to something new
    - On a limited budget and can’t afford professional SQL Server training
    - Able to attend training from 9 to 5 on May 17, 2013

    AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per pre-con! One recipient each will attend:

    - Denny Cherry: SQL Server Security – http://sqlsecurity.eventbrite.com/
    - Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance – http://surfmulticore.eventbrite.com/
    - Stacia Misner: Languages of BI – http://languagesofbi.eventbrite.com/
    - Bill Pearson: Practical Self-Service BI with PowerPivot for Excel – http://selfservicebi.eventbrite.com/
    - Eddie Wuerch: The DBA Skills Upgrade Toolkit – http://dbatoolkit.eventbrite.com/

    If you are interested in attending these pre-cons, send an email by April 30, 2013 to [email protected] and tell us:

    - Why you are a good candidate to receive this scholarship
    - Which sessions you’d like to attend, and why (list multiple sessions in order of preference)
    - What the session will teach you and how it will help you achieve your goals

    The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK!

    P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx.

    * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • Team Foundation Server – How to pass ReferencePath argument to MSBuild

    - by Gopinath
    When we manually build a .NET project using Visual Studio, the reference paths set in Project Properties are picked up by Visual Studio for referring to dependent DLLs. But when the project is built using TFS, the reference paths specified in project properties are not considered. This is because Reference Paths are user-specific settings and are not stored in .proj files (they are stored in user settings files). The TFS build may break if it does not find the required DLLs in the GAC. We can solve the problem by passing the ReferencePath parameter to MSBuild in the TFS build configuration:

    - Go to Team Explorer
    - Select Build Definition >> Edit Build Definition
    - Switch to the Process tab
    - Navigate to the Advanced section and locate MSBuild Arguments
    - Add the following: /p:ReferencePath=”{File path}”
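    For example (the paths here are illustrative – point this at wherever the dependent DLLs live on the build machine; multiple directories can be separated with semicolons):

        /p:ReferencePath="C:\Builds\SharedAssemblies;C:\Builds\ThirdParty"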

    Read the article

  • SQL Server: How do I generate the table schema and populate it with inserts in a script?

    - by Paula DiTallo
    Originally posted on: http://geekswithblogs.net/AskPaula/archive/2014/05/20/156469.aspx

    In SSMS, there's a Generate Scripts utility (read: only available under version 2008 and up). Here are the steps you would need to take to make use of the utility:

    1. Right click on the database you're interested in and go to Tasks -> Generate Scripts.
    2. Select the tables and/or any other objects you'd like in order to get them into the script.
    3. Navigate to Set scripting options and click on Advanced.
    4. Under the General category, navigate to Type of data to script.
    5. Select the Schema and Data option to get the insert statements generated, and click OK.

    Read the article

  • Visual Basic link to SQL output to Word

    - by CLO_471
    I am in need of some advice/references. I am currently trying to develop a legal document interface. There are certain fields which I need to query out of my SQL db and have those fields output into a document that can be printed. I am trying to develop a user interface where people can enter fields that will output to a document template, but at the same time I need the template to be able to pull data from the SQL database. This is the reason why I think that VB might be my best choice, and because it is one of the only OOP languages I am familiar with presently. Does anyone know the best way to handle this type of job? I know that you can use VBA within MS Word and have the form output variables to a Word template. But is there a way to have the Word document also pull information from the SQL db? Is the best option to use VB linked to SQL and run queries to get the information from the database and then have it output to a form within VB? Is it possible for VB to be linked to a SQL db and output variables and SQL fields to a Word template? I have looked into Mail Merge and I see that it allows users to pull data from an Access query, but I don't think it would be easy to automate and it seems that users would need to have an advanced knowledge of MS Word and Access to handle this. I am not finding much useful information online so I came here. Any advice or references would be greatly appreciated. If there is a better way please let me know.

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins being written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog – which meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow… Luckily, I was able to use xp_readerrorlog to work around this – here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries are in the errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure – so there is no official Microsoft link describing the parameters it takes – however, there’s a good blog on this here. And, if you do have a problem with huge errorlogs – please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain – the default of 6 might not be sufficient for you.
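    Cycling the log is a one-liner – for instance, run nightly from a scheduled job:

        EXEC sp_cycle_errorlog;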

    Read the article

  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for me is... Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2:

    - Map Wizard for spatial data (The World is But a Stage)
    - Pagination features (I've Got Your Page Number)
    - Lookup functions (Look Up, Look Down, Look All Around – Part I, Part II, Part III)
    - Test Connection button (Testing, Testing 1-2-3)
    - Conditional formatting based on format, i.e. RenderFormat (As You Like It)

    And I wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform as well as the relational engine.

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone) Supriya Ananth, who is one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature.

    The ability to recognize patterns in a sequence of rows has been a capability that was widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following:

    - Logically partition and order the data using the PARTITION BY and ORDER BY clauses
    - Use regular-expression syntax to define patterns of rows to seek using the PATTERN clause; these patterns are a powerful and expressive feature, applied to the pattern variables you define
    - Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause
    - Define measures, which are expressions usable in the MEASURES clause of the SQL query

    For more information and to register for this exciting webcast please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note – if the above link does not work then go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
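    As a taste of the syntax, here is a minimal sketch – assuming a hypothetical ticker table with symbol, tstamp and price columns – that finds V-shaped patterns (a run of falling prices followed by a run of rising prices):

        SELECT *
        FROM ticker
        MATCH_RECOGNIZE (
            PARTITION BY symbol            -- one search per symbol
            ORDER BY tstamp                -- the row sequence to scan
            MEASURES strt.tstamp       AS start_tstamp,
                     LAST(down.tstamp) AS bottom_tstamp,
                     LAST(up.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST up
            PATTERN (strt down+ up+)       -- regular-expression style pattern of rows
            DEFINE
                down AS down.price < PREV(down.price),
                up   AS up.price   > PREV(up.price)
        );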

    Read the article

  • At what size of data does it become beneficial to move from SQL to NoSQL?

    - by wobbily_col
    As a relational database programmer (most of the time), I read articles about how relational databases don't scale, while NoSQL solutions such as MongoDB do. As most of the databases I have developed so far have been small to mid scale, I have never had a problem that hasn't been solved by some indexing, query optimization or schema redesign. What sort of size would I expect to see MySQL struggling with? How many rows? (I know this is going to depend on the application and the type of data stored. The one that got me thinking was basically a genetics database, which would have one main table with 3 or 4 lookup tables. The main table will contain, amongst other things, a chromosome reference and a position coordinate. It will likely get queried for a number of entries between two positions on a chromosome, to see what is stored there.)
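    For scale, the range query being described is exactly the kind of thing a composite index handles well long before NoSQL becomes necessary – a sketch only, with hypothetical table and column names:

        CREATE INDEX ix_chrom_pos ON variants (chromosome, position);

        SELECT *
        FROM variants
        WHERE chromosome = '7'
          AND position BETWEEN 10000000 AND 20000000;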

    Read the article

  • Revenue Recognition: Performance Obligations Pass a Hurdle

    - by Theresa Hickman
    I met up with Seamus Moran, our resident accounting expert, to get his thoughts about the latest happenings with IFRS. Last week, on March 13, the comment period on the FASB and IASB exposure draft “Revenue From Contracts with Customers” closed. FASB and IASB have just over 20 comment letters – a very small number. The implication is that the exposure draft does reflect general acceptance, and therefore will be published as both a US and an internationally Generally Accepted Accounting Standard. At a recent conference call, FASB and IASB expected to complete their report to both Boards on the comments by early summer, complete their deliberation of the comments by the fall, and draft the final standard text by late this year. It is assumed the concept of Performance Obligations would become US GAAP and IFRS in place of the existing standards. They confirmed that all existing US GAAP and IFRS guidelines would be withdrawn, and that they were in dialogue with the SEC on withdrawing the SEC guidelines on the revenue issue as well.

    The open question is when will Performance Obligations become effective? The Boards have said that they would like this Revenue Recognition standard and the Lease Accounting standard to be effective at the same time, because what isn’t either insurance, interest, or a lease is a revenue arrangement. However, ascertaining what is generally acceptable in respect of Leases is proving a little elusive, and the Boards have recently diverged a little on the P&L side of the accounting (although both are in agreement that there will be no off-balance-sheet leases). It is therefore likely that the Lease standard might be delayed. One wonders if the Boards will define effectivity of the Revenue standard independently of the Lease standard, or if they will stick with their resolve to make them co-effective. The Boards have also said that neither standard will be effective before June 2015.

    Here is the gist of the new Revenue Recognition principle: recognize revenue to depict the transfer of goods or services in an amount that reflects the consideration expected to be entitled in exchange for those goods and services. Steps to apply the core principle:

    - Identify the contract with the customer
    - Identify the separate performance obligations
    - Determine the transaction price
    - Allocate the transaction price
    - Recognize revenue when a performance obligation is satisfied

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Could someone help me understand SQL TDE Database encryption?

    - by SLC
    I don't quite follow how it works. According to the MSDN Article there is a big hierarchy of keys protecting other keys and passwords. At some point the database is encrypted. You query the database which is encrypted, and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect and do select * from x and the data is revealed. Sorry my question is a bit scattered, I am just very confused by the article.
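    For context, the "big hierarchy of keys" boils down to a few statements when TDE is enabled – a sketch only, with placeholder database name, certificate name and password:

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';  -- protected by the service master key
        CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';     -- protected by the database master key

        USE MyDatabase;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TdeCert;                    -- protected by the certificate

        ALTER DATABASE MyDatabase SET ENCRYPTION ON;

    Because the server holds that whole key chain, pages are decrypted transparently for any authenticated login – TDE protects the data, log and backup files at rest, not the data from someone who can already connect and SELECT.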

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends and see world records. I was going to generate a table for each track submitted to store the best times of each player who plays the track. However, I can't predict how many will be uploaded and I imagine too many tables might cause problems, or is this a valid method? I considered saving each player's best times in a string in a single table field like so: level1:00.45;level2:00.43;level3:00.12 If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues and server disk space can be increased if needed. But I imagine this would be very slow. To update one players best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
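    For what it's worth, the usual relational alternative to a table per track, or times packed into one string, is a single narrow table with one row per player per level – a sketch with illustrative names:

        CREATE TABLE BestTimes (
            PlayerId INT          NOT NULL,
            LevelId  INT          NOT NULL,
            BestTime DECIMAL(7,3) NOT NULL,  -- seconds
            PRIMARY KEY (PlayerId, LevelId)
        );

        -- world record per level:
        SELECT LevelId, MIN(BestTime) AS WorldRecord
        FROM BestTimes
        GROUP BY LevelId;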

    Read the article

  • Material, Pass, Technique and shaders

    - by Papi75
    I'm trying to make a clean and advanced Material class for the rendering of my game. Here is my architecture:

        class Material
        {
            void sendToShader()
            {
                program->sendUniform( nameInShader, valueInMaterialOrOther );
            }

        private:
            Blend blendmode;          ///< Alpha, Add, Multiply, …
            Color ambient;
            Color diffuse;
            Color specular;
            DrawingMode drawingMode;  // Line, Triangles, …
            Program* program;
            std::map<std::string, TexturePacket> textures; // List of textures, with TexturePacket = { Texture*, vec2 offset, vec2 scale }
        };

    How can I handle the link between the Shader and the Material? (the sendToShader method) If the user wants to send additional information to the shader (like time elapsed), how can I allow that? (The user can't edit the Material class.) Thanks!

    Read the article

  • SOA, Cloud & Service Technology Symposium VIP pass 50% discount

    - by JuergenKress
    A series of podcasts, brought to you by Arcitura Education, SOASchool.com and CloudSchool.com in co-operation with the International Service Technology Symposium Conference Series, and the Prentice Hall Service Technology Series from Thomas Erl. As Part II of this special podcast series, individuals will be able to tune into six distinct audio podcasts with expert speakers for the upcoming 5th International SOA, Cloud + Service Technology Symposium in London, UK on September 24-25, 2012.

    SOA, Cloud and Service Technology Symposium 2012: for conference details please visit the registration page. For the Oracle promotion discount, please enter the code DJMXZ370 during registration.

    Oracle Specialized SOA & BPM Partners at the conference: Oracle Specialized partners have proven their skills by certifications and customer references. To find a local Specialized partner please visit http://solutions.oracle.com.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • The overlooked OUTPUT clause

    - by steveh99999
    I often find myself applying ad-hoc data updates to production systems – usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause – I find this is rarely used, and I often wonder if this is due to a lack of awareness of this feature. The OUTPUT clause was added to SQL Server in the SQL 2005 release – so it has been around for quite a while now, yet I often see scripts like this…

        SELECT somevalue FROM sometable WHERE keyval = XXX

        UPDATE sometable SET somevalue = newvalue WHERE keyval = XXX

        -- now check the update has worked…
        SELECT somevalue FROM sometable WHERE keyval = XXX

    This can be rewritten to achieve the same end-result using the OUTPUT clause:

        UPDATE sometable
        SET somevalue = newvalue
        OUTPUT deleted.somevalue AS 'old value',
               inserted.somevalue AS 'new value'
        WHERE keyval = XXX

    The UPDATE statement with the OUTPUT clause also requires less IO – ie, I've replaced three SQL statements with one, using only a third of the IO. If you are not aware of the power of the OUTPUT clause, I recommend you look at the OUTPUT clause in Books Online. And finally, here's an example of the output produced, using the Northwind database…

    Read the article

  • All Access Pass to Oracle Support

    - by Leslie-Oracle
    Looking for tips, recommendations and resources to help you keep your Oracle applications and systems running at peak performance? Want to find out how to get more out of your Oracle Premier Support coverage? More than 500 experts from across Services and Support will be on hand at Oracle OpenWorld to answer your questions and share best practices for adopting and optimizing Oracle technology.

    Find out what Oracle experts know about the best tools, tips and resources for supporting and upgrading Oracle technology:

    - Attend one of our “Best Practices” sessions.
    - Stop by the Oracle Support Stars Bar to talk with support experts. Open daily @ Moscone West, Exhibition Hall 3161.
    - See Oracle support tools in action at one of our demos.

    View the schedule of all of our Oracle Premier Support activities at Oracle OpenWorld for more information. See you there!

    Read the article

  • Is it Considered Good SQL Practice to use GUIDs to link multiple tables to the same Id field?

    - by Mallow
    I want to link several tables to a many-to-many (m2m) table. One table would be called location, and this table would always be on one side of the m2m table. But I will have a list of several tables, for example:

    - Cards
    - Photographs
    - Illustrations
    - Vectors

    Would using GUIDs between these tables to link to a single column in another table be considered 'Good Practice'? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to any issues?

    UPDATE: I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by having the table's initials as part of the id, so that the Cards table's id would be c0001, and Illustrations would be I001. Regardless of this change, the question still stands.

    Read the article

  • Couldn't pass the sign-in screen on Ubuntu

    - by Amokrane
    I have an issue here with my computer running Ubuntu 10.10 on a 64-bit machine. When I start it, I get the login screen; I enter my credentials, but instead of starting the session it reloads the login screen again. I checked the disk using fsck and it seems clean. How should I proceed to diagnose and repair this issue? Thanks!

    [Edit] I went to the log files; this is what I got:

    auth.log:

        pam_unix (gdm:session): session opened for user amokrane by (uid=0)
        pam_ck_connector (gdm:session): nox11 mode, ignoring PAM_TTY :0
        pam_unix (gdm:session): session closed for user amokrane

    messages.log:

        No ACPI video bus found

    I also took a shot with my camera of the black screen that appears between the two login screens. It says something like:

        fsck from util-linux-ng 2.17.2
        /dev/sdc4 : propre, xxxx files, xxxx blocs
        Starting AppArmor profiles
        Skipping profiles in /etc/apparmor.d/disable: usr.bin.firefox
        Setting sensors limits
        Starting PostgreSQL ...

    /var/log/Xorg.0.log:

        [ 25.375] (II) intel(0): Modeline "1920x1080"x60.0 172.80 ...
        [ 28.850] (II) Power Button: Close
        [ 28.850] (II) UnloadModule: "evdev"
        [ 29.910] (II) Power Button: Close
        [ 28.910] (II) UnloadModule: "evdev"
        [ 28.941] (II) AT Translated Set 2 keyboard: Close
        [ 29.000] (II) ImPS/2 Generic Wheel Mouse: Close
        [ 29.000] (II) UnloadModule: "evdev"
        [ 29.039] ddxSigGiveUp: Closing log

    Update: I tried the following on the login screen:

        Ctrl-Alt-F1 (to run the console)
        sudo pkill startx
        sudo rm /tmp/.X0-lock
        startx

    But it tells me that the X server is already running.

    Read the article

  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
    Steps involved:

    1) Create a database backup script.
    2) Create a scheduled task to run the backup script.

    1. Create a Database Backup Script

    a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website.
    b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database that you want to back up.
    c) Right click on the database and then in the menu choose Tasks -> Back Up...
    d) This will open up a window where you can choose your backup options; once you are happy with the options, click on the "Script" button near the top and select the "Script Action to File" option.
    e) Save the file.

    2. Create a Scheduled Task to Run the Backup Script

    a) Open up Windows Task Scheduler.
    b) Create a new task using the wizard; when asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\Binn\SQLCMD.exe
    c) There are 2 arguments that need to be set: -S SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT", where SERVER_INSTANCE_NAME is the name of the instance of SQL Server that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql".
    d) Adjust the task to run at the desired times and you are done.
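    The script that SSMS generates in step 1 boils down to a BACKUP DATABASE statement of roughly this shape (the database name, path and options here are illustrative – yours will reflect the choices made in the dialog):

        BACKUP DATABASE [MyDatabase]
        TO DISK = N'C:\Backups\MyDatabase.bak'
        WITH NOFORMAT, NOINIT,
             NAME = N'MyDatabase-Full Database Backup',
             SKIP, STATS = 10;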

    Read the article

  • What are some good examples of using pass by name?

    - by Paul
    When I write programs, using pass by value or pass by reference always seems like the logical method. When learning about different programming languages, I came across pass by name. Pass by name is a parameter-passing method that waits to evaluate the parameter value until it is used. See the Stack Overflow pass-by-name question for more information on the method. What I would like to know is: what are some good examples of (and/or reasons to use) pass by name, and should it be re-introduced into some more modern languages?

    Read the article

  • Web Writing Services - When It's Time to Pass the Ball

    When it comes to making our online marketing campaigns a success, most of us would be better off hiring a variety of web writing services to help. After all, while we've all seen (and envied) the one-man-act extraordinaire, there aren't many of us who haven't fallen victim to the frazzled, pressure-cooker feeling of having sole responsibility for our company's success. Besides, who in their right mind would put that kind of pressure on themselves when outsourcing to a web writing service can be just as profitable?

    Read the article

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

    This paper provides guidance for customers who, prior to SQL Server 2012, deployed SQL Server failover clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also contains the important knowledge and considerations you need in order to successfully migrate to an HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

    Read the article

  • Cannot pass the login screen after upgrade to 12.04 from 11.10

    - by joksan
    I just performed the upgrade from Ubuntu 11.10 to Ubuntu 12.04 LTS. The updater downloaded all packages except a few that the system said were already installed. When asked, I chose to install the new maintainer's version of GRUB. Now the system starts up to the login page, but some images won't load. Moreover, after I try to log in, the graphical session does not load; it just sits there showing the mouse pointer and nothing else.

    Read the article
