Search Results

Search found 7086 results on 284 pages for 'explain'.


  • Team Review @ TSUG

    - by dmckinstry
    In case you haven’t heard, JB Brown is going to be presenting online at the Team System User Group this Thursday. This month’s presentation will explain how Team Review (freely available) can be used with Team Foundation Server 2005, 2008, and even 2010! Meeting Date: Thursday, March 18th, 2010. Time: 5:00 PM Pacific.

    Read the article

  • Siebel 8.1.1 for Communication

    The latest release of Siebel CRM, 8.1.1, includes many new features and enhancements for the communications industry. In this webcast, you’ll hear from Brenda Harris, Principal Product Strategy Manager for Communications here at Oracle. She’ll explain how Siebel Communications 8.1.1 can help your communications company deliver the differentiated, personalized customer experience that is proven to increase customer loyalty and profitability.

    Read the article

  • Where can I find fellow developers who are looking for help to create a startup?

    - by Jeremy Child
    Where can I find fellow developers who are looking for help to create a startup? Are there any collaboration websites where people pitch ideas and a group of people 'join' the project in an effort to create some kind of prototype? Edit: @Vitor Braga - Seriously, I'm not looking to make money, just to find developers who are interested in making a trivial little app with someone else. Edit: It may be worth explaining that I live in a remote area of Australia.

    Read the article

  • I need a step-by-step Sample Programming Tutorial (book or website)

    - by Albert Y.
    Can anyone recommend a step-by-step programming tutorial (either book or website) that walks you through designing a complex program and explains what is being coded and why? Language doesn't matter, but preferably something like Java, Python, C++, or C, and not web-based. I am a new programmer looking for good examples that will teach me how to program something more complex than the simple programs given in programming textbooks.

    Read the article

  • Explanation needed for the “Ask, don't tell” approach?

    - by the_naive
    I'm taking a course on design patterns in software engineering, and here I'm trying to understand the good and the bad ways of design relating to "coupling" and "cohesion". I could not understand the concept described in the following image. The code example shown in the image is ambiguous to me, so I can't quite get what exactly the "Ask, don't tell!" approach means. Could you please explain?
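
    For what it's worth, the principle is more commonly phrased "Tell, Don't Ask": rather than asking an object for its data and making the decision outside it, you tell the object what to do and let it apply its own rules. A minimal C# sketch (the Account class is a made-up example, not from the course material):

        using System;

        class Account
        {
            public decimal Balance { get; private set; }

            public Account(decimal opening) { Balance = opening; }

            // "Tell" style: the business rule lives with the data it guards.
            public void Withdraw(decimal amount)
            {
                if (amount > Balance)
                    throw new InvalidOperationException("Insufficient funds");
                Balance -= amount;
            }
        }

        class Program
        {
            static void Main()
            {
                var account = new Account(100m);

                // "Ask" style (discouraged): read the balance out and decide here,
                // e.g. if (account.Balance >= 30m) { ... } - the rule leaks out.

                // "Tell" style: just tell the object what to do.
                account.Withdraw(30m);
                Console.WriteLine(account.Balance); // prints 70
            }
        }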

    Read the article

  • Writing A Transact SQL (TSQL) Procedure For SQL Server 2008 To Delete Rows From Table Safely

    In this post, we will show and explain a small T-SQL procedure for SQL Server 2008 that deletes all rows in a table that are older than some specified date. That is, say the table has 10,000,000 rows in it, accumulated over the past 2 years, and you want to delete all but [...]
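
    The excerpt cuts off before the code, but as a minimal sketch of the batching technique such a procedure typically uses (the table and column names here are assumptions), deleting in small chunks keeps each transaction, and therefore the log and lock footprint, small:

        DECLARE @cutoff DATETIME = DATEADD(YEAR, -2, GETDATE());

        WHILE 1 = 1
        BEGIN
            DELETE TOP (5000)            -- one short transaction per batch
            FROM dbo.EventLog
            WHERE CreatedDate < @cutoff;

            IF @@ROWCOUNT = 0 BREAK;     -- nothing older than the cutoff remains
        END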

    Read the article

  • Website Editor control for WYSIWYG/regions

    - by Dan Smith
    For lack of a better title, let me try to explain further: I'm looking for a control that will allow me to have a library of "page elements" (such as a list of employees, a photo gallery, or a contact form) that can be dragged onto the page canvas. The page canvas could have pre-set regions/boxes into which these items could be dragged, preventing the user from breaking the page's layout. I'm looking for any pre-built commercial (or open-source with commercial use allowed) tools like this.

    Read the article

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive guide to getting started is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.

    Read the article

  • The Best Internet Marketing Tool is a Keyword Tool

    The best internet marketing tool is a keyword tool. A keyword tool is really helpful if you are an internet marketer, and if you have dabbled with one before, you'll know exactly how powerful they are and what they can be used for. This article will explain why the keyword tool is the best internet marketing tool you can use.

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS Summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team." I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments".

    I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly I'd say they are essential. Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example. This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project.

    When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different, especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, by failures in clarity when mapping a complex domain into code, or just by not entirely understanding the problem (// this is the clever part). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal. And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment.

    Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for like code it is bound to increase its clarity and usefulness. Cheers, Tony.
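
    For illustration only, a header in the spirit Tony describes (the exact Phil Factor or Jeff Moden layout varies) might look something like this:

        /*
          Summary: Returns all orders placed by a customer within a date range.
          Context: Called by the nightly reporting job and the customer-history page.
          Usage:   EXEC dbo.GetCustomerOrders @CustomerID = 42,
                        @FromDate = '2010-01-01', @ToDate = '2010-12-31';
        */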

    Read the article

  • Google+ Platform Office Hours for March 7th 2012: REST API Overview

    We hold weekly Google+ Platform Office Hours using Hangouts On Air most Wednesdays from 11:30 am until 12:15 pm PST. This week we took a step back and looked at the Google+ platform's REST APIs. We explain what they're capable of and show you how to get started using them. Learn more about our Office Hours: developers.google.com

    Read the article

  • KBACE Technologies Talks About Oracle Tutor Customer Benefits

    Maryellen Papelian, Education Practice Lead at KBACE Technologies, and Russell Handley, Director of Tutor Marketing at Oracle, explain what Oracle Tutor is, how customers use it to document business processes, and how it provides both content and software tools to easily create and maintain business process documents.

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, as indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. This, by the way, is why you see 2 times the number of processes of the DOP; the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true.

    In a PWJ plan the consumers are not present. Instead of producing rows and handing them to different processes, a PWJ uses only a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan typically will be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here:
      • ODTUG in Washington DC, June 27 - July 1
      • On the Optimizer blog
      • At OpenWorld in San Francisco, September 19 - 23
    Happy joining and hope to see you all at ODTUG and OOW...
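
    As a rough sketch of how to check which shape the optimizer picks on your own system (the table and column names here are hypothetical, and a full partition-wise join only appears when both tables are equipartitioned on the join key):

        EXPLAIN PLAN FOR
        SELECT /*+ PARALLEL(s, 4) PARALLEL(c, 4) */ *
        FROM   sales s
        JOIN   customers c ON s.cust_id = c.cust_id;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    In a full PWJ plan you should see the hash join sitting under a single PX partition iterator, while the non-PWJ plan shows the PX SEND / PX RECEIVE producer-consumer pairs described above.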

    Read the article

  • How do I limit the game loop?

    - by user1758938
    How do I make a game update at a consistent speed? For example, this would loop too fast: while(true) { GetInput(); Render(); } And this just won't work (it's hard to explain why): while(true) { GetInput(); Render(); Sleep(16); } How do I sync my game to a certain framerate and still have input and game functions running at a consistent rate?
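
    A common answer (a sketch in the question's own C-style, not taken from the thread) is a fixed-timestep loop: accumulate the real time that has passed, run the update logic in fixed increments, and render as often as you like. GetInput, Update and Render stand in for the question's hypothetical functions:

        #include <chrono>

        void GetInput();
        void Update();
        void Render();

        void GameLoop()
        {
            using clock = std::chrono::steady_clock;
            const std::chrono::duration<double> step(1.0 / 60.0); // 60 updates per second

            auto previous = clock::now();
            std::chrono::duration<double> lag(0);

            while (true)
            {
                auto now = clock::now();
                lag += now - previous;   // real time elapsed since last iteration
                previous = now;

                GetInput();
                while (lag >= step)      // catch up in fixed increments
                {
                    Update();
                    lag -= step;
                }
                Render();                // render as fast as the machine allows
            }
        }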

    Read the article

  • Delegates in c#

    - by Jalpesh P. Vadgama
    I have used delegates in my programming since C# 2.0, but I have seen a lot of confusion around them, so I have decided to blog about it. In this post I will explain delegate basics and the use of delegates in C#. What is a delegate? We can say a delegate is a type-safe function pointer: an object that holds a reference to a method. As per MSDN, it is a type that references a method. Any method with a matching parameter list and return type can be assigned to a delegate, and a delegate can even reference more than one method at a time (multicast). The following is the syntax for a delegate: public delegate int Calculate(int a, int b); Here you can see we have defined a delegate with two int parameters and an int return type. Now any method that matches this signature can be assigned to the above delegate. To understand the functionality of delegates, let's take the following simple example. using System; namespace Delegates { class Program { public delegate int CalculateNumber(int a, int b); static void Main(string[] args) { CalculateNumber addNumber = new CalculateNumber(AddNumber); Console.WriteLine(addNumber(5, 6)); Console.ReadLine(); } public static int AddNumber(int a, int b) { return a + b; } } } In the above code you can see that I have created an object of the CalculateNumber delegate and assigned the static AddNumber method to it; AddNumber simply returns the sum of two numbers. After that I am calling the method through the delegate and printing the output to the console. Now let's run the application; the output is as expected. That's it. Delegates can be used in a variety of scenarios. In a web application, for example, we can use them to update one control's properties from another control's action, or invoke a delegate when some UI interaction happens, such as a button click. Hope you liked it. Stay tuned for more. In the next post I am going to explain multicast delegates. Till then, happy programming.
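
    As a quick preview of the multicast behavior mentioned above (a sketch; the full treatment is promised for the next post), methods with a matching signature can be chained onto one delegate instance:

        CalculateNumber calc = AddNumber;
        calc += AddNumber;        // the delegate now references two methods
        int result = calc(2, 3);  // both run; result holds the last return value (5)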

    Read the article

  • SQL SERVER – Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and the last value from a list. It is difficult to explain this in words, so I'd like to explain their behavior through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimentation. Now let's run the following query: USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result: The most interesting thing here is that as we go from row 1 to row 10, the value of FIRST_VALUE() remains the same but the value of LAST_VALUE() keeps increasing. The reason behind this is that at each row - considering that row and all the rows before it - the last value is that of the row we are currently looking at. To fully understand this statement, see the following figure: This may be useful in some cases, but not always. However, when we use the same thing with PARTITION BY, the same query starts producing results which can easily be used in analytical algorithms and needs. Let us run the following query: USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) FstValue, LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LstValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query will give us the following result: Let us understand how PARTITION BY windows the resultset. I have used PARTITION BY SalesOrderID in my query. This creates small windows within the original resultset, and the FIRST_VALUE and LAST_VALUE logic is applied within each window. Well, this is just an introduction to these functions. In future blog posts we will go deeper into the usage of these two functions. By the way, these functions can be applied over VARCHAR fields as well and are not limited to numeric fields only. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
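
    The behavior described above falls out of the default window frame, which for an ORDER BY without an explicit frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. As a small sketch against the same AdventureWorks rows (not from the original post), spelling the frame out makes LAST_VALUE() return the true last value of each partition on every row:

        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            LAST_VALUE(SalesOrderDetailID) OVER (
                PARTITION BY SalesOrderID
                ORDER BY SalesOrderDetailID
                -- explicit frame: the whole partition, not just the rows so far
                ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID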

    Read the article

  • Looking into ASP.Net MVC 4.0 Mobile Development - part 2

    - by nikolaosk
    In this post I will be continuing my discussion of ASP.Net MVC 4.0 mobile development. You can have a look at my first post on the subject here. Make sure you read and understand it well before you move on to the rest of this post. I will not be writing any code in this post; I will try to explain a few concepts related to the MVC 4.0 mobile functionality. In this post I will be looking into the Browser Overriding feature in ASP.Net MVC 4.0. By that I mean that we override the user agent for a given user session. This is a very useful feature for people who visit a site through a device and experience the mobile version of the site, when what they really want is the option to switch to the desktop view. "Why might they want to do that?", you might wonder. Well, first of all the users of our ASP.Net MVC 4.0 application will appreciate having the option to switch views, while others will simply enjoy the contents of our website more in the "desktop view", since the mobile device they view our site on has a quite large display. Obviously there is only one site; these are just different views being rendered. To put it simply, browser overriding lets our application treat requests as if they were coming from a different browser rather than the one they are actually from. In order to do that programmatically we must have a look at the System.Web.WebPages namespace and the classes in it, most specifically the class BrowserHelpers. Have a look at the picture below. In this class we see some extension methods for the HttpContext class. These are helper methods that we use to switch from one browser to another, thus overriding the current/actual browser. These APIs affect layouts, views and partial views, and will not affect any other ASP.Net Request.Browser related functionality. The overridden browser is stored in a cookie. Let me explain what some of these methods do:
    SetOverriddenBrowser() - lets us set the user agent string to a specific value
    GetOverriddenBrowser() - lets us get the overridden value
    ClearOverriddenBrowser() - removes any overridden user agent for the current request
    To recap, in our ASP.Net MVC 4.0 applications, when our application is viewed on a mobile device, we can have a link like "Desktop View" for all those who desperately want to see the site in the full desktop-browser version. We can then specify a browser type override. My controller class (snippet of code) that is responsible for handling the switching could be something like this:
        public class SwitchViewController : Controller
        {
            public RedirectResult SwitchView(bool mobile, string returnUrl)
            {
                if (Request.Browser.IsMobileDevice == mobile)
                    HttpContext.ClearOverriddenBrowser();
                else
                    HttpContext.SetOverriddenBrowser(mobile ? BrowserOverride.Mobile : BrowserOverride.Desktop);
                return Redirect(returnUrl);
            }
        }
    Hope it helps!!!!
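
    For completeness, a minimal sketch (not from the post) of how the "Desktop View" link could invoke that controller from a mobile layout; the view file name and route values here are assumptions:

        @* e.g. in Views/Shared/_Layout.Mobile.cshtml *@
        @Html.ActionLink("Desktop view", "SwitchView", "SwitchView",
            new { mobile = false, returnUrl = Request.Url.PathAndQuery }, null)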

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging; I respect him and learn a lot from him. With everybody writing something new on this subject, I decided to start a SQL Server 2012 analytic functions series. SQL Server 2012 introduces the new analytic function CUME_DIST(). This function provides the cumulative distribution value. It is difficult to explain this in words, so I will attempt a small example to explain the function. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimentation. Let us run the following query. USE AdventureWorks GO SELECT SalesOrderID, OrderQty, CUME_DIST() OVER(ORDER BY SalesOrderID) AS CDist FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY CDist DESC GO The above query will give us the following result. Now let us understand the formula behind CUME_DIST and why the values for SalesOrderID = 43670 are 1. Let us take one more example and be clear about why the values for SalesOrderID = 43667 are 0.5. Now let us enhance the same example, use PARTITION BY in the OVER clause, and see the results. Run the following query in SQL Server 2012. USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) AS CDist FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID DESC, CDist DESC GO Now let us see the result of this query. We have changed the ORDER BY clause as well as the partitioning by SalesOrderID. You can see that the CUME_DIST() function provides different results, and we now see the value 1 multiple times: as we are using partitioning, we get a CUME_DIST() value within each group of SalesOrderID. CUME_DIST() is a long-awaited analytic function and I am glad to see it in SQL Server 2012. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
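
    For reference, the documented formula is easy to check against these results: for a row r,

        CUME_DIST(r) = (number of rows with values less than or equal to r's value) / (total rows in the window)

    so tied rows share the same value. That is why every row of the highest SalesOrderID gets 1 (all rows in the set sort at or below it), and the rows of SalesOrderID = 43667 get 0.5 when exactly half of the rows sort at or below them.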

    Read the article

  • ArchBeat Link-o-Rama for August 1, 2013

    - by OTN ArchBeat
    Performance Tuning – Systems Running BPEL Processes | Ravi Saraswathi and Jaswant Singh
    Ravi Saraswathi and Jaswant Singh, the authors of "Oracle SOA BPEL Process Manager 11gR1 - A Hands-on Tutorial", explain performance tuning of SOA composite applications for optimal performance and scalability.

    Steps to configure SAML 2.0 with Weblogic Server | Puneeth
    The blogger known only as Puneeth shares an illustrated technical post that will be of interest to those working with Oracle WebLogic and the Security Assertion Markup Language (SAML).

    Video: Planning and Getting Started - Developer PCs | Chris Muir
    Tune in to the latest episode of ADF Architecture TV to see Chris Muir explain why you don't have to buy the most expensive PCs in order to run JDeveloper.

    Key User Experience Design Principles for working with Big Data | John Fuller
    User Experience Designer John Fuller shares 6 core design principles for working with big data that focus on "helping people bring together a variety of data types in a fast and flexible way."

    Event: OTN Developer Day: ADF Mobile - Burlington, MA - Aug 28
    Through six sessions, including a hands-on workshop, you'll learn a simpler way to leverage your existing skills to develop enterprise mobile applications using Oracle ADF Mobile. Registration is free, but seating is limited.

    Optimizing WebCenter Portal Mobile Delivery | Jeevan Joseph
    FMW solution architect Jeevan Joseph "walks you through identifying and analyzing some common WebCenter Portal performance bottlenecks related to page weight and describes a generic approach that can streamline your portal while improving the performance and response times."

    Customizing specific instances of a WebCenter task flow | Jeevan Joseph
    Fusion Middleware A-Team solution architect Jeevan Joseph strikes again with this article that explains "how to set up parameters on MDS customization so that it is applied only under certain conditions...making it possible to customize individual instances of task flows."

    Exalogic Virtual Tea Break Snippets – Modifying Memory, CPU and Storage on a vServer | Andrew Hopkinson
    FMW solution architect Andrew Hopkinson walks you through "the simple process of resizing the resources associated with an already existing Exalogic vServer."

    Oracle ADF Mobile Virtual Developer Day - Next Week | Shay Shmeltzer
    JDeveloper product team lead Shay Shmeltzer shares agenda information for the OTN Virtual Developer Day event covering Mobile Application Development for iOS and Android, coming up one week from today, on August 7, 2013, 9am PT/Noon ET/1pm BRT.

    What's New In Oracle Enterprise Pack for Eclipse 12.1.2.1.0?
    New features and updates on the newly-released Oracle Enterprise Pack for Eclipse 12.1.2.1.0, now available for download from OTN.

    IOUG Cloud Builders Unite | Jeff Erickson
    Check out this great Oracle Magazine article by Jeff Erickson about IOUG members organizing around their common interest in building private clouds.

    Thought for the Day
    "Stuff that's hidden and murky and ambiguous is scary because you don't know what it does." — Jerry Garcia (August 1, 1942 – August 9, 1995)
    Source: brainyquote.com

    Read the article

  • SQL SERVER – Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function

    - by pinaldave
    Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: A Puzzle – Swap Value of Column Without Case Statement. There were more than 50 solutions proposed in the comments, many of them creative, and I have mentioned my personal favorites (different ones) here: Solution of Puzzle – Swap Value of Column Without Case Statement. However, I received lots of questions regarding one of the solutions, by SIJIN KUMAR V P, who used the function SOUNDEX. The request was to explain how SOUNDEX and DIFFERENCE work. Well, there is pretty decent documentation for the SOUNDEX function and DIFFERENCE on MSDN, and if I attempted to explain these functions I would end up writing the same details which are available there. Instead of writing theory, we will try to learn these functions through a couple of simple puzzles. Try to solve the puzzles using MSDN and see if you can learn something very quickly. In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The first character of the code is the first character of character_expression, and the second through fourth characters of the code are numbers that represent the letters in the expression. Vowels in character_expression are ignored unless they are the first letter of the string. DIFFERENCE returns an integer value: the number of characters in the SOUNDEX values that are the same. The return value ranges from 0 through 4, where 0 indicates weak or no similarity and 4 indicates strong similarity or identical values. Learning Puzzle 1: Now let us run the following four queries and observe the output. SELECT SOUNDEX('SQLAuthority') SdxValue SELECT SOUNDEX('SLTR') SdxValue SELECT SOUNDEX('SaLaTaRa') SdxValue SELECT SOUNDEX('SaLaTaRaM') SdxValue When you look at the result set, all four values are the same. The reason is that, to SQL Server's SOUNDEX function, all four strings sound alike. Learning Puzzle 2: Now let us run the following five queries and observe the output. SELECT DIFFERENCE (SOUNDEX('SLTR'),SOUNDEX('SQLAuthority')) SELECT DIFFERENCE (SOUNDEX('TH'),SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SQLAuthority',SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SLTR',SOUNDEX('SQLAuthority')) SELECT DIFFERENCE ('SLTR','SQLAuthority') When you look at the result set you will get results in the range from 1 to 4. Here is how to read them: the closer the result is to 0, the less similar the two values are, and the closer it is to 4, the more similar they are. Have you ever used the above two functions in a business need or on a production server? If yes, would you please leave a comment with use cases; I believe it will be beneficial to everyone. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
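
    As a worked hint for Puzzle 1 (my reading of the documented rules; run the queries to confirm): SOUNDEX keeps the leading S, drops the vowels and H, and maps L to 4, T to 3 and R to 6. In 'SQLAuthority' the Q is also dropped because it shares code 2 with the adjacent leading S, so all four strings should collapse to the same four-character code, S436.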

    Read the article

  • Google is indexing the wrong domain, which no longer works

    - by user174117
    Can anyone tell me why Google indexes the site http://0.3c.7aae.static.theplanet.com instead of the actual domain name (my domain is registered with Webmaster Tools)? The aforementioned link was pointing to my site, but has now become a broken link. Initially I tried 301 redirects, but nothing worked, and the hosting company cannot explain why, leaving me wondering what to do. Any guidance would be appreciated. Thanks

    Read the article

  • Learn Who Started that Trace with the Default Trace

    - by Jonathan Kehayias
    This is not Extended Event related, but it came from a question on Twitter about how to tell who created a server-side trace, and from what machine, and there is no way to explain this in 140 characters, so here’s a blog post. This information is tracked in the Default Trace and can be found by querying for EventClass 175, which is the Audit Server Alter Trace event (its trace_event_id in sys.trace_events). select trace_event_id, name from sys.trace_events where name like '%trace%' To query...(read more)
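
    The post is truncated here, but as a rough sketch of the kind of lookup it goes on to build (column names come from the standard trace schema), you can read the default trace file directly:

        DECLARE @path NVARCHAR(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT StartTime, LoginName, HostName, ApplicationName, TextData
        FROM fn_trace_gettable(@path, DEFAULT)
        WHERE EventClass = 175;  -- Audit Server Alter Trace Event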

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data.

    I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
    The presence of the customer’s SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer’s SME.

    After the data preparation and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models, and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer’s personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project:
      - The customer learns the basic patterns of frauds and fraud detection
      - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
      - The customer’s analysts learn how to perform much more in-depth analyses than they ever thought possible
      - The customer’s IT experts learn how to perform data extraction and preparation much more efficiently than they did before
      - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
      - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
      - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise

    Typically, the project also results in several important “side effects”:
      - Improved data quality
      - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
      - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer’s employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.

    I usually expect to perform the following tasks:
      - Establish the final infrastructure to measure the efficiency of the deployed models
      - Deploy the models in additional scenarios: through reports, and by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
      - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
      - Create smart ETL applications that divert suspicious data for immediate or later inspection
      - Investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    Of course, for the actual project, I would repeat the data and model preparation as needed.

    It is virtually impossible to tell in advance how much time the deployment would take, before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:
      - 1-2 days of training
      - 2-3 days for data preparation and data overview
      - 2 days for creating and evaluating the models
      - 1 day for initial preparation of the continuous learning infrastructure
      - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time without the need for constant engagement with me.

    Read the article
