Search Results

Search found 837 results on 34 pages for 'jim giercyk'.


  • Thoughts on the new JavaFX by Jim Connors

    - by Jacob Lehrbaum
    First, a brief editorial if I may.  The upcoming JavaFX 2.0 platform has been getting an overwhelmingly positive reaction from the community so far.  While the public sentiment seems to be cautiously optimistic, I've heard nothing but positive reactions from everyone that I've spoken to about the platform.  In fact, many of the early adopters of JavaFX have told us directly that they are very encouraged about the direction the platform is taking. One such early adopter is Oracle's own Jim Connors.  In his day job, Jim is a principal sales consultant (basically an engineer who supports Oracle's sales efforts) in the New York area.  However, Jim also co-wrote a book on JavaFX with Jim Clarke and Eric Bruno, and has spoken and conducted training sessions at events like the New York Java Developer Day, the Java Road Trip, and others. In his thoughtful editorial, Jim discusses some of the reasons why he believes the new directions Oracle is taking JavaFX make sense, including: better developer tools; lower barriers to adoption (better accessibility for existing Java developers); improved performance; and more flexibility (the ability to use other dynamic languages, etc.). To read more about Jim's thoughts on the new JavaFX, check out his blog.  Or if you want to learn more about the JavaFX platform, pick up a copy of his book.  And if you still want to use JavaFX Script, you can check out Project Visage.

    Read the article

  • Java Champion Jim Weaver on JavaFX

    - by Janice J. Heiss
    Hardly anyone knows more about JavaFX than Java Champion and Oracle’s JavaFX Evangelist, Jim Weaver, who will be leading two Hands-on Labs on aspects of JavaFX at this year’s JavaOne: HOL11265 – “Playing to the Strengths of JavaFX and HTML5” (with Jeff Klamer, App Designer, Jeff Klamer Design), Wednesday, Oct 3, 3:00 PM - 5:00 PM, Hilton San Francisco, Franciscan A/B/C/D; and HOL3058 – “Custom JavaFX Controls” (with Gerrit Grunwald, Senior Software Engineer, Canoo Engineering AG; Bob Larsen, Consultant, Larsen Consulting; and Peter Vašenda, Software Engineer, Oracle), Tuesday, Oct 2, 12:30 PM - 2:30 PM, Hilton San Francisco, Franciscan A/B/C/D. I caught up with Jim at JavaOne to ask him for a current snapshot of JavaFX. “In my opinion,” observed Weaver, “the most important thing happening with JavaFX is the ongoing improvement to rich-client Java application deployment. For example, JavaFX packaging tools now provide built-in support for self-contained application packages. A package may optionally contain the Java Runtime, and be distributed with a native installer (e.g., a DMG or EXE). This makes it easy for users to install JavaFX apps on their client machines, perhaps obtaining the apps from the Mac App Store, for example. Igor Nekrestyanov and Nancy Hildebrandt have written a comprehensive guide to JavaFX application deployment, the following section of which covers Self-Contained Application Packaging: http://docs.oracle.com/javafx/2/deployment/self-contained-packaging.htm#BCGIBBCI. Igor also wrote a blog post titled, "7u10: JavaFX Packaging Tools Update," that covers improvements introduced so far in Java SE 7 update 10. Here's the URL to the blog post: https://blogs.oracle.com/talkingjavadeployment/entry/packaging_improvements_in_jdk_7” I asked about how the strengths of JavaFX and HTML5 interact and reinforce each other. “They interact and reinforce each other very well. I was about to be amazed at your insight in asking that question, but then recalled that one of my JavaOne sessions is a Hands-on Lab titled ‘Playing to the Strengths of JavaFX and HTML5.’ In that session, we'll cover the JavaFX and HTML5 WebView control, the strengths of each technology, and the various ways that Java and contents of the WebView can interact.” And what is he looking forward to at JavaOne? “I'm personally looking forward to some excellent sessions, and connecting with colleagues and friends that I haven't seen in a while!” Jim Weaver is another good reason to feel good about JavaOne.

    Read the article

  • Technical Computing Initiative, Jim Gray and a Virtual Framed Letter on my Wall

    Today Microsoft announced their Technical Computing Initiative, a program to help scientists and engineers take advantage of the latest breakthroughs in parallel computing, bandwidth increases, and technologies that will make doing scientific research akin to using spreadsheets (as opposed to writing really complex custom code).  This is actually the culmination of work that the late Jim Gray, formerly a technical fellow at Microsoft, was working on.  I didn't really know Jim, and frankly only...

    Read the article

  • Get to Know a Candidate (5 of 25): Jim Carlson–Grassroots Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Carlson is an American businessman and the Grassroots Party nominee. Carlson is the owner of Last Place on Earth, a head shop located in Duluth, Minnesota. In September 2011, the shop was raided by police for selling bath salts and synthetic marijuana. After the raid, Carlson filed a lawsuit to strike down Minnesota's ban on the substances. His suit was dismissed by the court in November 2011. The Grassroots Party was created in the 1980s to oppose drug prohibition.  The party shares many of the political leftist values of the Green Party, but with a greater emphasis on marijuana/hemp legalization issues.  The permanent platform of the Grassroots Party is the Bill of Rights. Individual candidates' positions on issues vary from Libertarian to Green. All Grassroots candidates would end marijuana/hemp prohibition and re-legalize Cannabis for all its uses. Learn more about Jim Carlson and the Grassroots Party on Wikipedia.

    Read the article

  • Content Catalog for Oracle OpenWorld is Ready

    - by Rick Ramsey
    American Major League Baseball umpire Jim Joyce made one of the worst calls in baseball history when he ruled Jason Donald safe at first in Wednesday's game between the Detroit Tigers and the Cleveland Indians. The New York Times tells the story well. It was the 9th inning. There were two outs. And Detroit Tigers pitcher Armando Galarraga had pitched a perfect game. Instead of becoming the 21st pitcher in Major League Baseball history to pitch a perfect game, Galarraga became the 10th pitcher in Major League Baseball history to lose a perfect game with two outs in the ninth inning. More insight from the New York Times here. You can avoid a similar mistake and its attendant death threats, hate mail, and self-loathing by studying the Content Catalog just released for the Oracle OpenWorld, JavaOne, and Oracle Develop conferences being held in San Francisco September 19-23. The Content Catalog displays all the available content related to the event, the venue, and the stream or track you're interested in. Additional filters are available to narrow down your results even more. It's simple to use and a big help. Give it a try. It'll spare you the fate of Jim Joyce. - Rick

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:
    * Adam Bien, Consultant/Author/Speaker, adam-bien.com
    * Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
    * David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
    * Roberto Chinnici, Consulting Member of Technical Staff, Oracle
    * Jim Knutson, Java EE Architect, IBM
    * Reza Rahman, Lead Engineer, Caucho Technology, Inc.
    * Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria
    The panel addressed such topics as platform and API adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • curlftpfs mount disagrees with the fstab

    - by KayakJim
    I am working with curlftpfs to mount a remote FTP directory locally in Kubuntu 12.04 64-bit. I have the following entry in my /etc/fstab: curlftpfs#ftp_user:ftp_password@ftp_server /mnt/nimh fuse ro,noexec,nosuid,nodev,noauto,user,allow_other,uid=1000,gid=1000 0 0 I have created the directory in /mnt with the following: |-> ll /mnt total 4.0K drwxrwxr-x 2 jim fuse 4.0K Jan 6 09:56 nimh/ My user does belong to the fuse group as well: uid=1000(jim) gid=1000(jim) groups=1000(jim),27(sudo),105(fuse) I am able to mount manually without issue but then the /mnt changes to: |-> mount /mnt/nimh |-> ll /mnt total 0 drwxr-xr-x 1 jim jim 1.0K Dec 31 1969 nimh/ However when I attempt to umount /mnt/nimh I receive: umount: /mnt/nimh mount disagrees with the fstab My /etc/mtab looks like: curlftpfs#ftp://ftp_user:ftp_password@ftp_server/ /mnt/nimh fuse ro,noexec,nosuid,nodev,allow_other,user=jim 0 0 I am able to umount the filesystem without issue if I sudo. Any idea what I'm missing in order to be able to unmount without having to use sudo?

    Read the article

  • 2 Birds, 1 Stone: Enabling M2M and Mobility in Healthcare

    - by Eric Jensen
    Jim Connors has created a video showcase of a comprehensive healthcare solution, connecting a mobile application directly to an embedded patient monitoring system. In the demo, Jim illustrates how you can easily build solutions on top of the Java embedded platform, using Oracle products like Berkeley DB and Database Mobile Server. Jim is running Apache Tomcat on an embedded device, using Berkeley DB as the data store. BDB is transparently linked to an Oracle Database backend using Database Mobile Server. Information protection is important in healthcare, so it is worth pointing out that these products offer strong data encryption, for storage as well as transit. In his video, Jim does a great job of demystifying M2M. What's compelling about this demo is that it uses a solution architecture that enterprise developers are already comfortable and familiar with: a Java apps server with a database backend. The additional pieces used to embed this solution are Oracle Berkeley DB and Database Mobile Server. It functions transparently, from the perspective of Java apps developers. This means that organizations that understand Java apps (basically everyone) can use this technology to develop embedded M2M products. The potential uses for this technology in healthcare alone are immense; any device that measures and records some aspect of the patient could be linked, securely and directly, to the medical records database. Breathing, circulation, other vitals, sensory perception, blood tests, x-rays or CAT scans. The list goes on and on. In this demo case, it's a testament to the power of the Java embedded platform that they are able to easily interface the device, called a Pulse Oximeter, with the web application. If Jim had stopped there, it would've been a cool demo. But he didn't; he actually saved the most awesome part for the end! At 9:52 Jim drops a bombshell: he's also created an Android app, something a doctor would use to view patient health data from his mobile device. The mobile app is seamlessly integrated into the rest of the system, using the device agent from Oracle's Database Mobile Server. In doing so, Jim has really showcased the full power of this solution: the ability to build M2M solutions that integrate seamlessly with mobile applications. In closing, I want to point out that this is not a hypothetical demo using beta or even v1.0 products. Everything in Jim's demo is available today. What's more, every product shown is mature, and already in production at many customer sites, albeit not in the innovative combination Jim has come up with. If your customers are in the market for these types of solutions (and they almost certainly are) I encourage you to download the components and try it out yourself! All the Oracle products showcased in this video are available for evaluation download via Oracle Technology Network.

    Read the article

  • Interesting SQL Sorting Issue

    - by rofly
    It's crunch time; the deadline for my most recent contract is coming in two days and almost everything is complete and working fine (knock on wood) except for one issue. In one of my stored procedures, I need to return a result set as follows:

    group_id | name
    A101     | Craig
    A102     | Craig
    Z101     | Craig
    Z102     | Craig
    A101     | Jim
    A102     | Jim
    Z101     | Jim
    Z102     | Jim
    B101     | Andy
    B102     | Andy
    Z101     | Andy
    Z102     | Andy

    The names need to be sorted by the first character of the group id and also include the Z101/Z102 entries. By sorting strictly by the group id, I get a result set as follows:

    A101 | Craig
    A102 | Craig
    A101 | Jim
    A102 | Jim
    B101 | Andy
    B102 | Andy
    Z101 | Andy
    Z102 | Andy
    Z101 | Jim
    Z102 | Jim

    I really can't think of a solution that doesn't involve me making a cursor and bloating the stored procedure up more than it already is. I'm sure a great mind out there has an elegant solution and I'm eager to see what the community can come up with. Thanks a ton in advance.
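    One possible set-based approach (a sketch, not part of the original question, assuming SQL Server 2005 or later and that the rows land in a hypothetical table variable named @result): anchor each name to its smallest non-Z group id with a window function, so the Z rows sort alongside that name's "home" prefix.

    -- Hypothetical sketch: @result holds the unsorted rows (group_id, name).
    -- Each name sorts by its smallest non-Z group_id, so the A-group names
    -- (Craig, Jim) come before the B-group name (Andy), and every name keeps
    -- its Z101/Z102 rows grouped beneath it.
    SELECT   group_id, name
    FROM     @result
    ORDER BY MIN(CASE WHEN group_id NOT LIKE 'Z%' THEN group_id END)
                 OVER (PARTITION BY name),
             name,
             group_id;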

    Read the article

  • Configuration problems with django and mod_wsgi

    - by Jimbo
    Hi, I've got problems getting django to work on apache 2.2 with mod_wsgi. Django is installed and mod_wsgi too. I can even see a 404 page when accessing the path and I can log in to the django admin. But if I want to install the tagging module I get the following error:

    Traceback (most recent call last):
      File "setup.py", line 49, in <module>
        version_tuple = __import__('tagging').VERSION
      File "/home/jim/django-tagging/tagging/__init__.py", line 3, in <module>
        from tagging.managers import ModelTaggedItemManager, TagDescriptor
      File "/home/jim/django-tagging/tagging/managers.py", line 5, in <module>
        from django.contrib.contenttypes.models import ContentType
      File "/usr/lib/python2.5/site-packages/django/contrib/contenttypes/models.py", line 1, in <module>
        from django.db import models
      File "/usr/lib/python2.5/site-packages/django/db/__init__.py", line 10, in <module>
        if not settings.DATABASE_ENGINE:
      File "/usr/lib/python2.5/site-packages/django/utils/functional.py", line 269, in __getattr__
        self._setup()
      File "/usr/lib/python2.5/site-packages/django/conf/__init__.py", line 40, in _setup
        self._wrapped = Settings(settings_module)
      File "/usr/lib/python2.5/site-packages/django/conf/__init__.py", line 75, in __init__
        raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
    ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings

    My httpd.conf:

    Alias /media/ /home/jim/django/mysite/media/
    <Directory /home/jim/django/mysite/media>
        Order deny,allow
        Allow from all
    </Directory>
    Alias /admin/media/ "/usr/lib/python2.5/site-packages/django/contrib/admin/media/"
    <Directory "/usr/lib/python2.5/site-packages/django/contrib/admin/media/">
        Order allow,deny
        Allow from all
    </Directory>
    WSGIScriptAlias /dj /home/jim/django/mysite/apache/django.wsgi
    <Directory /home/jim/django/mysite/apache>
        Order deny,allow
        Allow from all
    </Directory>

    My django.wsgi:

    import sys, os
    sys.path.append('/home/jim/django')
    sys.path.append('/home/jim/django/mysite')
    os.chdir('/home/jim/django/mysite')
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()

    I have been trying to get this to work for a few days and have read several blogs and answers here on SO, but nothing has worked.

    Read the article

  • How to create a generic class which takes 3 types.

    - by scope-creep
    I'm trying to make a generic class that takes one of three types: either a simple string, an IList<string>, or an IList<OntologyStore>.

    public class OntologyStore { }

    public sealed class jim<T> where T : new()
    {
        IList<string> X = null;
        IList<OntologyStore> X1 = null;

        public jim()
        {
            if (typeof(T) == typeof(String))
            {
                X = new List<string>();
            }
            if (typeof(T) == typeof(OntologyStore))
            {
                X1 = new List<OntologyStore>();
            }
        }
    }

    I can easily create jim<OntologyStore> x1 = new jim<OntologyStore>(), which you would expect to work, but when I put in jim<string> x2 = new jim<string>() the compiler reports that string must be a non-abstract type with a public parameterless constructor, which you would expect. Is it possible to create a generic class which can be instantiated as a class that holds a string, an IList<string>, or an IList<OntologyStore>?

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table.  For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space.  In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950.  My first thought was, in the words of Scooby, Hmmmmph?  But, I love a good challenge, so I dove in. Reading in the flat file was rather simple.  The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager.  In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37).  Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252.  However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window.  Column 1 is actually the packed date, columns 0 and 2 are the values in the rest of the file.  We are only interested in Column 1, which is a 2 byte field representing a packed date.  We know that 2 packed digits can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes.  If you are confused, stay tuned….this will make sense in a minute.  Right-click on your Flat File Source shape and select “Show Advanced Editor”. Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values.  Now we are getting somewhere! Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2 position byte stream to a 4 position Unicode string containing the packed data.  You need the string to be 4 bytes long because it will contain the 4 packed digits.  Here is what that should look like in the Data Conversion shape: Direct the output of your data flow to a test table or file to see the results.  In my case, I created a test table.  The results looked like this:  Hold on a second!  That doesn't look like a date at all.  No, of course not.  It is a hex number which represents the days which have passed between 01/01/1950 and the date.  We have to convert the Hex value to a decimal value, and use the DATEADD function to get a date value.
    Luckily, I have created a function to convert Hex to Decimal:

    -- =============================================
    -- Author:       Jim Giercyk
    -- Create date:  March, 2012
    -- Description:  Converts a Hex string to a decimal value
    -- =============================================
    CREATE FUNCTION [dbo].[ftn_HexToDec]
    (
        @hexValue NVARCHAR(6)
    )
    RETURNS DECIMAL
    AS
    BEGIN
        -- Declare the return variable here
        DECLARE @decValue DECIMAL

        IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue, 3, 4)

        DECLARE @decTab TABLE
        (
            decPos1 VARCHAR(2),
            decPos2 VARCHAR(2),
            decPos3 VARCHAR(2),
            decPos4 VARCHAR(2)
        )

        DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue, 1, 1)
        DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue, 2, 1)
        DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue, 3, 1)
        DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue, 4, 1)

        INSERT @decTab
        VALUES
        (CASE WHEN @pos1 = 'A' THEN '10'
              WHEN @pos1 = 'B' THEN '11'
              WHEN @pos1 = 'C' THEN '12'
              WHEN @pos1 = 'D' THEN '13'
              WHEN @pos1 = 'E' THEN '14'
              WHEN @pos1 = 'F' THEN '15'
              ELSE @pos1
         END,
         CASE WHEN @pos2 = 'A' THEN '10'
              WHEN @pos2 = 'B' THEN '11'
              WHEN @pos2 = 'C' THEN '12'
              WHEN @pos2 = 'D' THEN '13'
              WHEN @pos2 = 'E' THEN '14'
              WHEN @pos2 = 'F' THEN '15'
              ELSE @pos2
         END,
         CASE WHEN @pos3 = 'A' THEN '10'
              WHEN @pos3 = 'B' THEN '11'
              WHEN @pos3 = 'C' THEN '12'
              WHEN @pos3 = 'D' THEN '13'
              WHEN @pos3 = 'E' THEN '14'
              WHEN @pos3 = 'F' THEN '15'
              ELSE @pos3
         END,
         CASE WHEN @pos4 = 'A' THEN '10'
              WHEN @pos4 = 'B' THEN '11'
              WHEN @pos4 = 'C' THEN '12'
              WHEN @pos4 = 'D' THEN '13'
              WHEN @pos4 = 'E' THEN '14'
              WHEN @pos4 = 'F' THEN '15'
              ELSE @pos4
         END)

        SET @decValue = (CONVERT(INT, (SELECT decPos4 FROM @decTab)))                  +
                        (CONVERT(INT, (SELECT decPos3 FROM @decTab)) * 16)             +
                        (CONVERT(INT, (SELECT decPos2 FROM @decTab)) * (16 * 16))      +
                        (CONVERT(INT, (SELECT decPos1 FROM @decTab)) * (16 * 16 * 16))

        RETURN @decValue
    END
    GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950 and FINALLY arrived at my “unpacked relative date”.  Here is the query I used to retrieve the formatted date, and the result set which was returned:

    SELECT [packedDate] AS 'Hex Value',
           dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
           CONVERT(DATE, DATEADD(day, dbo.ftn_HexToDec([packedDate]), '01/01/1950'), 101) AS 'Relative String Date'
    FROM   [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS.  The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise-level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.
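    An aside that is not in the original article: on SQL Server 2008 or later, the hex-to-decimal step can probably be handled with built-in conversions alone, since style 1 of CONVERT interprets a '0x…' string as binary. A minimal sketch, using a made-up packed value for illustration:

    -- Sketch (assumes SQL Server 2008+): interpret the '0x....' string as
    -- VARBINARY (style 1), reinterpret those bytes as an INT, then add the days.
    DECLARE @hex NVARCHAR(6) = '0x2B8E'   -- hypothetical packed value

    SELECT CONVERT(INT, CONVERT(VARBINARY(4), @hex, 1))                              AS 'Decimal Value',  -- 11150
           DATEADD(DAY, CONVERT(INT, CONVERT(VARBINARY(4), @hex, 1)), '01/01/1950')  AS 'Relative Date'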

    Read the article

  • How LINQ to Object statements work

    - by rajbk
    This post goes into detail as to now LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts: Using Delegates in C# Part 1 Using Delegates in C# Part 2 We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so: 1: public class Employee { 2: public int ID { get; set;} 3: public string FirstName { get; set;} 4: public string LastName {get; set;} 5: public string Country { get; set; } 6: } and a collection of employees like so: 1: var employees = new List<Employee> { 2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 6: }; Filtering We wish to  find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID. 1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) { 2: var filteredEmployees = new List<Employee>(); 3: foreach (Employee emp in employees) { 4: if (emp.ID % 2 == 0) { 5: filteredEmployees.Add(emp); 6: } 7: } 8: return filteredEmployees; 9: } The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword. 1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 2: foreach (Employee emp in employees) { 3: if (emp.ID % 2 == 0) { 4: yield return emp; 5: } 6: } 7: } We put these together in a console application. 1: using System; 2: using System.Collections.Generic; 3: //No System.Linq 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 15: }; 16: var filteredEmployees = GetEmployeesWithEvenID(employees); 17:  18: foreach (Employee emp in filteredEmployees) { 19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 20: emp.ID, emp.FirstName, emp.LastName, emp.Country); 21: } 22:  23: Console.ReadLine(); 24: } 25: 26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 27: foreach (Employee emp in employees) { 28: if (emp.ID % 2 == 0) { 29: yield return emp; 30: } 31: } 32: } 33: } 34:  35: public class Employee { 36: public int ID { get; set;} 37: public string FirstName { get; set;} 38: public string LastName {get; set;} 39: public string Country { get; set; } 40: } Output: ID 2 First_Name Jim Last_Name Ashlock Country UK ID 4 First_Name Jill Last_Name Anderson Country AUS Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering and lets give our method the name Where ;-) We will add another parameter to our Where method. 
This additional parameter will be a delegate with the following declaration. public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Lets use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method. The next code snippet shows this change (see line 15).  For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  The output displays the same two employees.  
Our Where method is too restricted since it works with a collection of Employees only. Lets change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post,  that .NET 3.5 comes with a lot of predefined delegates including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14:  15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a  filter criteria. Projection Now lets enumerate on the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could “project” our existing IEnumerable<Employee> into a new collection of IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app. 
1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15:  16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22:  23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40:  41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47:  48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>.  Lets make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (shape of data) as returned from the selector delegate. 1:  2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted. 
1:  2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13:  14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20:  21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both line 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class. Let us make the Select and Where extension methods on IEnumerable<T> 1: public static class MyExtensionMethods { 2: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension method makes the syntax much cleaner as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result. 
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
Line 16 can be rewritten a query expression like so: 1: var formattedEmployees = from emp in employees 2: where emp.ID % 2 == 0 3: select new { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods.  So far we have been using our extension methods. The System.Linq namespace contains several extension methods for objects that implement the IEnumerable<T>. You can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let’s get rid of our extension methods (which I purposefully wrote to be of the same signature as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; //Added 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 15: }; 16:  17: var formattedEmployees = from emp in employees 18: where emp.ID % 2 == 0 19: select new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: }; 23:  24: foreach (var emp in formattedEmployees) { 25: Console.WriteLine("ID {0} Full_Name {1}", 26: emp.ID, emp.FullName); 27: } 28: Console.ReadLine(); 29: } 30: } 31:  32: public class Employee { 33: public int ID { get; set;} 34: public string FirstName { get; set;} 35: public string LastName {get; set;} 36: public string Country { get; set; } 37: } 38:  39: public class EmployeeFormatted { 40: public int ID { get; set; } 41: public string FullName {get; set;} 42: } This post has shown you a basic overview of LINQ to Objects work by showning you how an expression is converted to a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL where an expression tree is constructed – an in memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.

    Read the article

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files and adhered to the best practices that were advocated by well-respected professionals. He was very proud of the fact that there was never a database that he managed that lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time that he built this server, his understanding of the finer details of configuration was not as clear as it is today. The server was built on a shoestring budget and with very little time for testing and implementation. Jim often monitored the health of the database, but in more of a reactionary mode due to user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, the announcement was made that revealed that the business had hit hard times financially. Budgets were being cut, limitations on spending were implemented and a reduction in full-time staff was required. Since having two DBAs was regarded as a luxury by many, this meant that either Sam or Jim was about to find himself out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect, but effective and responsive. He made mistakes regularly, but he shows that he learns from them and they often result in innovative solutions. When given a task it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario. He must lay off one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go, and why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is… (This is where you enter a comment below.)

    Read the article

  • SQLUniversity Professional Development Week: Learning To Fly

    - by andyleonard
    Introduction Clem and Jim Bob were out hunting the other day in the woods south of Farmville. As they crossed a ridge, they came upon a big ol' Momma Bear and her cub. The larger bear immediately started towards them. Jim Bob took off running as fast as he could. He stopped when he realized Clem wasn't with him. And when he saw Clem reaching into his pack, Jim Bob was incredulous: "Hurry Clem! That bar's comin' fast! You need to out run 'er!" Clem kicked off his boots and pulled running shoes out...(read more)

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!
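    If you want to collect elapsed-time and I/O numbers of your own rather than rely on the Execution Plan percentages, one simple way (a sketch, not from the original post) is to wrap each candidate query in SET STATISTICS TIME and SET STATISTICS IO, which report CPU/elapsed milliseconds and logical reads per statement on the Messages tab:

    -- Sketch: measure one of the candidate queries (the grouped SELECT shown above).
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no , m.lastname , m.firstname
    HAVING  COUNT(Payment_No) > 1;

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;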

    Read the article

  • Finding all IP ranges belonging to a specific ISP

    - by Jim Jim
    I'm having an issue with a certain individual who keeps scraping my site in an aggressive manner; wasting bandwidth and CPU resources. I've already implemented a system which tails my web server access logs, adds each new IP to a database, keeps track of the number of requests made from that IP, and then, if the same IP goes over a certain threshold of requests within a certain time period, it's blocked via iptables. It may sound elaborate, but as far as I know, there exists no pre-made solution designed to limit a certain IP to a certain amount of bandwidth/requests. This works fine for most crawlers, but an extremely persistent individual is getting a new IP from his/her ISP pool each time they're blocked. I would like to block the ISP entirely, but don't know how to go about it. Doing a whois on a few sample IPs, I can see that they all share the same "netname", "mnt-by", and "origin/AS". Is there a way I can query the ARIN/RIPE database for all subnets using the same mnt-by/AS/netname? If not, how else could I go about getting every IP belonging to this ISP? Thanks.

    Read the article

  • Looking under the hood of SSRS

    - by Jim Giercyk
    SSRS is a powerful tool, but there is very little available to measure its performance or view the SSRS execution log or catalog in detail.  Here are a few simple queries that will give you insight into the system that you never had before.

    ACTIVE REPORTS:  Have you ever seen your SQL Server performance take a nose dive due to a long-running report?  If the SPID is executing under a generic Report ID, or it is a scheduled job, you may have no way to tell which report is killing your server.  Running this query will show you which reports are executing at a given time, and WHO is executing them.

        USE ReportServerNative

        SELECT runningjobs.computername,
               runningjobs.requestname,
               runningjobs.startdate,
               users.username,
               Datediff(s, runningjobs.startdate, Getdate()) / 60 AS 'Active Minutes'
        FROM   runningjobs
               INNER JOIN users
                       ON runningjobs.userid = users.userid
        ORDER  BY runningjobs.startdate

    SSRS CATALOG:  We have all asked “What was the last thing that changed?”, or better yet, “Who in the world did that?!”.  Here is a query that will show all of the reports in your SSRS catalog, when they were created and changed, and by whom.

        USE ReportServerNative

        SELECT DISTINCT catalog.PATH,
                        catalog.name,
                        users.username   AS [Created By],
                        catalog.creationdate,
                        users_1.username AS [Modified By],
                        catalog.modifieddate
        FROM   catalog
               INNER JOIN users
                       ON catalog.createdbyid = users.userid
               INNER JOIN users AS users_1
                       ON catalog.modifiedbyid = users_1.userid
               INNER JOIN executionlogstorage
                       ON catalog.itemid = executionlogstorage.reportid
        WHERE  ( catalog.name <> '' )

    SSRS EXECUTION LOG:  Sometimes we need to know what was happening on the SSRS report server at a given time in the past.  This query will help you do just that.  You will need to set the timestart and timeend in the WHERE clause to suit your needs.

        USE ReportServerNative

        SELECT catalog.name AS report,
               e.username AS [User],
               e.timestart,
               e.timeend,
               Datediff(mi, e.timestart, e.timeend) AS 'Time In Minutes',
               catalog.modifieddate AS [Report Last Modified],
               users.username
        FROM   catalog (nolock)
               INNER JOIN executionlogstorage e (nolock)
                       ON catalog.itemid = e.reportid
               INNER JOIN users (nolock)
                       ON catalog.modifiedbyid = users.userid
        WHERE  e.timestart >= Dateadd(s, -1, '03/31/2012')
               AND e.timeend <= Dateadd(DAY, 1, '04/02/2012')

    LONG RUNNING REPORTS:  This query will show the longest running reports over a given time period.  Note that the “>5” in the WHERE clause sets the report threshold at 5 minutes, so anything that ran less than 5 minutes will not appear in the result set.  Adjust the threshold and start/end times to your liking.  With this information in hand, you can better optimize your system by tweaking the longest running reports first.

        USE ReportServerNative

        SELECT e.instancename,
               catalog.PATH,
               catalog.name,
               e.username,
               e.timestart,
               e.timeend,
               Datediff(mi, e.timestart, e.timeend) AS 'Minutes',
               e.timedataretrieval,
               e.timeprocessing,
               e.timerendering,
               e.[RowCount],
               users_1.username AS createdby,
               CONVERT(VARCHAR(10), catalog.creationdate, 101) AS 'Creation Date',
               users.username AS modifiedby,
               CONVERT(VARCHAR(10), catalog.modifieddate, 101) AS 'Modified Date'
        FROM   executionlogstorage e
               INNER JOIN catalog
                       ON e.reportid = catalog.itemid
               INNER JOIN users
                       ON catalog.modifiedbyid = users.userid
               INNER JOIN users AS users_1
                       ON catalog.createdbyid = users_1.userid
        WHERE  e.timestart > '03/31/2012'
               AND e.timestart <= '04/02/2012'
               AND Datediff(mi, e.timestart, e.timeend) > 5
               AND catalog.name <> ''
        ORDER  BY [Minutes] DESC

    I have used these queries to build SSRS reports that I can refer to quickly, and export to Excel if I need to report or quantify my findings.  I encourage you to look at the data in the ReportServerNative database on your report server to understand the queries and create some of your own.  For instance, you may want a query to determine which reports are using which shared data sources (a sketch of one approach follows below).  Work smarter, not harder!
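
    As a starting point for the shared data source exercise, here is a minimal sketch of such a query.  It assumes the default ReportServer schema, where Catalog items of Type 2 are reports and the DataSource table's Link column points at the shared data source item; verify the table and column names against your own instance before relying on it.

        USE ReportServerNative

        -- Reports with embedded (non-shared) data sources have a NULL Link value,
        -- so the inner join naturally leaves them out of the result set.
        SELECT reports.PATH,
               reports.name AS ReportName,
               sources.name AS SharedDataSource
        FROM   catalog AS reports
               INNER JOIN datasource
                       ON reports.itemid = datasource.itemid
               INNER JOIN catalog AS sources
                       ON datasource.link = sources.itemid
        WHERE  reports.type = 2       -- 2 = report in the Catalog table
        ORDER  BY reports.PATH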

    Read the article

  • Adding a DLL to the GAC in Windows 7

    - by Jim Giercyk
    I recently created a DLL and I wanted to reference it from a project I was developing in Visual Studio.  In previous versions of Windows, doing so was simply a matter of dropping the DLL file in the C:\Windows\assembly folder.  That would add the DLL to the Global Assembly Cache (GAC) and make it accessible in Visual Studio.  However, as is often the case, Windows 7 is different.  Even if you have Administrator privileges on your machine, you still do not have permission to drop a file in the assembly folder.

    Undaunted, I thought about using the old DOS command line utility gacutil.exe.  Microsoft developed the tool as part of the .Net framework, and it is available in the Windows SDK Framework Tools.  If you have never used gacutil.exe before, you can find out everything you ever wanted to know but were afraid to ask here: http://msdn.microsoft.com/en-us/library/ex0ss12c(v=vs.80).aspx .  Unfortunately, if you do not have the Windows SDK loaded on your development machine, you will need to install it to use gacutil, but it is relatively quick and painless, and the framework tools are very useful.  Look here for your latest SDK: http://www.microsoft.com/download/en/search.aspx?q=Windows%20SDK .

    After installing the SDK, I tried installing my DLL to the GAC by running gacutil from a DOS command line, and it failed with an access error.  That’s odd.  Microsoft is shipping a tool that cannot be executed even with Administrator rights?  Let me stop here and say that I am by no means a Windows security expert, so I actually did contact my system administrators, and they were not sure how to fix the problem….there must be a super administrator access level, but it isn’t available to your average developer in my company.  The solution outlined here works within the boundaries of a normal Windows Administrator.

    So, now the hacker in me bubbles to the surface.  What if I were to create a simple BAT file containing the gacutil command?  It’s so crazy it just might work!  Ugh!  Running the BAT file failed the same way, and I was starting to think this would never work, but then I realized that simply executing a batch program did not change my level of access.  Typically in Windows 7, you would select the “Run As Administrator” option to temporarily act as an administrator for the purpose of executing a process.  However, that option is not available for BAT files run from the command line.

    SOLUTION: Create a desktop shortcut to execute the BAT file, which in turn will execute the command line utility…..are you still with me?  I created a shortcut and pointed it to my batch file.  Theoretically, all I need to do now is right-click on the shortcut and select “Run As Administrator” and we’re good, right?  Well, kinda.  The name of the DLL is passed in to my BAT file as a parameter.  Therefore, I either have to hard-code the file name in the BAT program (YUCK!!), or I can leave the parameter and drag the DLL file to the shortcut and drop it.  Sweet, drag-and-drop works for me…..but if I use the drag-and-drop method, there is no way for me to right-click and select “Run As Administrator”.  That is not a problem…..I simply have to adjust the properties of the shortcut I created and I am in business.  I right-clicked on the shortcut and selected “Properties”.  Under the “Shortcut” tab there is an “Advanced” button…..I clicked it, and all I needed to do was check the “Run As Administrator” box.

    In summary, what I have done is create a BAT file to execute a command line utility, gacutil.exe.  Then, rather than executing the BAT file from the command line, I created a desktop shortcut to run it and set the shortcut properties to “Run As Administrator”.  This effectively means I am executing the command line utility with Administrator privileges.  Pretty sneaky.  Now, when I drag the DLL file over to the shortcut, it starts the BAT file and adds the DLL to the assembly cache.  I created another BAT file to remove a DLL from the GAC in case the need should arise; it is the same idea, but it calls gacutil with the /u (uninstall) switch and the assembly name instead of /i and the file path.

    Give it a try.  I can’t imagine why updating the GAC has been made into such a chore in Windows 7.  Hopefully there is a service pack in the works that will give developers the functionality they had in Windows XP, but in the meantime, this workaround is extremely useful.
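
    For reference, here is a minimal sketch of what that pair of BAT files might look like.  The file names and the path to gacutil.exe are assumptions (the path varies by SDK version and install location), so adjust them to match your machine.

        @echo off
        REM gac-install.bat - drag a DLL onto the shortcut to install it into the GAC.
        REM The SDK path below is a placeholder; point it at your own copy of gacutil.exe.
        "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\gacutil.exe" /i "%1"
        pause

        @echo off
        REM gac-remove.bat - removes an assembly from the GAC by assembly name (not file path).
        "C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin\gacutil.exe" /u "%1"
        pause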

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processor. A grade school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970s.  Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming.  Because knowledge of the low-level concepts seems to be waning, I thought a review would be in order.

        CHARACTER:  See Spot Run
        HEX:        53 65 65 20 53 70 6F 74 20 52 75 6E
        DECIMAL:    83 101 101 32 83 112 111 116 32 82 117 110
        BINARY:     01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110

    In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.

    CHARACTER:  The character level is what is rendered by the computer.  A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable.  Each character represents 1 BYTE of data.  For example, the character string “See Spot Run” is 12 Bytes long, exclusive of the quotation marks.  Remember, a SPACE is an unprintable character, but it still requires a byte.  In the example I have used ASCII, the character set on which the default Windows code page is based, which you can see here:  http://www.asciitable.com/

    HEX:  Hex is short for hexadecimal, or Base 16.  Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch.  Hex is the compact way we write down what the computer is doing, because each hex digit maps to exactly four bits; its values range from zero to fifteen, or 0 – F.  Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10.  Therefore, the number 10 in Hex is equal to the number 16 in Decimal.

    DECIMAL:  The Decimal conversion is strictly for us humans to use for calculations and conversions.  It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex.  In the old days, an error in a program could be found by determining the displacement from the entry point of a module.  Since those values were dumped from the computer’s head, they were in hex.  A programmer needed to convert them to decimal, do the equation and convert back to hex.  This gets into relative and absolute addressing, a topic for another day.

    BINARY:  Binary, or machine code, is where any value can be expressed in 1s and 0s.  It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0.  In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s?  Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ).  Each transistor is a bit, and the order that the transistors fire (or do not fire) is what distinguishes one value from another in the computer’s head (or CPU).  Consider 32-bit vs 64-bit processing…..a 64-bit processor has the capability to read 64 transistors at a time.  A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster.  There are many more factors involved in CPU performance, but that is the fundamental difference.

        DECIMAL   HEX   BINARY
           0       0    0000
           1       1    0001
           2       2    0010
           3       3    0011
           4       4    0100
           5       5    0101
           6       6    0110
           7       7    0111
           8       8    1000
           9       9    1001
          10       A    1010
          11       B    1011
          12       C    1100
          13       D    1101
          14       E    1110
          15       F    1111

    Remember that each character is a BYTE, that there are 2 HEX characters in a byte (each called a nibble), and that there are 8 BITS in a byte.  I hope you enjoyed reading about the theory of data processing.  This is just a high-level explanation, and there is much more to be learned.  It is safe to say that, no matter how advanced our programming languages and visual studios become, they are nothing more than a way to interpret bits and bytes.  There is nothing like the joy of hex to get the mind racing.
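
    If you have a copy of SQL Server handy, you can see the same character-to-byte mapping for yourself.  This is just an illustrative sketch using standard T-SQL functions; any reasonably recent version should behave the same way.

        -- Show the hex bytes behind the sample string (returns 0x5365652053706F742052756E).
        SELECT CONVERT(VARBINARY(12), 'See Spot Run') AS HexBytes;

        -- Show the decimal value of a single character and convert it back again.
        SELECT ASCII('S') AS DecimalValue,   -- 83
               CHAR(83)   AS BackToCharacter; -- 'S'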

    Read the article

  • SSIS Debugging Tip: Using Data Viewers

    - by Jim Giercyk
    When you have an SSIS package error, it is often very helpful to see the data records that are causing the problem.  After all, if your input has 50,000 records and 1 of them has corrupt data, it can be a chore.  Your execution results will tell you which column contains the bad data, but not which record…..enter the Data Viewer.

    In this scenario I have created a truncation error.  The input length of [lastname] is 50, but the output table has a length of 15.  When it runs, at least one of the records causes the package to fail.  Now what?  We can tell from our execution results that there is a problem with [lastname], but we have no idea which record is at fault.

    Let’s identify the row that is actually causing the problem.  First, we grab the oft-forgotten Row Count shape from our toolbar and connect it to the error output from our input query.  Remember that in order to intercept errors with the error output, you must redirect them.  The Row Count shape requires 1 integer variable.  For our purposes, we will not reference the variable, but it is still required in order for the package to run.  Typically we would use the variable to hold the number of rows in the table and refer back to it later in our process.  We are simply using the Row Count as a “Dead End” for errors.  I called my variable RowCounter.  To create a variable, with no shapes selected, right-click on the background and choose Variable.

    Once we have set up the Row Count shape, we can right-click on the red line (error output) from the query, and select Data Viewers.  In the popup, we click the add button and a configuration window appears.  There are other fancier options we can play with, but for now we just want to view the output in a grid.  We select Grid, then click OK on all of the popup windows to shut them down.  We should now see a grid icon with a pair of glasses on the error output line.

    So, we are ready to catch the error output in a grid and see what is causing the problem!  This time when we run the package, it does not fail because we directed the error to the Row Count.  We also get a popup window showing the error record in a grid.  If there were multiple errors we would see them all.  Indeed, the [lastname] column is longer than 15 characters.  Notice the last column in the grid, [Error Code – Description].  We knew this was a truncation error before we added the grid, but if you have worked with SSIS for any length of time, you know that some errors are much more obscure.  The description column can be very useful under those circumstances!

    Data viewers can be used any time we want to see the data that is actually in the pipeline; they stop the package temporarily until we close them.  Also remember that the Row Count shape can be used as a “Dead End”.  It is useful during development when we want to see the output from a data flow, but don’t want to update a table or file with the data.  Data viewers are an invaluable tool for both development and debugging.  Just remember to REMOVE THEM before putting your package into production.
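
    Once the data viewer has shown you an offending value, a quick query against the source can tell you how many rows share the same problem.  This is only a sketch; the table and column names below are stand-ins for whatever your own source actually uses.

        -- Hypothetical source table; substitute your own names and the real destination length.
        SELECT lastname,
               LEN(lastname) AS NameLength
        FROM   dbo.CustomerSource
        WHERE  LEN(lastname) > 15;   -- anything longer than the 15-character destination will truncate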

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that do require action usually mean opening an SSIS package, seeing where the trouble is, and trying to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high-level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along.

    In the Data Flow for this example there is a caution triangle.  For the purpose of this discussion I am deliberately creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply.  Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition.

    The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.  I added 3 columns to my error record: Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name come from system variables.

    In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation, others will be sent to the error output.  However, the package will not fail.  We can see that of the 14 input rows, 8 were redirected to the error table.  This information can be used by another step, another scheduled process, or a trigger to determine whether an alert should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things; date and time, for example, seem like something you would want in your output.

    In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent (a sketch of one such check follows below).  This will eliminate false alarms, and give you a head start when a genuine error occurs.
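
    As a rough sketch of what that might look like in T-SQL (the table and column names here are invented for illustration, and the Step Name value would typically come from the System::TaskName variable), the error table and a follow-up alert check could be as simple as:

        -- Hypothetical error table populated by the redirected error output.
        CREATE TABLE dbo.PackageErrorLog
        (
            ErrorLogID       INT IDENTITY(1, 1) PRIMARY KEY,
            Severity         VARCHAR(20)   NOT NULL,   -- e.g. 'FATAL', 'WARNING', 'TEST - IGNORE'
            PackageName      VARCHAR(200)  NOT NULL,   -- from the System::PackageName variable
            StepName         VARCHAR(200)  NOT NULL,   -- from the System::TaskName variable
            ErrorDescription VARCHAR(4000) NULL,
            LoggedAt         DATETIME      NOT NULL DEFAULT (GETDATE())
        );

        -- A follow-up step or scheduled job can page someone only when a fatal error
        -- has been logged recently; everything else just accumulates for later review.
        SELECT PackageName,
               StepName,
               COUNT(*) AS ErrorCount
        FROM   dbo.PackageErrorLog
        WHERE  Severity = 'FATAL'
               AND LoggedAt >= DATEADD(HOUR, -1, GETDATE())
        GROUP  BY PackageName, StepName;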

    Read the article
