Search Results

Search found 30217 results on 1209 pages for 'database'.


  • Fundtech’s Global PAYplus Achieves Oracle Exadata and Oracle Exalogic Optimized Status

    - by Javier Puerta
    Fundtech, a leader in global transaction banking solutions, has announced that Global PAYplus® – Services Platform (GPP-SP) version 4 has achieved Oracle Exadata Optimized and Oracle Exalogic Optimized status. (Read the full announcement here.) "GPP-SP testing was done in the third quarter of 2012 in the Oracle Exastack Lab located in the Oracle Solution Center in Linlithgow, Scotland. It showed that an integrated solution can result in a highly streamlined installation, enabling reduced cost of evaluation, acquisition and ownership. Highlights of the transaction processing test are as follows: 9.3 million mass payments per hour; 5.7 million single payments per hour. The test found that the optimized combination of GPP-SP running on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud is able to increase transactions-per-second (TPS) output per core and to reduce total cost of ownership (TCO). These volumes were achieved using only 25% of the Exadata/Exalogic processing capacity."

    Read the article

  • Installing SubText with Web PI

    - by Ben Griswold
    SubText is the engine behind our company blog. With the goal of ensuring a smooth transition between the main website and the blogs, I spent some time tightening up the styles for the aggregate and individual blogs last week.  This required a custom SubText skin and a lot of CSS tweaking. Though I’ve previously had the SubText source running on my machine, there was no need to update or rebuild the solution in this case, so I just went ahead with a local installation using the Microsoft Web Platform Installer (Web PI).  I just checked the SubText box, provided answers to a few key setup questions (admin user credentials, SubText database, etc.) and I was up and running in minutes.  Once the setup was complete, I was asked if I’d like to launch SubText.  The SubText Installation Wizard picked up where Web PI left off, and the setup couldn’t have been easier.  Web PI provides quick and easy installs for lots of goodies.  Check it out.

    Read the article

  • Coding standards

    - by Piotr Rodak
    This post will be about coding standards. There are countless articles and blog posts related to this topic, so I know this post will not be too revealing. Yet I would like to mention a few things I came across during my work with T-SQL code. Naming conventions - there are many of them, obviously. It's too bad when all of them are used in the same database, and sometimes even in the same stored procedure. It is not uncommon to see something like create procedure dbo.Proc1 (@ParamId int) as begin declare...(read more)

    Read the article

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    Post by David Allan. ODI is renowned for its declarative designer and minimal expression-based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow-based designer, while retaining ODI's key differentiators: minimal expression-based definition; the ability to incrementally design an interface and to extract/load data from any combination of sources; and, most importantly, backing by ODI's extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements, through consistent interactions of components and concepts all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster.

    So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below, all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross-referencing, so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below, we can customize how the mapping is accomplished by picking Knowledge Modules; in ODI 12c you can pick multiple nodes (logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge module settings on the access points between execution units side by side; in the example, one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping and write to multiple targets; the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle, and writing of XML.

    Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. So you can have common expressions across Oracle, SQL Server, Hive, etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c, expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions, you can decompose it into smaller snippets. Below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - it's used in TOTAL_TAX_AMOUNT and in UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expression depends on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Multi-selecting attributes is also a convenient way to see what's being used where; below I have selected TOTAL_TAX_AMOUNT in the target datastore and UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time, without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple, make the tool more powerful, and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA AquaLogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI 11g to building the flow-based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic - this is just a preview of the mapper in ODI 12c.

    Read the article

  • Extending NerdDinner: Adding Geolocated Flair

    - by Jon Galloway
    NerdDinner is a website with the audacious goal of “Organizing the world’s nerds and helping them eat in packs.” Because nerds aren’t likely to socialize with others unless a website tells them to do it. Scott Hanselman showed off a lot of the cool features we’ve added to NerdDinner lately during his popular talk at MIX10, Beyond File | New Company: From Cheesy Sample to Social Platform. Did you miss it? Go ahead and watch it, I’ll wait. One of the features we wanted to add was flair. You know about flair, right? It’s a way to let folks who like your site show it off in their own site. For example, here’s my StackOverflow flair. Great! So how could we add some of this flair stuff to NerdDinner?

    What do we want to show?

    If we’re going to encourage our users to give up a bit of their beautiful website to show off a bit of ours, we need to think about what they’ll want to show. For instance, my StackOverflow flair is all about me, not StackOverflow. So how will this apply to NerdDinner? Since NerdDinner is all about organizing local dinners, in order for the flair to be useful it needs to make sense for the person viewing the web page. If someone from Egypt visits my blog, they should see information about NerdDinners in Egypt. That’s geolocation - localizing site content based on where the browser’s sitting, and it makes sense for flair as well as entire websites. So we’ll set up a simple little callout that prompts them to host a dinner in their area. Hopefully our flair works and there is a dinner near your viewers, so they’ll see another view which lists upcoming dinners near them.

    The Geolocation Part

    Generally website geolocation is done by mapping the requestor’s IP address to a geographic area. It’s not an exact science, but I’ve always found it to be pretty accurate. There are (at least) three ways to handle it:

    1. You pay somebody like MaxMind for a database (with regular updates) that sits on your server, and you use their API to do lookups. I used this on a pretty big project a few years ago and it worked well.
    2. You use the HTML 5 Geolocation API or Google Gears or some other browser-based solution. I think those are cool (I use Google Gears a lot), but they’re both in flux right now and I don’t think either has a wide enough install base yet to rely on them. You might want to, but I’ve heard you do all kinds of crazy stuff, and sometimes it gets you in trouble. I don’t mean talk out of line, but we all laugh behind your back a bit. But, hey, it’s up to you. It’s your flair or whatever.
    3. There are some free webservices out there that will take an IP address and give you location information. Easy, and works for everyone. That’s what we’re doing.

    I looked at a few different services and settled on IPInfoDB. It’s free, has a great API, and even returns JSON, which is handy for Javascript use. The IP query is pretty simple. We hit a URL like this:

        http://ipinfodb.com/ip_query.php?ip=74.125.45.100&timezone=false

    … and we get an XML response back like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <Response>
            <Ip>74.125.45.100</Ip>
            <Status>OK</Status>
            <CountryCode>US</CountryCode>
            <CountryName>United States</CountryName>
            <RegionCode>06</RegionCode>
            <RegionName>California</RegionName>
            <City>Mountain View</City>
            <ZipPostalCode>94043</ZipPostalCode>
            <Latitude>37.4192</Latitude>
            <Longitude>-122.057</Longitude>
        </Response>

    So we’ll build some data transfer classes to hold the location information, like this:

        public class LocationInfo
        {
            public string Country { get; set; }
            public string RegionName { get; set; }
            public string City { get; set; }
            public string ZipPostalCode { get; set; }
            public LatLong Position { get; set; }
        }

        public class LatLong
        {
            public float Lat { get; set; }
            public float Long { get; set; }
        }

    And now hitting the service is pretty simple:

        public static LocationInfo HostIpToPlaceName(string ip)
        {
            string url = "http://ipinfodb.com/ip_query.php?ip={0}&timezone=false";
            url = String.Format(url, ip);
            var result = XDocument.Load(url);
            var location = (from x in result.Descendants("Response")
                            select new LocationInfo
                            {
                                City = (string)x.Element("City"),
                                RegionName = (string)x.Element("RegionName"),
                                Country = (string)x.Element("CountryName"),
                                ZipPostalCode = (string)x.Element("ZipPostalCode"),
                                Position = new LatLong
                                {
                                    Lat = (float)x.Element("Latitude"),
                                    Long = (float)x.Element("Longitude")
                                }
                            }).First();
            return location;
        }

    Getting The User’s IP

    Okay, but first we need the end user’s IP, and you’d think it would be as simple as reading the value from HttpContext:

        HttpContext.Current.Request.UserHostAddress

    But you’d be wrong. Sorry. UserHostAddress just wraps HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"], but that doesn’t get you the IP for users behind a proxy. That’s in another header, "HTTP_X_FORWARDED_FOR". So you can either hit a wrapper and then check a header, or just check two headers. I went for uniformity:

        string SourceIP = string.IsNullOrEmpty(Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
            ? Request.ServerVariables["REMOTE_ADDR"]
            : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

    We’re almost set to wrap this up, but first let’s talk about our views. Yes, views, because we’ll have two.

    Selecting the View

    We wanted to make it easy for people to include the flair in their sites, so we looked around at how other people were doing this. The StackOverflow folks have a pretty good flair system, which allows you to include the flair in your site as either an IFRAME reference or a Javascript include. We’ll do both. We have a ServicesController to handle use of the site information outside of NerdDinner.com, so this fits in pretty well there. We’ll be displaying the same information for both HTML and Javascript flair, so we can use one Flair controller action which will return a different view depending on the requested format. Here’s the general flow for our controller action:

    1. Get the user’s IP
    2. Translate it to a location
    3. Grab the top three upcoming dinners that are near that location
    4. Select the view based on the format (defaulted to "html")
    5. Return a FlairViewModel which contains the list of dinners and the location information

        public ActionResult Flair(string format = "html")
        {
            string SourceIP = string.IsNullOrEmpty(
                Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
                ? Request.ServerVariables["REMOTE_ADDR"]
                : Request.ServerVariables["HTTP_X_FORWARDED_FOR"];

            var location = GeolocationService.HostIpToPlaceName(SourceIP);
            var dinners = dinnerRepository.
                FindByLocation(location.Position.Lat, location.Position.Long).
                OrderByDescending(p => p.EventDate).Take(3);

            // Select the view we'll return.
            // Using a switch because we'll add in JSON and other formats later.
            string view;
            switch (format.ToLower())
            {
                case "javascript":
                    view = "JavascriptFlair";
                    break;
                default:
                    view = "Flair";
                    break;
            }

            return View(
                view,
                new FlairViewModel
                {
                    Dinners = dinners.ToList(),
                    LocationName = string.IsNullOrEmpty(location.City)
                        ? "you"
                        : String.Format("{0}, {1}", location.City, location.RegionName)
                }
            );
        }

    Note: I’m not in love with the logic here, but it seems like overkill to extract the switch statement away when we’ll probably just have two or three views. What do you think?

    The HTML View

    The HTML version of the view is pretty simple - the only thing of any real interest here is the use of an extension method to truncate strings that would cause the titles to wrap.

        public static string Truncate(this string s, int maxLength)
        {
            if (string.IsNullOrEmpty(s) || maxLength <= 0)
                return string.Empty;
            else if (s.Length > maxLength)
                return s.Substring(0, maxLength) + "...";
            else
                return s;
        }

    So here’s how the HTML view ends up looking:

        <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage<FlairViewModel>" %>
        <%@ Import Namespace="NerdDinner.Helpers" %>
        <%@ Import Namespace="NerdDinner.Models" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Nerd Dinner</title>
            <link href="/Content/Flair.css" rel="stylesheet" type="text/css" />
        </head>
        <body>
            <div id="nd-wrapper">
                <h2 id="nd-header">NerdDinner.com</h2>
                <div id="nd-outer">
                <% if (Model.Dinners.Count == 0) { %>
                    <div id="nd-bummer">
                        Looks like there's no Nerd Dinners near <%: Model.LocationName %> in the near future. Why not <a target="_blank" href="http://www.nerddinner.com/Dinners/Create">host one</a>?</div>
                <% } else { %>
                    <h3>Dinners Near You</h3>
                    <ul>
                    <% foreach (var item in Model.Dinners) { %>
                        <li>
                            <%: Html.ActionLink(String.Format("{0} with {1} on {2}", item.Title.Truncate(20), item.HostedBy, item.EventDate.ToShortDateString()), "Details", "Dinners", new { id = item.DinnerID }, new { target = "_blank" })%></li>
                    <% } %>
                    </ul>
                <% } %>
                    <div id="nd-footer">
                        More dinners and fun at <a target="_blank" href="http://nrddnr.com">http://nrddnr.com</a></div>
                </div>
            </div>
        </body>
        </html>

    You’d include this in a page using an IFRAME, like this:

        <IFRAME height=230 marginHeight=0 src="http://nerddinner.com/services/flair" frameBorder=0 width=160 marginWidth=0 scrolling=no></IFRAME>

    The Javascript view

    The Javascript flair is written so you can include it in a webpage with a simple script include, like this:

        <script type="text/javascript" src="http://nerddinner.com/services/flair?format=javascript"></script>

    The goal of this view is very similar to the HTML embed view, with a few exceptions:

    1. We’re creating a script element and adding it to the head of the document, which will then document.write out the content. Note that you have to consider if your users will actually have a <head> element in their documents, but for website flair use cases I think that’s a safe bet.
    2. Since the content is being added to the existing page rather than shown in an IFRAME, all links need to be absolute. That means we can’t use Html.ActionLink, since it generates relative routes.
    3. We need to escape everything since it’s being written out as strings.
    4. We need to set the content type to application/x-javascript. The easiest way to do that is to use the <%@ Page ContentType %> directive.

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<NerdDinner.Models.FlairViewModel>" ContentType="application/x-javascript" %>
        <%@ Import Namespace="NerdDinner.Helpers" %>
        <%@ Import Namespace="NerdDinner.Models" %>
        document.write('<script>var link = document.createElement(\"link\");link.href = \"http://nerddinner.com/content/Flair.css\";link.rel = \"stylesheet\";link.type = \"text/css\";var head = document.getElementsByTagName(\"head\")[0];head.appendChild(link);</script>');
        document.write('<div id=\"nd-wrapper\"><h2 id=\"nd-header\">NerdDinner.com</h2><div id=\"nd-outer\">');
        <% if (Model.Dinners.Count == 0) { %>
        document.write('<div id=\"nd-bummer\">Looks like there\'s no Nerd Dinners near <%:Model.LocationName %> in the near future. Why not <a target=\"_blank\" href=\"http://www.nerddinner.com/Dinners/Create\">host one</a>?</div>');
        <% } else { %>
        document.write('<h3>Dinners Near You</h3><ul>');
        <% foreach (var item in Model.Dinners) { %>
        document.write('<li><a target=\"_blank\" href=\"http://nrddnr.com/<%: item.DinnerID %>\"><%: item.Title.Truncate(20) %> with <%: item.HostedBy %> on <%: item.EventDate.ToShortDateString() %></a></li>');
        <% } %>
        document.write('</ul>');
        <% } %>
        document.write('<div id=\"nd-footer\"> More dinners and fun at <a target=\"_blank\" href=\"http://nrddnr.com\">http://nrddnr.com</a></div></div></div>');

    Getting IP’s for Testing

    There are a variety of online services that will translate a location to an IP, which were handy for testing these out. I found http://www.itouchmap.com/latlong.html to be most useful, but I’m open to suggestions if you know of something better.

    Next steps

    I think the next step here is to minimize load - you know, in case people start actually using this flair. There are two places to think about: the NerdDinner.com servers, and the services we’re using for geolocation. I usually think about caching as a first attack on server load, but that’s less helpful here since every user will have a different IP. Instead, I’d look at taking advantage of Asynchronous Controller Actions, a cool new feature in ASP.NET MVC 2. Async Actions let you call a potentially long-running webservice without tying up a thread on the server while waiting for the response. There’s some good info on that in the MSDN documentation, and Dino Esposito wrote a great article on Asynchronous ASP.NET Pages in the April 2010 issue of MSDN Magazine. But let’s think of the children, shall we? What about ipinfodb.com? Well, they don’t have specific daily limits, but they do throttle you if you put a lot of traffic on them. From their FAQ: "We do not have a specific daily limit but queries that are at a rate faster than 2 per second will be put in 'queue'. If you stay below 2 queries/second everything will be normal. If you go over the limit, you will still get an answer for all queries but they will be slowed down to about 1 per second. This should not affect most users but for high volume websites, you can either use our IP database on your server or we can whitelist your IP for 5$/month (simply use the donate form and leave a comment with your server IP). Good programming practices such as not querying our API for all page views (you can store the data in a cookie or a database) will also help not reaching the limit." So the first step there is to save the geolocation information in a time-limited cookie, which will allow us to look up the local dinners immediately without having to hit the geolocation service.
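    To close the loop on that last point, here is a rough sketch of what that cookie-caching step could look like (the cookie name, the pipe-delimited format, and the seven-day lifetime are my assumptions, not code from the post; it reuses the LocationInfo, LatLong and GeolocationService types above and needs System, System.Globalization and System.Web):

        public static LocationInfo GetLocationWithCookieCache(HttpContextBase context)
        {
            // Look for a previously saved location cookie first.
            HttpCookie cookie = context.Request.Cookies["nd-location"];
            if (cookie != null && !string.IsNullOrEmpty(cookie.Value))
            {
                string[] parts = cookie.Value.Split('|');
                if (parts.Length == 4)
                {
                    return new LocationInfo
                    {
                        City = HttpUtility.UrlDecode(parts[0]),
                        RegionName = HttpUtility.UrlDecode(parts[1]),
                        Position = new LatLong
                        {
                            Lat = float.Parse(parts[2], CultureInfo.InvariantCulture),
                            Long = float.Parse(parts[3], CultureInfo.InvariantCulture)
                        }
                    };
                }
            }

            // Cache miss: hit the geolocation service once, then save a time-limited cookie.
            string sourceIp = string.IsNullOrEmpty(context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"])
                ? context.Request.ServerVariables["REMOTE_ADDR"]
                : context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
            LocationInfo location = GeolocationService.HostIpToPlaceName(sourceIp);

            var freshCookie = new HttpCookie("nd-location")
            {
                Value = string.Join("|", new[] {
                    HttpUtility.UrlEncode(location.City),
                    HttpUtility.UrlEncode(location.RegionName),
                    location.Position.Lat.ToString(CultureInfo.InvariantCulture),
                    location.Position.Long.ToString(CultureInfo.InvariantCulture)
                }),
                Expires = DateTime.Now.AddDays(7) // time-limited: re-geolocate weekly
            };
            context.Response.Cookies.Add(freshCookie);
            return location;
        }

    With that in place, the Flair action would call GetLocationWithCookieCache instead of hitting HostIpToPlaceName directly, keeping repeat visitors well under ipinfodb.com's 2-queries-per-second throttle.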

    Read the article

  • MSDN Webcast: Introduction to ASP.NET Web Pages with Razor Syntax

    - by carlone
    Dear friends: Tomorrow I will have the pleasure of sharing another webcast with you. You are invited:   Event ID: 1032487341 Moderator(s): Carlos Augusto Lone Saenz. Languages: Spanish. Products: Microsoft ASP.NET and Microsoft SQL Server. Audience: Programmer/software developer. Come and learn in this session about the new simplified programming model, new syntax and web helpers that make up ASP.NET Web Pages with 'Razor'. This new way of building ASP.NET applications is aimed directly at developers who are new to the .NET platform and at developers trying to create web applications quickly. The session also covers SQL Compact, an embedded database that is xcopy-deployable. We will show new functionality that has been added recently, including a package manager that makes it easy to add third-party libraries to your applications. Register here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032487341&Culture=es-AR
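    As a small taste of the Razor syntax the session introduces, here is a minimal ASP.NET Web Pages sketch that queries an embedded SQL Compact database through the WebMatrix.Data helper (the database name "Demo" and the Products table are my own assumptions for illustration):

        @{
            // Database.Open looks for App_Data/Demo.sdf (SQL Compact)
            // or a connection string named "Demo".
            var db = Database.Open("Demo");
            var products = db.Query("SELECT Name, ListPrice FROM Products");
        }
        <ul>
        @foreach (var p in products) {
            <li>@p.Name - @p.ListPrice</li>
        }
        </ul>

    The @{ } block and inline @foreach are the "simplified programming model" in question: C# mixed directly into the markup, with no code-behind file.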

    Read the article

  • Design Book – Fourth (Last) Section (Physical Abstraction Optimization)

    - by drsql
    In this last section of the book, we will shift focus to the physical abstraction layer optimization. By this I mean the little bits and pieces of the design that are specifically there for performance and are actually part of the relational engine (read: the part of the SQL Server experience that ideally is hidden from you completely, but in 2010 reality it isn't quite so yet). This includes all of the data structures like databases, files, etc.; the optimizer; some coding; etc. In my mind, this...(read more)

    Read the article

  • Berkeley DB 11gR2 Released

    - by Lajos Sárecz
    The newest version of Oracle Berkeley DB, 11gR2, was released on Tuesday. Berkeley DB is the market-leading open-source embeddable database engine. Since Berkeley DB is available as a library, it can be linked directly into the application, which is what gives it its extremely high performance and zero administration requirement. New features in this version: SQLite support; JDBC and ODBC connectivity; Android platform support. I recently wrote about the new version of Oracle Lite, which also supports SQLite. The similarity is no coincidence: it was a deliberate goal of the developers that Oracle Database Lite Mobile Server can now be synchronized more easily with Oracle Berkeley DB mobile applications. The new version will be available for download from March 31, 2010.

    Read the article

  • SyncToBlog #12 Windows Azure and Cloud Links

    - by Eric Nelson
    Some more “syncing to paper” :) Steve Marx wrote a very interesting article about using Hosted Web Core in an Azure Worker Role. Hosted Web Core is a new feature in IIS 7 that enables developers to create applications that load the core IIS functionality. Wade Wegner is a new Technical Evangelist for the Windows Azure platform AppFabric. An example from Wade (and how I found him): Host WCF Services in IIS with Service Bus Endpoints. Google and VMware “get engaged” over cloud: http://googlecode.blogspot.com/2010/05/enabling-cloud-portability-with-google.html A new cloud comparison site - slick but limited coverage (it is not at Azure level, rather BPOS level): www.cloudhypermarket.com  The Rise of NoSQL Database (devx free registration required) Moe Khosravy talks about Codename "Dallas" to my colleague David G (14-minute video) New videos: Calculating the cost of Azure and Calculating the cost of SQL Azure Related Links: Previous SyncToBlog posts My delicious bookmarks

    Read the article

  • Open Source Survey: Oracle Products on Top

    - by trond-arne.undheim
    Oracle continues to work with the open source community to bring the most innovative and productive software to market (more). Oracle products received the most votes in several key categories of the 2010 Linux Journal Reader's Choice Awards. With over 12,000 technologists reporting, these products earned top spots:
    - Best Office Suite: OpenOffice.org
    - Best Single Office Program: OpenOffice.org Writer
    - Best Database: MySQL
    - Best Virtualization Solution: VirtualBox
    "As the leading open source technology and service provider, Oracle continues to work with the community stakeholders to rapidly innovate many open source products for use in fully tested production environments," says Edward Screven, Oracle's chief corporate architect. "Supporting open source is important to Oracle and our customers, and we continue to invest in it." According to a recent report by the Linux Foundation, Oracle is one of the top ten contributors to the Linux kernel. Oracle also contributes millions of lines of code to these important projects:
    - OpenJDK: 7,002,579
    - Eclipse: 1,800,000 (#3 in active committers)
    - MySQL: 5,073,113
    - NetBeans: 7,870,446
    - JSF: 701,980
    - Apache MyFaces Trinidad: 1,316,840
    - Hudson: 1,209,779
    - OpenOffice.org: 7,500,000

    Read the article

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspx

    SQL offers many different methods to produce the same results. There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set. Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient. For the queries below, I downloaded the test database from SQLSkills: http://www.sqlskills.com/sql-server-resources/sql-server-demos/. There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554. Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table, and the first and last name of the member from the dbo.member table.

        /*************/
        /* Sub Query */
        /*************/
        SELECT  a.[Member Number],
                m.lastname,
                m.firstname,
                a.[Number Of Payments],
                a.[Average Payment],
                a.[Total Paid]
        FROM    ( SELECT member_no 'Member Number',
                         AVG(payment_amt) 'Average Payment',
                         SUM(payment_amt) 'Total Paid',
                         COUNT(Payment_No) 'Number Of Payments'
                  FROM   dbo.payment
                  GROUP BY member_no
                  HAVING COUNT(Payment_No) > 1
                ) a
                JOIN dbo.member m ON a.[Member Number] = m.member_no

        /****************/
        /* Cross Apply  */
        /****************/
        SELECT  ca.[Member Number],
                m.lastname,
                m.firstname,
                ca.[Number Of Payments],
                ca.[Average Payment],
                ca.[Total Paid]
        FROM    dbo.member m
                CROSS APPLY ( SELECT member_no 'Member Number',
                                     AVG(payment_amt) 'Average Payment',
                                     SUM(payment_amt) 'Total Paid',
                                     COUNT(Payment_No) 'Number Of Payments'
                              FROM   dbo.payment
                              WHERE  member_no = m.member_no
                              GROUP BY member_no
                              HAVING COUNT(Payment_No) > 1
                            ) ca

        /********/
        /* CTEs */
        /********/
        ;
        WITH Payments
          AS ( SELECT member_no 'Member Number',
                      AVG(payment_amt) 'Average Payment',
                      SUM(payment_amt) 'Total Paid',
                      COUNT(Payment_No) 'Number Of Payments'
               FROM   dbo.payment
               GROUP BY member_no
               HAVING COUNT(Payment_No) > 1
             ),
        MemberInfo
          AS ( SELECT p.[Member Number],
                      m.lastname,
                      m.firstname,
                      p.[Number Of Payments],
                      p.[Average Payment],
                      p.[Total Paid]
               FROM   dbo.member m
                      JOIN Payments p ON m.member_no = p.[Member Number]
             )
        SELECT *
        FROM   MemberInfo

        /*************************/
        /* SELECT with Grouping  */
        /*************************/
        SELECT  p.member_no 'Member Number',
                m.lastname,
                m.firstname,
                COUNT(Payment_No) 'Number Of Payments',
                AVG(payment_amt) 'Average Payment',
                SUM(payment_amt) 'Total Paid'
        FROM    dbo.payment p
                JOIN dbo.member m ON m.member_no = p.member_no
        GROUP BY p.member_no,
                m.lastname,
                m.firstname
        HAVING  COUNT(Payment_No) > 1

    We can see what is going on in SQL’s brain by looking at the execution plan. The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes. SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan. We can settle this once and for all. Here is what SQL did with these queries: Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them. Everybody is right…..I guess we can all finally go to lunch together! But wait a second, I may not be a fighter, but I AM an instigator. Let’s see how a table variable stacks up. Here is the code I executed:

        /********************/
        /*  Table Variable  */
        /********************/
        DECLARE @AggregateTable TABLE
            (
              member_no INT,
              AveragePayment MONEY,
              TotalPaid MONEY,
              NumberOfPayments MONEY
            )
        INSERT  @AggregateTable
                SELECT member_no 'Member Number',
                       AVG(payment_amt) 'Average Payment',
                       SUM(payment_amt) 'Total Paid',
                       COUNT(Payment_No) 'Number Of Payments'
                FROM   dbo.payment
                GROUP BY member_no
                HAVING COUNT(Payment_No) > 1

        SELECT  at.member_no 'Member Number',
                m.lastname,
                m.firstname,
                at.NumberOfPayments 'Number Of Payments',
                at.AveragePayment 'Average Payment',
                at.TotalPaid 'Total Paid'
        FROM    @AggregateTable at
                JOIN dbo.member m ON m.member_no = at.member_no

    In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query. Here’s what I got: Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps. BUT, the combination of the 2 steps is only 22% of the batch. It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan. The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan. While this is true, the estimate does not come in to play until you read from the table variable. In this case, the table variable had 6,706 rows, but it still outperformed the other queries. People argue that table variables should only be used for hash or lookup tables. The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results. My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style. If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate. If you have a company standard, I strongly recommend you follow it. If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule. Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment. Download the database and try it!

    Read the article

  • How to create a new Team Project Collection in TFS2010:

    - by jehan
    TFS 2010 has introduced the notion of a Team Project Collection (TPC). I have already discussed TPCs in my earlier post; you can check it out here. In this post, I will demonstrate how to create a new Team Project Collection in TFS 2010. First, you have to open the TFS Administration Console (Start → All Programs → Microsoft Team Foundation Server 2010 → Team Foundation Server Administration Console), expand the Application Tier node in the TFS Administration Console and click on Team Project Collections. Here you will see the TPCs which already exist; I have only one TPC, named New Collection, and I'm going to create a new TPC called Demo Collection. To create a new Team Project Collection, you need to click on Create Collection; it will open the Create New Team Project Collection window. Under the Name tab, you have to enter the name you want to give your new TPC (I am naming it Demo Collection). You can also provide a description of your TPC in the Description tab, which is optional, and click Next. Here, you need to enter the name of the SQL Server instance where you want your new TPC data to reside. You have the option either to create a new database for this TPC or to use an existing empty database, and then click Next. In the next screen, you have to choose the SharePoint configuration. Here you have the option either to configure the SharePoint site for the TPC at the default location, to specify your existing SharePoint site, or to choose not to configure SharePoint for this collection; if you choose the last option, then you cannot configure SharePoint sites for the Team Projects under this Project Collection. You also have the flexibility to create a SharePoint site for this TPC later on; if you do, you will have to configure SharePoint sites for the existing team projects manually. The next screen handles the Reports configuration. Here you have the option either to configure the reports for the TPC at the default path, to specify the path to an existing Reports folder, or to choose not to configure reports for this collection; if you choose the last option, then you cannot create reports for the Team Projects under this Project Collection. Here also, you can enable reporting for this TPC later on. The next screen relates to the Lab Management configuration. Lab Management is a new feature in TFS 2010 which enables users to create and manage virtual test environments where you can deploy and test your application. There are no options available here, as I don't have Lab Management configured for my Team Foundation Server. The next screen is the Review Configuration window, which shows all the configuration settings you have specified so that you can review them before creating the Team Project Collection. If you want to make any changes to the configuration, you can go back to the previous windows and make them. After reviewing the configuration settings, you can click the Verify button, which will check whether your Team Project Collection is ready to be created, and will show the errors and warnings (if any) which could make your Team Project Collection creation fail. You can then choose to create the Team Project Collection if the verify step doesn't throw any warnings or errors.
    If the verify step throws any errors, it is strongly suggested that you first rectify the issues and only then go for TPC creation; this applies especially to warnings, as it is a common practice to overlook them. If you choose the Create TPC option, it will start the process of creating a Team Project Collection, and once it's completed you can check the configuration status of the different components. You can see in the screen below that all the components were configured successfully. In the next screen, you can find the location of the log file created for this Team Project Collection creation; this log file is really important in case of a creation failure, because it will help you find the root cause. Now you can see that the new Team Project Collection (Demo Collection) is available in the Team Foundation Collections tab and its status is Online. You can now try to connect to this Team Project Collection from Team Explorer. Choose the newly created Team Project Collection and click Connect. This Team Project Collection is empty because no Team Projects have been created yet. Now you can create new Team Projects and start working.

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    Sample files: PersonAddress.csv, SalesOrderDetail.tsv

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate. If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

        LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *), it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data. You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other, LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to adjust the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and Output file formats

    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a format that is usable for whatever process or application you may be using.

    Input formats (from any of these):
    - IISW3C: parses IIS log files in the W3C Extended Log File Format.
    - IIS: parses IIS log files in the Microsoft IIS Log File Format.
    - BIN: parses IIS log files in the Centralized Binary Log File Format.
    - IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
    - HTTPERR: parses HTTP error log files generated by Http.sys.
    - URLSCAN: parses log files generated by the URLScan IIS filter.
    - CSV: parses comma-separated values text files.
    - TSV: parses tab-separated and space-separated values text files.
    - XML: parses XML text files.
    - W3C: parses text files in the W3C Extended Log File Format.
    - NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
    - TEXTLINE: returns lines from generic text files.
    - TEXTWORD: returns words from generic text files.
    - EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
    - FS: returns information on files and directories.
    - REG: returns information on registry values.
    - ADS: returns information on Active Directory objects.
    - NETMON: parses network capture files created by NetMon.
    - ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
    - COM: provides an interface to Custom Input Format COM Plugins.

    Output formats (to any of these):
    - NAT: formats output records as readable tabulated columns.
    - CSV: formats output records as comma-separated values text.
    - TSV: formats output records as tab-separated or space-separated values text.
    - XML: formats output records as XML documents.
    - W3C: formats output records in the W3C Extended Log File Format.
    - TPL: formats output records following user-defined templates.
    - IIS: formats output records in the Microsoft IIS Log File Format.
    - SQL: uploads output records to a table in a SQL database.
    - SYSLOG: sends output records to a Syslog server.
    - DATAGRID: displays output records in a graphical user interface.
    - CHART: creates image files containing charts.

    So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
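    LogParser also ships with a COM API if you would rather drive it from code than from the command line. Here is a rough C# sketch (it assumes a COM reference to LogParser.dll, the "MS Utility" library; the interop type names below are my best recollection of that API, so verify them against your install):

        // Assumes a COM reference to LogParser.dll (MSUtil type library).
        using System;
        using MSUtil;

        class LogParserDemo
        {
            static void Main()
            {
                var query = new LogQueryClass();
                var csvInput = new COMCSVInputContextClass();

                // Same query as the datagrid example, driven from code.
                ILogRecordset rs = query.Execute(
                    "SELECT TOP 15 * FROM C:\\LP\\person_address.csv", csvInput);

                // Walk the recordset row by row.
                while (!rs.atEnd())
                {
                    ILogRecord record = rs.getRecord();
                    Console.WriteLine(record.getValue(0)); // first column of each row
                    rs.moveNext();
                }
                rs.close();
            }
        }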

    Read the article

  • LINQ for SQL Developers and DBA’s

    - by AtulThakor
    Firstly I’d just like to thank the guys who organise the SQL Server User Group (Martin/Tony/Chris) for giving me the opportunity to speak at the recent event. Sorry about the slides taking so long, but here they are along with some extra information. The demos were all done using LINQPad 4.0, which can be downloaded here: http://www.linqpad.net/ There are 2 versions: 3.5 and 4.0. With 3.5 you should be able to replicate the problem I showed, where a query using a parameter X characters long creates a different execution plan from a query using a parameter Y characters long; otherwise I would just use 4.0. The sample database used is AdventureWorksLT2008, which can be downloaded from here: http://msftdbprodsamples.codeplex.com/releases/view/37109 The scripts have been named so that you can select the appropriate way to run them, i.e. C# expression / C# statement; each script can be run individually by highlighting the query and clicking the play symbol or hitting F5. Scripts and slides: http://sqlblogcasts.com/blogs/atulthakor/An%20Introduction%20to%20LINQ.zip Please don't hesitate to send any questions via email/twitter; I’ll try my best to answer them! Thanks, Atul
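    If you want a starting point to paste into LINQPad once it is connected to AdventureWorksLT2008, a tiny "C# Expression" such as this is enough to start inspecting the generated SQL and execution plan (the pluralized Products set and its columns follow the standard SalesLT schema as LINQPad exposes it - treat the names as assumptions):

        // C# Expression in LINQPad: the five most expensive products.
        Products
            .OrderByDescending(p => p.ListPrice)
            .Take(5)
            .Select(p => new { p.Name, p.ListPrice })

    LINQPad's SQL tab then shows the exact parameterized query sent to the server, which is the easiest way to see parameter-length effects like the one in the talk.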

    Read the article

  • Oracle for PCI-DSS Security Webcast

    - by Alex Blyth
    Thanks to everyone who attended the Oracle for PCI-DSS security webcast today. It was good to see how the products we talked about last week can be used to address the PCI standard requirements. A big thanks to Chris Pickett for presenting a great session and running us through a very cool demo showing how the data is protected throughout its life. The replay of the session can be downloaded here. Slides can be downloaded here. Oracle for PCI-DSS Security Compliance - view more presentations from Oracle Australia. Next week we resume our regular schedule with Andrew Clarke taking us through Oracle Application Express (APEX) - one of the best kept secrets in the Oracle Database. Enroll for this session here (and now :) ). Till next week. Cheers, Alex

    Read the article

  • BFCM – Big Fat Check Mark

    - by onefloridacoder
    I was installing TFS on my local laptop last week and just got around to setting up my initial collection using the TFS Console tool, and "Bang!" I received a message that told me that my local database didn't have the full-text search option installed. I remember the option in a (long) list of options and don't remember fiddling with it. Whatever the reason, if you are installing TFS Basic on your box, make sure you have that little check ticked, or you won't get the big fat one pictured above. I installed SQL 2008 Developer Edition, which has worked well for what I've needed so far, and just needed to run the "Add Feature" option instead of the "Repair" option. HTH
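    If you want to check an instance before kicking off the TFS install, SQL Server reports exactly this through SERVERPROPERTY('IsFullTextInstalled'). Here is a small C# sketch (the connection string is an assumption for a local default instance):

        using System;
        using System.Data.SqlClient;

        class FullTextCheck
        {
            static void Main()
            {
                // Adjust the connection string for your own instance.
                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT SERVERPROPERTY('IsFullTextInstalled')", conn))
                {
                    conn.Open();
                    bool installed = Convert.ToInt32(cmd.ExecuteScalar()) == 1;
                    Console.WriteLine(installed
                        ? "Full-Text Search is installed."
                        : "Full-Text Search is NOT installed - run SQL Setup's Add Feature option.");
                }
            }
        }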

    Read the article

  • Does code-generation increase the code quality?

    - by platzhirsch
    Arguing for code generation, I am looking for reasons why, if at all, code generation increases code quality, or otherwise supports quality assurance. To clarify what I mean by code generation, I can talk only about a project of mine: We use XML files to describe different relationships, in fact our database schema. These XML files are used to generate our ORM framework and HTML forms which can be used to add, delete and modify entities. To my mind, it increases the quality, as human error is reduced. If something is implemented wrong, it is broken in the model. This is good, because the error might appear a lot faster, as more of the generated code is broken, too.
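    The XML-to-code workflow described above can be sketched roughly like this (the XML layout and all the names here are invented for illustration; a real generator would template the full ORM classes and HTML forms):

        using System;
        using System.Text;
        using System.Xml.Linq;

        class EntityGenerator
        {
            static void Main()
            {
                // A toy relationship description; a real project would load this from a schema file.
                var xml = XDocument.Parse(
                    @"<entity name='Customer'>
                        <property name='Id' type='int' />
                        <property name='Name' type='string' />
                      </entity>");

                var entity = xml.Root;
                var code = new StringBuilder();
                code.AppendLine("public class " + (string)entity.Attribute("name"));
                code.AppendLine("{");
                foreach (var prop in entity.Elements("property"))
                {
                    // Every property is emitted by the same template,
                    // so a hand-typing error cannot slip into just one of them.
                    code.AppendLine(string.Format("    public {0} {1} {{ get; set; }}",
                        (string)prop.Attribute("type"), (string)prop.Attribute("name")));
                }
                code.AppendLine("}");
                Console.WriteLine(code.ToString());
            }
        }

    The quality argument lives in that loop: a template bug breaks every generated member at once and is caught immediately, whereas a hand-written typo hides in one class.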

    Read the article

  • Crystal Reports: 5 Tests for Top Performance

    Your masterpiece report is now complete. It doesn't just meet your customer’s expectations, it blows them out of the water. All they want is a beautifully-summarized report that can be displayed in a myriad of ways. Then disaster strikes! You run the report for a month against the live database instead of the two days' worth of test data you used for development, and your report’s runtime goes from twenty seconds to two hours. Every Crystal Reports developer has experienced this situation, and it can be one of the most frustrating aspects of report design. Thankfully there are a variety of things that can be done to combat bad performance, any one of which can reap huge benefits...

    Read the article

  • Silverlight Cream Top Posted Authors June to November, 2010

    - by Dave Campbell
    It's just past the first of December, but I've been busy and it's now time to recognize devs that have a large number of posts in Silverlight Cream. Ground Rules:
    - I pick what posts are on the blog
    - Only posts that go in the database are included
    - The author has to appear in SC at least 4 of the 6 months considered
    - I averaged the monthly posts and am only showing Authors with an average greater than 1.
    Here are the Top Posted Authors at Silverlight Cream for June 1, 2010 through November 30, 2010: It is my intention to post a new list sometime shortly after the 1st of every month to recognize the top posted in the previous 6 months, so next up is January 1! Some other metrics for Silverlight Cream: At the time of this posting there are 7087 articles aggregated and searchable by partial Author, partial Title, keywords (in the synopsis), or partial URL. There are also 116 tags by which the articles can be searched. At the time of this posting there are 664 articles tagged wp7dev. Stay in the 'Light!

    Read the article

  • On the oracle-validated RPM ... do it faster!

    - by [email protected]
    We all know that setting up a Linux host for installing an RDBMS can be a really tedious task. Well, I wasn't aware that there exists a pre-cooked RPM that installs all the required packages for a fresh install. I'll try to test it soon; in the meantime, here you have the downloads (OEL4 and OEL5): http://oss.oracle.com/el5/oracle-validated/ http://oss.oracle.com/el4/oracle-validated/ More information can be found in the My Oracle Support Note "Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server (Doc ID 728346.1)". Also, check this website http://linux.oracle.com/pls/apex/f?p=102:1:2252801871920376 with a list of Oracle Linux Validated Configurations, including software, hardware, storage, and network components, along with documented best practices. Hope it helps! --L

    Read the article

  • Will TSQL become useless because of new ORMs? [closed]

    - by Saeed Neamati
    By introducing LINQ to SQL, I found myself and my .NET developer colleagues gradually moving from TSQL to C# to create queries on the database. Entity Framework made that shift almost permanent. It's now been nearly 2 years that I've used LINQ to SQL and LINQ to Entities, and I haven't used TSQL much at all. Yesterday, a colleague encountered a problem (he had to create a SP) and we went to help him. We all found that our TSQL knowledge had certainly diminished, and a simple SP that seemed trivial to us 2 or 3 years ago was a challenge to solve yesterday. Thus it came to my mind that while TSQL's life is attached to SQL Server - logically, as long as SQL Server lives and doesn't change its SQL language, TSQL will also live - practically it might die, and soon very few people might know it. Am I right? Does the existence of ORMs like Entity Framework threaten TSQL's life and usability?
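    For context, the kind of query that pulled us from TSQL into C# looks like this (a sketch assuming a hypothetical Entity Framework model named ShopEntities with an Orders set and a Customer navigation property):

        // Sketch only: ShopEntities, Orders, Total and Customer.Name are illustrative names.
        var cutoff = DateTime.Now.AddDays(-30);
        using (var db = new ShopEntities())
        {
            var recentBigOrders =
                from o in db.Orders
                where o.Total > 1000 && o.OrderDate > cutoff
                orderby o.OrderDate descending
                select new { CustomerName = o.Customer.Name, o.Total };

            foreach (var row in recentBigOrders)
                Console.WriteLine("{0}: {1:C}", row.CustomerName, row.Total);
        }

    The provider translates this into parameterized SQL at runtime, which is exactly why the hand-written TSQL muscle atrophies: the join, the filter and the projection are all authored in C#.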

    Read the article

  • Oracle Security Webcast - today

    - by Alex Blyth
    Hi All, here are the details for today's (12th May 2010) webcast on "Oracle Database Security", beginning at 1.30pm (Sydney, Australia time):
    - Webcast is at http://strtc.oracle.com (IE6, 7 & 8 supported only)
    - Conference ID for the webcast is 6690429
    - Conference Key: security
    - Enrollment is required. Please click here to enroll.
    - Please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call)
    Audio details:
    - NZ Toll Free - 0800 888 157, or
    - AU Toll Free - 1800 420 354 (or +61 2 8064 0613)
    - Meeting ID: 7914841
    - Meeting Passcode: 12052010
    Talk to you all at 1.30. Cheers, Alex

    Read the article

  • Script to establish SSH tunnel and then run another program that uses the tunnel

    - by Rob Hills
    I am running a GUI app (Gnucash) that connects to a remote Postgres database via a secure shell session. I can use the ssh -L command to tunnel a local port and then separately run Gnucash, and this works fine. What I'd like to do is use a single shell script that sets up the tunnel and then calls Gnucash. Is that possible? If so, how do I do it? Currently, I run commands like the following in 2 separate terminal windows:

        ssh -L 5433:127.0.0.1:19097 [email protected]

        gnucash postgres://gnucash@localhost:5433/gnucash_db

    If I simply put both lines in a shell script, the first line drops me into the remote shell and the second line doesn't execute until I exit the remote shell. TIA, Rob Hills

    Read the article

  • Tom Kyte Seminar in Budapest, April 19-20, 2010

    - by Fekete Zoltán
    You can still register for Tom Kyte's 2010 Budapest seminar through the links found here: April 19-20, 2010, a classroom seminar in Budapest. Topics: The top 10 - no, 11 - new features of Oracle Database 11g; All about binding; Materialized Views, Caching; Effective Indexing; Storage Techniques; Reorganizing objects - when and how. ASK TOM!

    Read the article

  • links for 2010-03-30

    - by Bob Rhubart
    Antony Reynolds: How is Oracle SOA Suite 11g better than a lawn tractor? SOA author Antony Reynolds describes the correct order for cold-starting an Oracle SOA Suite 11g installation. (tags: otn oracle soasuite soa) Steven Chan: Business Continuity for EBS Using Oracle 11g Physical Standby DB Steven Chan shares links to two new documents covering the use of Oracle Data Guard to create physical standby databases for Oracle E-Business Suite environments. (tags: oracle otn ebusinesssuite database) @soatoday: Enterprise Architecture IS Arbitrary "Maybe my opinion is biased because I come from a Software background," says Oracle ACE Director Jordan Braunstein, "but I often think Enterprise Architecture is an Art that is trying to apply a Science." (tags: oracle otn oracleace entarch enterprisearchitecture)

    Read the article
