Search Results

Search found 4921 results on 197 pages for 'conditional execution'.


  • How to retrieve the ID of a button clicked within a user control on an ASP.NET page?

    - by Shawn Gilligan
    I have a page that I am working on that I'm linking multiple user controls to. The user control contains 3 buttons, an attach, clear and view button. When a user clicks on any control on the page, the resulting information is "dumped" into the last visible control on the page. <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="Default" MasterPageFile="DefaultPage.master" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="ajaxToolkit" %> <%@ Register tagName="FileHandler" src="FileHandling.ascx" tagPrefix="ucFile" %> <asp:Content ID="Content1" ContentPlaceHolderID="Main" Runat="Server"> <asp:UpdatePanel ID="upPanel" UpdateMode="Conditional" runat="server"> <ContentTemplate> <table> <tr> <td> <ucFile:FileHandler ID="fFile1" runat="server" /> </td> <td> <ucFile:FileHandler ID="fFile2" runat="server" /> </td> </tr> </table> </ContentTemplate> </asp:UpdatePanel> </asp:Content> All file handling and processing is handled within the control, with an event when the upload to the file server is complete via a file name that was generated. When either button is clicked, the file name is always stored internal to the control in the last control's text box. Control code: <table style="width: 50%;"> <tr style="white-space: nowrap;"> <td style="width: 1%;"> <asp:Label runat="server" ID="lblFile" /> </td> <td style="width: 20%;"> <asp:TextBox ID="txtFile" CssClass="backColor" runat="server" OnTextChanged="FileInformationChanged" /> </td> <td style="width: 1%"> <%--<asp:Button runat="server" ID="btnUpload" CssClass="btn" Text="Attach" OnClick="UploadFile"/>--%> <input type="button" id="btnUpload" class="btn" tabindex="30" value="Attach" onclick="SetupUpload();" /> </td> <td style="width: 1%"> <%--<asp:Button runat="server" ID="btnClear" Text="Clear" CssClass="btn" OnClick="ClearTextValue"/>--%> <input type="button" id="btnClearFile" class="btn" value="Clear" onclick="document.getElementById('<%=txtFile.ClientID%>').value = '';document.getElementById('<%=hfFile.ClientID%>').value = '';" /> </td> <td style="width: 1%"> <a href="#here" onclick="ViewLink(document.getElementById('<%=hfFile.ClientID%>').value, '')">View</a> </td> <td style="width: 1%"> <asp:HiddenField ID="hfFile" runat="server" /> </td> </tr> </table> <script type="text/javascript"> var ItemPath = ""; function SetupUpload(File) { ItemPath = File; VersionAttach('<%=UploadPath%>', 'true'); } function UploadComplete(File) { document.getElementById('<%=txtFile.ClientID%>').value = File.substring(File.lastIndexOf("/") + 1); document.getElementById('<%=hfFile.ClientID%>').value = File; alert('<%=txtFile.Text %>'); alert('<%=ClientID %>') } function ViewLink(File, Alert) { if (File != "") { if (File.indexOf("../data/") != -1) { window.open(File, '_blank'); } else { window.open('../data/<%=UploadPath%>/' + File, '_blank'); } } else if (Alert == "") { alert('No file has been uploaded for this field.'); } } </script>
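    One way to untangle this, offered as a sketch rather than as code from the question: the <script> block inside the .ascx declares the same global SetupUpload / UploadComplete / ViewLink functions for every instance of the control, so whichever instance renders last likely wins, and every upload callback then writes into that instance's textbox and hidden field. Registering a uniquely named callback per instance from the code-behind avoids the collision. A minimal C# sketch follows, assuming the code-behind is FileHandling.ascx.cs and that the upload component can be told which JavaScript callback name to invoke (both are assumptions, not details from the question):

    // Hypothetical addition to FileHandling.ascx.cs (needs using System; using System.Web.UI;).
    // Emits one UploadComplete_<ClientID> function per control instance so the completed
    // file name lands in the textbox of the control that actually started the upload.
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        string callbackName = "UploadComplete_" + ClientID;
        string script =
            "function " + callbackName + "(file) {" +
            "  document.getElementById('" + txtFile.ClientID + "').value = file.substring(file.lastIndexOf('/') + 1);" +
            "  document.getElementById('" + hfFile.ClientID + "').value = file;" +
            "}";

        // Registered via the ScriptManager so the script survives UpdatePanel partial postbacks.
        ScriptManager.RegisterStartupScript(this, GetType(), callbackName, script, true);
    }

    The inline script block in the markup would then be removed, and the uploader pointed at the per-instance callback name instead of the shared global UploadComplete.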

    Read the article

  • MySQLi Insert prepared statement via PHP

    - by Jimmy
    Howdie do, This is my first time dealing with MySQLi inserts. I had always used mysql and directly ran the queries. Apparently, not as secure as MySQLi. Anywho, I'm attempting to pass two variables into the database. For some reason my prepared statement keeps erroring out. I'm not sure if it's a syntax error, but the insert just won't work. I've updated the code to make the variables easier to read Also, the error is specifically, Error preparing statement. I've updated the code, but it's not a pHP error. It's a MySQL error as the script runs but fails at the execution of: if($stmt = $mysqli - prepare("INSERT INTO subnets (id, subnet, mask, sectionId, description, vrfId, masterSubnetId, allowRequests, vlanId, showName, permissions, pingSubnet, isFolder, editDate) VALUES ('', ?, ?, '1', '', '0', '0', '0', '0', '0', '{"3":"1","2":"2"}', '0', '0', 'NULL')")) { I'm going to enable MySQL error checking. I honestly didn't know about that. <?php error_reporting(E_ALL); function InsertIPs($decimnal,$cidr) { error_reporting(E_ALL); $mysqli = new mysqli("localhost","jeremysd_ips","","jeremysd_ips"); if(mysqli_connect_errno()) { echo "Connection Failed: " . mysqli_connect_errno(); exit(); } if($stmt = $mysqli -> prepare("INSERT INTO subnets (id, subnet, mask,sectionId,description,vrfld,masterSubnetId,allowRequests,vlanId,showName,permissions,pingSubnet,isFolder,editDate) VALUES ('',?,?,'1','','0','0','0','0','0', '{'3':'1','2':'2'}', '0', '0', NULL)")) { $stmt-> bind_param('ii',$decimnal,$cidr); if($stmt-> execute()) { echo "Row added successfully\n"; } else { $stmt->error; } $stmt -> close; } } else { echo 'Error preparing statement'; } $mysqli -> close(); } $SUBNETS = array ("2915483648 | 18"); foreach($SUBNETS as $Ip) { list($TempIP,$TempMask) = explode(' | ',$Ip); echo InsertIPs($Tempip,$Tempmask); } ?> This function isn't supposed to return anything. It's just supposed to perform the Insert. Any help would be GREATLY appreciated. I'm just not sure what I'm missing here

    Read the article

  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality albeit with a much more complex API and corresponiding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality dynamically access COM objects passed to .NET often is pretty tedious and ugly. Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code: *** Create .NET COM InstanceloNet = CREATEOBJECT('DotNetCom.DotNetComPublisher') *** Create a Customer Object Instance (factory method) loCustomer = GetCustomer() loCustomer.Name = "Rick Strahl" loCustomer.Company = "West Wind Technologies" loCustomer.creditLimit = 9999999999.99 loCustomer.Address.StreetAddress = "32 Kaiea Place" loCustomer.Address.Phone = "808 579-8342" loCustomer.Address.Email = "[email protected]" *** Pass Fox Object and echo back values ? 
loNet.PassRecordObject(loObject) RETURN FUNCTION GetCustomer LOCAL loCustomer, loAddress loCustomer = CREATEOBJECT("EMPTY") ADDPROPERTY(loCustomer,"Name","") ADDPROPERTY(loCustomer,"Company","") ADDPROPERTY(loCUstomer,"CreditLimit",0.00) ADDPROPERTY(loCustomer,"Entered",DATETIME()) loAddress = CREATEOBJECT("Empty") ADDPROPERTY(loAddress,"StreetAddress","") ADDPROPERTY(loAddress,"Phone","") ADDPROPERTY(loAddress,"Email","") ADDPROPERTY(loCustomer,"Address",loAddress) RETURN loCustomer ENDFUNC Now prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection and the method code to do this would looks something like this in the .NET component: public string PassRecordObject(object FoxObject) { // *** using raw Reflection string Company = (string) FoxObject.GetType().InvokeMember( "Company", BindingFlags.GetProperty,null, FoxObject,null); // using the easier ComUtils wrappers string Name = (string) ComUtils.GetProperty(FoxObject,"Name"); // Getting Address object – then getting child properties object Address = ComUtils.GetProperty(FoxObject,"Address");    string Street = (string) ComUtils.GetProperty(FoxObject,"StreetAddress"); // using ComUtils 'Ex' functions you can use . Syntax     string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"AddressStreetAddress"); return Name + Environment.NewLine + Company + Environment.NewLine + StreetAddress + Environment.NewLine + " FOX"; } Note that the FoxObject is passed in as type object which has no specific type. Since the object doesn’t exist in .NET as a type signature the object is passed without any specific type information as plain non-descript object. To retrieve a property the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection Wrappers I mentioned earlier but even with those ComUtils calls the code is pretty ugly requiring passing the objects for each call and casting each element. Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural: public string PassRecordObjectDynamic(dynamic FoxObject) { // *** using raw Reflection string Company = FoxObject.Company; // *** using the easier ComUtils class string Name = FoxObject.Name; // *** using ComUtils 'ex' functions to use . Syntax string Address = FoxObject.Address.StreetAddress; return Name + Environment.NewLine + Company + Environment.NewLine + Address + Environment.NewLine + " FOX"; } As you can see the parameter is of type dynamic which as the name implies performs Reflection lookups and evaluation on the fly so all the Reflection code in the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away – dynamic types like var can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (ie. the left side of the expression is strongly typed) no explicit casts are required. Note that although you get to use plain object syntax in the code above you don’t get Intellisense in Visual Studio because the type is dynamic and thus has no hard type definition in .NET . 
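One caveat worth illustrating with a sketch of my own (not code from the article): because member lookup on a dynamic parameter happens at run time, a misspelled property name, or a FoxPro object that simply lacks the member, only fails when that line executes. Depending on how the binding fails, the error surfaces as a Microsoft.CSharp.RuntimeBinder.RuntimeBinderException or as a COMException, so defensive code catches both:

using System.Runtime.InteropServices;
using Microsoft.CSharp.RuntimeBinder;

public static class FoxInteropHelpers
{
    // Illustrative guard around dynamic COM member access; the helper name is made up.
    public static string GetCompanySafe(dynamic foxObject)
    {
        try
        {
            return (string)foxObject.Company;   // resolved at run time, no compile-time check
        }
        catch (RuntimeBinderException)
        {
            return null;   // the object exposes no Company member (or the name is misspelled)
        }
        catch (COMException)
        {
            return null;   // the underlying COM call itself failed
        }
    }
}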
The above example calls a .NET Component from VFP, but it also works the other way around. Another frequent scenario is an .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object returns a FoxPro Cursor Record as an object: DEFINE CLASS FoxData AS SESSION OlePublic cAppStartPath = "" FUNCTION INIT THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) ) SET PATH TO ( THIS.cAppStartpath ) ENDFUNC FUNCTION GetRecord(lnPk) LOCAL loCustomer SELECT * FROM tt_Cust WHERE pk = lnPk ; INTO CURSOR TCustomer IF _TALLY < 1 RETURN NULL ENDIF SCATTER NAME loCustomer MEMO RETURN loCustomer ENDFUNC ENDDEFINE If you call this from a .NET application you can now retrieve this data via COM Interop and cast the result as dynamic to simplify the data access of the dynamic FoxPro type that was created on the fly: int pk = 0; int.TryParse(Request.QueryString["id"],out pk); // Create Fox COM Object with Com Callable Wrapper FoxData foxData = new FoxData(); dynamic foxRecord = foxData.GetRecord(pk); string company = foxRecord.Company; DateTime entered = foxRecord.Entered; This code looks simple and natural as it should be – heck you could write code like this in days long gone by in scripting languages like ASP classic for example. Compared to the Reflection code that previously was necessary to run similar code this is much easier to write, understand and maintain. For COM interop and Visual FoxPro operation dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET. Regardless of whether you’re using COM for calling Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result returned) or whether FoxPro code is calling into a .NET COM component from a FoxPro desktop application. At one point or another FoxPro likely ends up passing complex dynamic data to .NET and for this the dynamic typing makes coding much cleaner and more readable without having to create custom Reflection wrappers. As a bonus the dynamic runtime that underlies the dynamic type is fairly efficient in terms of making Reflection calls especially if members are repeatedly accessed. © Rick Strahl, West Wind Technologies, 2005-2010Posted in COM  FoxPro  .NET  CSharp  

    Read the article

  • ASP.NET MVC 3 Hosting :: New Features in ASP.NET MVC 3

    - by mbridge
    Razor View Engine The Razor view engine is a new view engine option for ASP.NET MVC that supports the Razor templating syntax. The Razor syntax is a streamlined approach to HTML templating designed with the goal of being a code driven minimalist templating approach that builds on existing C#, VB.NET and HTML knowledge. The result of this approach is that Razor views are very lean and do not contain unnecessary constructs that get in the way of you and your code. ASP.NET MVC 3 Preview 1 only supports C# Razor views which use the .cshtml file extension. VB.NET support will be enabled in later releases of ASP.NET MVC 3. For more information and examples, see Introducing “Razor” – a new view engine for ASP.NET on Scott Guthrie’s blog. Dynamic View and ViewModel Properties A new dynamic View property is available in views, which provides access to the ViewData object using a simpler syntax. For example, imagine two items are added to the ViewData dictionary in the Index controller action using code like the following: public ActionResult Index() {          ViewData["Title"] = "The Title";          ViewData["Message"] = "Hello World!"; } Those properties can be accessed in the Index view using code like this: <h2>View.Title</h2> <p>View.Message</p> There is also a new dynamic ViewModel property in the Controller class that lets you add items to the ViewData dictionary using a simpler syntax. Using the previous controller example, the two values added to the ViewData dictionary can be rewritten using the following code: public ActionResult Index() {     ViewModel.Title = "The Title";     ViewModel.Message = "Hello World!"; } “Add View” Dialog Box Supports Multiple View Engines The Add View dialog box in Visual Studio includes extensibility hooks that allow it to support multiple view engines, as shown in the following figure: Service Location and Dependency Injection Support ASP.NET MVC 3 introduces improved support for applying Dependency Injection (DI) via Inversion of Control (IoC) containers. ASP.NET MVC 3 Preview 1 provides the following hooks for locating services and injecting dependencies: - Creating controller factories. - Creating controllers and setting dependencies. - Setting dependencies on view pages for both the Web Form view engine and the Razor view engine (for types that derive from ViewPage, ViewUserControl, ViewMasterPage, WebViewPage). - Setting dependencies on action filters. Using a Dependency Injection container is not required in order for ASP.NET MVC 3 to function properly. Global Filters ASP.NET MVC 3 allows you to register filters that apply globally to all controller action methods. Adding a filter to the global filters collection ensures that the filter runs for all controller requests. To register an action filter globally, you can make the following call in the Application_Start method in the Global.asax file: GlobalFilters.Filters.Add(new MyActionFilter()); The source of global action filters is abstracted by the new IFilterProvider interface, which can be registered manually or by using Dependency Injection. This allows you to provide your own source of action filters and choose at run time whether to apply a filter to an action in a particular request. New JsonValueProviderFactory Class The new JsonValueProviderFactory class allows action methods to receive JSON-encoded data and model-bind it to an action-method parameter. This is useful in scenarios such as client templating. 
Client templates enable you to format and display a single data item or set of data items by using a fragment of HTML. ASP.NET MVC 3 lets you connect client templates easily with an action method that both returns and receives JSON data. Support for .NET Framework 4 Validation Attributes and IvalidatableObject The ValidationAttribute class was improved in the .NET Framework 4 to enable richer support for validation. When you write a custom validation attribute, you can use a new IsValid overload that provides a ValidationContext instance. This instance provides information about the current validation context, such as what object is being validated. This change enables scenarios such as validating the current value based on another property of the model. The following example shows a sample custom attribute that ensures that the value of PropertyOne is always larger than the value of PropertyTwo: public class CompareValidationAttribute : ValidationAttribute {     protected override ValidationResult IsValid(object value,              ValidationContext validationContext) {         var model = validationContext.ObjectInstance as SomeModel;         if (model.PropertyOne > model.PropertyTwo) {            return ValidationResult.Success;         }         return new ValidationResult("PropertyOne must be larger than PropertyTwo");     } } Validation in ASP.NET MVC also supports the .NET Framework 4 IValidatableObject interface. This interface allows your model to perform model-level validation, as in the following example: public class SomeModel : IValidatableObject {     public int PropertyOne { get; set; }     public int PropertyTwo { get; set; }     public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) {         if (PropertyOne <= PropertyTwo) {            yield return new ValidationResult(                "PropertyOne must be larger than PropertyTwo");         }     } } New IClientValidatable Interface The new IClientValidatable interface allows the validation framework to discover at run time whether a validator has support for client validation. This interface is designed to be independent of the underlying implementation; therefore, where you implement the interface depends on the validation framework in use. For example, for the default data annotations-based validator, the interface would be applied on the validation attribute. Support for .NET Framework 4 Metadata Attributes ASP.NET MVC 3 now supports .NET Framework 4 metadata attributes such as DisplayAttribute. New IMetadataAware Interface The new IMetadataAware interface allows you to write attributes that simplify how you can contribute to the ModelMetadata creation process. Before this interface was available, you needed to write a custom metadata provider in order to have an attribute provide extra metadata. This interface is consumed by the AssociatedMetadataProvider class, so support for the IMetadataAware interface is automatically inherited by all classes that derive from that class (notably, the DataAnnotationsModelMetadataProvider class). New Action Result Types In ASP.NET MVC 3, the Controller class includes two new action result types and corresponding helper methods. HttpNotFoundResult Action The new HttpNotFoundResult action result is used to indicate that a resource requested by the current URL was not found. The status code is 404. This class derives from HttpStatusCodeResult. 
The Controller class includes an HttpNotFound method that returns an instance of this action result type, as shown in the following example: public ActionResult List(int id) {     if (id < 0) {         return HttpNotFound();     }     return View(); } HttpStatusCodeResult Action The new HttpStatusCodeResult action result is used to set the response status code and description. Permanent Redirect The HttpRedirectResult class has a new Boolean Permanent property that is used to indicate whether a permanent redirect should occur. A permanent redirect uses the HTTP 301 status code. Corresponding to this change, the Controller class now has several methods for performing permanent redirects: - RedirectPermanent - RedirectToRoutePermanent - RedirectToActionPermanent These methods return an instance of HttpRedirectResult with the Permanent property set to true. Breaking Changes The order of execution for exception filters has changed for exception filters that have the same Order value. In ASP.NET MVC 2 and earlier, exception filters on the controller with the same Order as those on an action method were executed before the exception filters on the action method. This would typically be the case when exception filters were applied without an explicitly specified Order value. In MVC 3, this order has been reversed in order to allow the most specific exception handler to execute first. As in earlier versions, if the Order property is explicitly specified, the filters are run in the specified order. Known Issues When you are editing a Razor view (CSHTML file), the Go To Controller menu item in Visual Studio will not be available, and there are no code snippets.
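As a quick illustration of the permanent-redirect helpers listed above, here is a small sketch of my own (the controller, action and route values are made up, not taken from the release notes):

using System.Web.Mvc;

public class LegacyController : Controller
{
    // An old URL that should now answer with a permanent (HTTP 301) redirect
    // instead of the default temporary (302) one.
    public ActionResult OldProductUrl(int id)
    {
        // Hypothetical target action and controller - substitute your own.
        return RedirectToActionPermanent("Details", "Products", new { id });
    }
}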

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
    Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. Its pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did ! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination –write the original matrix as the product of ints inverse matrix & its reduced row-echelon matrix à reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). 
an optimization of Gauss-Jacobi: 1.5 times faster, requires about a quarter of the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at the midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations !!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step.
Similar implementation to the False Position method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible fit within range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences.
For a given set of 3 data points , fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise à the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance –-> confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear , in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerative simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sums of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
Provide an approximation of the derivative within a O(h^2) accuracy There is also central difference & extended central difference which has a O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage Trapezoidal Rule for default 2nd formulas, Simpson 1/3 Rule for substituting 3 point formulas, Simpson 3/8 Rule for 4 point formulas. For 4 point formulas use Bodes Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpsons Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) à a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn à asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions à accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solution of differential equations are located at 2 different values of the independent variable x à more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication à convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine what are the solution eigenvalue & eigenvectors Similar Matrices - have same eigenvalues. 
Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but Computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (e.g. MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.
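The links above point at C++ samples; as a quick illustration of how small these iterative, conditional building blocks really are, here is a minimal bracketed bisection root-finder of my own in C# (a sketch for the Bisection method entry above, not taken from the linked code):

using System;

static class Bisection
{
    // Bracketed root finder: f(lo) and f(hi) must have opposite signs.
    public static double Solve(Func<double, double> f, double lo, double hi,
                               double tol = 1e-10, int maxIter = 200)
    {
        if (f(lo) * f(hi) > 0)
            throw new ArgumentException("Root is not bracketed: f(lo) and f(hi) share a sign.");

        for (int i = 0; i < maxIter && (hi - lo) > tol; i++)
        {
            double mid = 0.5 * (lo + hi);
            // Keep whichever half-interval still contains the sign change.
            if (f(lo) * f(mid) <= 0)
                hi = mid;
            else
                lo = mid;
        }
        return 0.5 * (lo + hi);
    }
}

// Example: Bisection.Solve(x => x * x * x - x - 2, 1.0, 2.0) returns approximately 1.5214.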

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read post of Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from pure fact that people asking questions didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take liberty of tagging this post with the same acronym. I often come across questions of type – ‘Hey, i am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid. 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); 5 The answer is usually(1), ‘Ok, let me check it out.. Ah yes – you have to put name of the DEFAULT constraint before the type of constraint: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); Why many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is, that correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all intricacies of the statement. However, I don’t know better way of defining syntax of the statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notatiion, called BNF for short sometimes. This is a notation that was invented around 50 years ago, and some say that even earlier, around 400 BC – would you believe? Originally it was used to define syntax of, rather ancient now, ALGOL programming language (in 1950’s, not in ancient India). If you look closer at the definition of the BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points: italic_text is a placeholder for your identifier <italic_text_in_angle_brackets> is a definition which is described further. [everything in square brackets] is optional {everything in curly brackets} is obligatory everything | separated | by | operator is an alternative ::= “assigns” definition to an identifier Yes, it looks like these six simple points give you the key to understand even the most complicated syntax definitions in Books Online. Books Online contain an article about syntax conventions – have you ever read it? Let’s have a look at fragment of the CREATE TABLE statement: 1 CREATE TABLE 2 [ database_name . [ schema_name ] . | schema_name . ] table_name 3 ( { <column_definition> | <computed_column_definition> 4 | <column_set_definition> } 5 [ <table_constraint> ] [ ,...n ] ) 6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup 7 | "default" } ] 8 [ { TEXTIMAGE_ON { filegroup | "default" } ] 9 [ FILESTREAM_ON { partition_scheme_name | filegroup 10 | "default" } ] 11 [ WITH ( <table_option> [ ,...n ] ) ] 12 [ ; ] Let’s look at line 2 of the above snippet: This line uses rules 3 and 5 from the list. So you know that you can create table which has specified one of the following. 
just name – table will be created in default user schema schema name and table name – table will be created in specified schema database name, schema name and table name – table will be created in specified database, in specified schema database name, .., table name – table will be created in specified database, in default schema of the user. Note that this single line of the notation describes each of the naming schemes in deterministic way. The ‘optionality’ of the schema_name element is nested within database_name.. section. You can use either database_name and optional schema name, or just schema name – this is specified by the pipe character ‘|’. The error that user gets with execution of the first script fragment in this post is as follows: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'DEFAULT'. Ok, let’s have a look how to find out the correct syntax. Line number 3 of the BNF fragment above contains reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to notion described further in the code. And indeed, the very next fragment of BNF contains syntax of the column definition. 1 <column_definition> ::= 2 column_name <data_type> 3 [ FILESTREAM ] 4 [ COLLATE collation_name ] 5 [ NULL | NOT NULL ] 6 [ 7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ] 8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ] 9 ] 10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ] 11 [ SPARSE ] Look at line 7 in the above fragment. It says, that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you to make the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online, you can find really interesting things hidden there. Technorati Tags: SQL Server,t-sql,BNF,syntax   (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let’s look back at some of the hottest BPM trends and those we’ll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communications and new devices to interact, the changes are reflected in BPM practices. As Social and Mobile have become an integral part of our personal and professional lives, we’ll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on Customer Experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Some of these trends that will continue to gain momentum in 2013: Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today’s business users. Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of “Bring Your Own Device,” and the device of choice is a tablet. Devices like smart phones and tablets not only provide mobility to workers and customers, but they also provide additional important information – the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution. Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services.
Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience. Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independent of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most of the business changes to policy or process can be done at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient. Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next because the sequence of steps can’t be predetermined. Another characteristic is that it requires much more collaboration than straight-through process. As simple processes become automated, and customers adopt more and more self-service, cases that reach the case workers are much more complex and need more investigation. Oracle BPM suite includes comprehensive adaptive case management capability to manage such unstructured and complex processes. Smart BPM or making your BPM intelligent has been the holy grail for BPM practitioners who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process based solutions. In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best practices applications built on Oracle BPM Suite that are fully production grade and ready to deploy. These are exiting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I'm a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I'll make sure it's sufficiently well described. But for this post – I'm going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I'm pulling these ones out is that they always produce the same result within their partitions (if you're applying them to the whole partition). Consider the following query: SELECT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), LAST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)), PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95th percentile, using both the continuous and discrete methods ('discrete' means it picks the closest one from the values available – 'continuous' means it will happily use something in between, similar to what you would do for a traditional median of four values). I'm sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I'm not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we're considering all the rows available. If we don't specify that, it assumes we only mean "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW", which means that LAST_VALUE ends up being the row we're looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST.
We really should be able to do something like: SELECT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) FROM Sales.SalesOrderHeader GROUP BY YEAR(OrderDate) ; But you can't. You get that age-old error: Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Msg 8120, Level 16, State 1, Line 5 Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. Hmm. You see, FIRST_VALUE isn't an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can't even surround it in a MAX. Then you get a different error, telling you that you can't use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT. SELECT DISTINCT YEAR(OrderDate), FIRST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), LAST_VALUE(TotalDue) OVER (PARTITION BY YEAR(OrderDate) ORDER BY OrderDate, SalesOrderID RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING), PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)), PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY TotalDue) OVER (PARTITION BY YEAR(OrderDate)) FROM Sales.SalesOrderHeader ; I'm sorry. It's just the way it goes. Hopefully it'll change in the future, but for now, it's what you'll have to do. If we look in the execution plan, we see that it's incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it's definitely convenient to use these functions, and in time, I'm sure we'll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy's T-SQL Tuesday this month.

    Read the article

  • Thursday Community Keynote: "By the Community, For the Community"

    - by Janice J. Heiss
    Sharat Chander, JavaOne Community Chairperson, began Thursday's Community Keynote. As part of the morning's theme of "By the Community, For the Community," Chander noted that 60% of the material at the 2012 JavaOne conference was presented by Java Community members. "So next year, when the call for papers starts, put in your submissions," he urged. From there, Gary Frost, Principal Member of Technical Staff, AMD, expanded upon Sunday's Strategy Keynote exploration of Project Sumatra, an OpenJDK project targeted at bringing Java to heterogeneous computing platforms (which combine the CPU and the parallel processor of the GPU into a single piece of silicon). Sumatra entails enhancing the JVM to make maximum use of these advanced platforms. Within this development space, AMD created the Aparapi API, which converts Java bytecode into OpenCL for execution on such GPU devices. The Aparapi API was open sourced in September 2011. Whether it was zooming in on a Mandelbrot set, "the game of life," or a swarm of 10,000 Dukes in a space-bound gravitational dance, Frost's demos, using an Aparapi/OpenCL implementation, produced stunningly faster display results. He indicated that the Java 9 timeframe is where they see Project Sumatra coming to ultimate fruition, employing the Lambdas of Java 8. Returning to the theme of the keynote, Donald Smith, Director, Java Product Management, Oracle, explored a mind map graphic demonstrating the importance of Community in terms of fostering innovation. "It's the sharing and mixing of culture, the diversity, and the rapid prototyping," he said. Within this topic, Smith brought up a panel of representatives from Cloudera, Eclipse, Eucalyptus, Perrone Robotics, and Twitter--ideal manifestations of community and innovation in the world of Java. Marten Mickos, CEO, Eucalyptus Systems, explored his company's open source cloud software platform, written in Java, and used by gaming companies, technology companies, media companies, and more. Chris Aniszczyk, Operations Engineering, Twitter, noted the importance of the JVM in terms of their multiple-language development environment. Mike Olson, CEO, Cloudera, described his company's Apache Hadoop-based software, support, and training. Mike Milinkovich, Executive Director, Eclipse Foundation, noted that they have about 270 tools projects at Eclipse, with 267 of them written in Java. Milinkovich added that Eclipse will even be going into space in 2013, as part of the control software on various experiments aboard the International Space Station. Lastly, Paul Perrone, CEO, Perrone Robotics, detailed his company's robotics and automation software platform built 100% on Java, including Java SE and Java ME--"on rat, to cat, to elephant-sized systems." Milinkovich noted that communities are by nature so good at innovation because of their very openness--"The more open you make your innovation process, the more ideas are challenged, and the more developers are focused on justifying their choices all the way through the process." From there, Georges Saab, VP Development Java SE OpenJDK, continued the topic of innovation and helping the Java Community to "Make the Future Java." Martijn Verburg, representing the London Java Community (winner of a Duke's Choice Award 2012 for their activity in OpenJDK and JCP), soon joined Saab onstage. Verburg detailed the LJC's "Adopt a JSR" program--"to get day-to-day developers more involved in the innovation that's happening around them."
From its London launching pad, the innovative program has spread to Brazil, Morocco, Latvia, India, and more.Other active participants in the program joined Verburg onstage--Ben Evans, London Java Community; James Gough, Stackthread; Bruno Souza, SOUJava; Richard Warburton, jClarity; and Cecelia Borg, Oracle--OpenJDK Onboarding. Together, the group explored the goals and tasks inherent in the Adopt a JSR program--from organizing hack days (testing prototype implementations), to managing mailing lists and forums, to triaging issues, to evangelism—all with the goal of fostering greater community/developer involvement, but equally importantly, building better open standards. “Come join us, and make your ecosystem better!" urged Verburg.Paul Perrone returned to profile the latest in his company's robotics work around Java--including the AARDBOTS family of smaller robotic vehicles, running the Perrone MAX platform on top of the Java JVM. Perrone took his "Rumbles" four-wheeled robot out for a spin onstage--a roaming, ARM-based security-bot vehicle, complete with IR, ultrasonic, and "cliff" sensors (the latter, for the raised stage at JavaOne). As an ultimate window into the future of robotics, Perrone displayed a "head-set" controller--a sensor directed at the forehead to monitor brainwaves, for the someday-implementation of brain-to-robot control.Then, just when it seemed this might be the end of the day's futuristic offerings, a mystery voice from offstage pronounced "I've got some toys"--proving to be guest-visitor James Gosling, there to explore his cutting-edge work with Liquid Robotics. While most think of robots as something with wheels or arms or lasers, Gosling explained, the Liquid Robotics vehicle is an entirely new and innovative ocean-going 'bot. Looking like a floating surfboard, with an attached set of underwater wings, the autonomous devices roam the oceans using only the energy of ocean waves to propel them, and a single actuated rudder to steer. "We have to accomplish all guidance just by wiggling the rudder," Gosling said. The devices offer applications from self-installing weather buoy, to pollution monitoring station, to marine mammal monitoring device, to climate change data gathering, to even ocean life genomic sampling. The early versions of the vehicle used C code on very tiny industrial micro controllers, where they had to "count the bytes one at a time."  But the latest generation vehicles, which just hit the water a week or so ago, employ an ARM processor running Linux and the ARM version of JDK 7. Gosling explained that vehicle communication from remote locations is achieved via the Iridium satellite network. But because of the costs of this communication path, the data must be sent in very small bursts--using SBD short burst data. "It costs $1/kb, so that rules everything in the software design,” said Gosling. “If you were trying to stream a Netflix video over this, it would cost a million dollars a movie. …We don't have a 'big data' problem," he quipped. There are currently about 150 Liquid Robotics vehicles out traversing the oceans. Gosling demonstrated real time satellite tracking of several vehicles currently at sea, noting that Java is actually particularly good at AI applications--due to the language having garbage collection, which facilitates complex data structures. 
To close-out his time onstage, Gosling of course participated in the ceremonial Java tee-shirt toss out to the audience…In parting, Chander passed the JavaOne Community Chairperson baton to Stephen Chin, Java Technology Evangelist, Oracle. Onstage in full motorcycle gear, Chin noted that he'll soon be touring Europe by motorcycle, meeting Java Community Members and streaming live via UStream--the ultimate manifestation of community and technology!  He also reminded attendees of the upcoming JavaOne Latin America 2012, São Paulo, Brazil (December 4-6, 2012), and stated that the CFP (call for papers) at the conference has been extended for one more week. "Remember, December is summer in Brazil!" Chin said.

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such feed exposition is of course blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example: public BlogController( IOrchardServices services, IBlogService blogService, IBlogSlugConstraint blogSlugConstraint, IFeedManager feedManager, RouteCollection routeCollection) { If you look a little further in that same controller, in the Item action, you'll see a call to the Register method of the feed manager: _feedManager.Register(blog); This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead; that is just an implementation detail: feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } }); feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } }); What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed ("rss") and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you're only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feed manager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we're interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here's the code for OnResultExecuting: model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write( _feedManager.GetRegisteredLinks(html))); This is another piece of code whose execution is deferred. It is saying that whenever it comes time to render the "head" zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here's what can be found in an Orchard blog's head section: <link rel="alternate" type="application/rss+xml" title="Tales from the Evil Empire" href="/rss?containerid=5" /> <link rel="alternate" type="application/rss+xml" title="Tales from the Evil Empire - Comments" href="/rss?commentedoncontainer=5" /> The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext.
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed or on something even more arbitrary such as the destination device (you could imagine for example giving shorter text only excerpts of posts on mobile devices, and full HTML on desktop). The key here is extensibility and dynamic competition and collaboration from unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture. The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds a RssResult, which is itself a thin ActionResult wrapper around an XDocument. Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority on the currently requested feed, it will select the one with the highest priority. The next thing it needs to do is to actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, the choice of which is again done through a Match method. ContainerFeedQuery for example chimes in when a “containerid” parameter is found in the context (see URL in the link tag above): public FeedQueryMatch Match(FeedContext context) { var containerIdValue = context.ValueProvider.GetValue("containerid"); if (containerIdValue == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds elements for each of them. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed. The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or more accurately with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick those items that it knows how to treat and transform them into the format requested. This enables the construction of heterogeneous feeds that expose content items of various types into a single feed. That will be extremely important when you’ll want to expose a single feed for all your site. So here are feeds in Orchard in a nutshell. The main point here is that there is a fair number of components involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
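As a parting illustration of that "declare it and you're done" point, here is a minimal sketch of how a hypothetical product-catalog module could expose its own feed, using the same generic Register overload shown earlier (the feed title and the productCatalog content item are made up for the example):

    _feedManager.Register(
        "Latest products",    // title of the feed
        "rss",                // feed format
        new RouteValueDictionary { { "containerid", productCatalog.Id } });

Everything else – the link tag in the head zone, the routing to the generic FeedController, and the selection of builder, query and item providers – happens through the machinery described above.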

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice. I think the series from Pinal is a good one for anyone planning to start on the Big Data journey from the basics. In my daily customer interactions this buzz of "Big Data" always comes up, and I generally react by saying – "Sir, do you really have a 'Big Data' problem or do you have a big Data problem?" Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Wow, don't discount me as someone who opposes "Big Data" – I am as big a supporter as I am a critic of the abuse of this term. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift of how I am going to question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like a blank whiteboard with no idea – "why Big Data". They want to jump on the bandwagon of technology and they want to decipher insights from their unexplored data, a.k.a. unstructured data, together with structured data. So what are the industry scenarios that come to mind? Here are some of them:
Financials – Fraud detection: Banks and Credit cards are monitoring your spending habits on a real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others where they deal with end customers who consume their products and services. Customer Sentiment Analysis: responding to negative brand perception on social media or amplifying the positive perception. Sales and Marketing Campaign: understand the impact and get closer to customer delight. Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment.
Medical – Reduce Re-admissions: how to build proactive follow-up engagements with patients. Patient Monitoring: how to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: disease identification and risk stratification is a very crucial business function for medical. Claims fraud detection: there is no precise dollar figure one can put here, but this is a big thing for the medical field.
Retail – Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: every sensor and RFID data point can be tracked for warehouse space optimization. Location based marketing: based on where a check-in happens, retail stores can optimize their marketing.
Telecom – Price optimization and Plans, Finding Customer churn, Customer loyalty programs. Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis, Customer Behavior Analysis.
Insurance – Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management, Agents Analysis, Customer Value Management.
This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list. Where to start?
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need the way you want it and in a timely manner? Can you get in and analyze the data you need? How quickly is IT to respond to your BI Requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors to meet your numbers or is it always “rear-view mirror?” How are you measuring: The Brand Customer Sentiment Your Competition Your Pricing Your performance Supply Chain Efficiencies Predictive product / service positioning What are your key challenges of driving collaboration across your global business?  What the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: Garbage-in is Garbage-out. Hold good for all reporting / analytics requirements Big Data POCs? A number of customers get into the realm of setting a small team to work on Big Data – well it is a great start from an understanding point of view, but I tend to ask a number of other questions to such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data’s efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing Advanced Analytics? How is Social Media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) for a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data being given out in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer Geo-location data? How accessible are complex analytic routines to your user base? 
What is the level of involvement with outside vendors and third parties in regard to the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze and more. Sign off: Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder videos (2:17 to 6:06) about the third world myths. Don't get overwhelmed by the Big Data buzzword – the destination your data points you to is what is important. In this blog post, we did not particularly look at any Big Data technologies. This is a set of questions one needs to keep in mind as they embark on their Big Data journey. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense?  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Use Extension method to write cleaner code

    - by Fredrik N
    This blog post will show you, step by step, how to refactor some code to be more readable (at least that's what I think). Patrik Löwnedahl gave me some of the ideas when we were talking about making code much cleaner. The following is a simple application that will have a list of movies (Normal and Transfer). The task of the application is to calculate the total sum for each kind of movie and also display the price of each movie. class Program { enum MovieType { Normal, Transfer } static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } else if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } } private static IEnumerable<MovieType> GetMovies() { return new List<MovieType>() { MovieType.Normal, MovieType.Transfer, MovieType.Normal }; } } In the code above I'm using an enum, a good way to add types (isn't it ;)). I also use one foreach loop to calculate the price; the loop has a condition statement to check what kind of movie is added to the list of movies. I want to reuse the foreach only to increase performance and let it do two things (isn't that smart of me?! ;)). First of all I can admit, I'm not a big fan of enums. Enums often result in ugly condition statements and can be hard to maintain (if a new type is added we need to check all the code in our app to see if we use the enum somewhere else). I don't often care about pre-optimizations when it comes to writing code (of course I have performance in mind). I rather prefer to use two foreach loops and let each do one thing instead of two. So based on what I don't like, and Martin Fowler's Refactoring catalog, I'm going to refactor this code into what I will call more elegant and cleaner code. First of all I'm going to use Split Loop to make sure each foreach will do one thing, not two; it will result in two foreach loops (don't care about performance here – if that results in bad performance, you can refactor later, but computers are so fast today that iterating through a list is not often very time consuming). Note: the foreach actually does four things; I will come to that later.
var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } } foreach (var movie in movies) { if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } To remove the condition statement we can use the Where extension method, which is added to IEnumerable<T> and is located in the System.Linq namespace: foreach (var movie in movies.Where( m => m == MovieType.Normal)) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } foreach (var movie in movies.Where( m => m == MovieType.Transfer)) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } The above code will still do two things: calculate the total price, and display the price of the movie. I will not take care of that at the moment; instead I will focus on the enum and try to remove it. One way to remove an enum is by using Replace Conditional with Polymorphism. So I will create two classes: one base class called Movie, and one called MovieTransfer.
The Movie class will have a property called Price, so the Movie will now hold the price:   public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } The following code has no enum and will use the new Movie classes instead: class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies.Where( m => m is Movie)) { totalPriceOfNormalMovie += movie.Price; Console.WriteLine(movie.Price); } foreach (var movie in movies.Where( m => m is MovieTransfer)) { totalPriceOfTransferMovie += movie.Price; Console.WriteLine(movie.Price); } } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } If you take a look at the foreach now, you can see it still actually does two things: calculate the price and display the price.
We can do some more refactoring here by using the Sum extension method to calculate the total price of the movies:   static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = movies.Where(m => m is Movie) .Sum(m => m.Price); int totalPriceOfTransferMovie = movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); foreach (var movie in movies.Where( m => m is Movie)) Console.WriteLine(movie.Price); foreach (var movie in movies.Where( m => m is MovieTransfer)) Console.WriteLine(movie.Price); } Now that the Movie object holds the price, there is no need to use two separate foreach loops to display the price of the movies in the list, so we can use only one instead: foreach (var movie in movies) Console.WriteLine(movie.Price); If we want to increase the Maintainability Index we can use Extract Method to move the Sum of the prices into two separate methods.
The name of the method will explain what we are doing: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); foreach (var movie in movies) Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } Now to the last thing: I love the ForEach method of List<T>, but IEnumerable<T> doesn't have it, so I created my own ForEach extension. Here is the code of the ForEach extension method: public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I will now replace the foreach by using this ForEach method: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(m => Console.WriteLine(m.Price)); } The ForEach on the movies will now display the price of the movie, but maybe we want to display the name of the movie etc, so we can use Extract Method by moving the lambda expression into a method instead, and let the method explain what we are displaying: movies.ForEach(DisplayMovieInfo); private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); }
Now the refactoring is done! Here is the complete code:   class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(DisplayMovieInfo); } private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m is Movie) .Sum(m => m.Price); } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I think the new code is much cleaner than the first version, and I love the ForEach extension on IEnumerable<T> – I can use it for different kinds of things, for example: movies.Where(m => m is Movie) .ForEach(DoSomething); By using the Where and ForEach extension methods, some if statements can be removed, which makes the code much cleaner. But beauty is in the eye of the beholder.
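To make that last claim a bit more concrete, here is a small before/after sketch (DoSomething stands in for whatever action you want to apply to the selected movies):

    // before: a loop with an embedded condition
    foreach (var movie in movies)
    {
        if (movie is MovieTransfer)
            DoSomething(movie);
    }

    // after: the condition moves into Where, the action into ForEach
    movies.Where(m => m is MovieTransfer)
          .ForEach(DoSomething);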
What would you have done differently? What do you think would make the first example in the blog post cleaner than my result? Comments are welcome! If you want to know when I publish a new blog post, you can follow me on Twitter: http://www.twitter.com/fredrikn

    Read the article

  • EM12c Release 4: New EMCLI Verbs

    - by SubinDaniVarughese
    Here are the new EM CLI verbs in Enterprise Manager 12c Release 4 (12.1.0.4). This helps you in writing new scripts or enhancing your existing scripts for further automation. Basic Administration Verbs invoke_ws - Invoke EM web service.ADM Verbs associate_target_to_adm - Associate a target to an application data model. export_adm - Export Application Data Model to a specified .xml file. import_adm - Import Application Data Model from a specified .xml file. list_adms - List the names, target names and application suites of existing Application Data Models verify_adm - Submit an application data model verify job for the target specified.Agent Update Verbs get_agent_update_status -  Show Agent Update Results get_not_updatable_agents - Shows Not Updatable Agents get_updatable_agents - Show Updatable Agents update_agents - Performs Agent Update Prereqs and submits Agent Update JobBI Publisher Reports Verbs grant_bipublisher_roles - Grants access to the BI Publisher catalog and features. revoke_bipublisher_roles - Revokes access to the BI Publisher catalog and features.Blackout Verbs create_rbk - Create a Retro-active blackout.CFW Verbs cancel_cloud_service_requests -  To cancel cloud service requests delete_cloud_service_instances -  To delete cloud service instances delete_cloud_user_objects - To delete cloud user objects. get_cloud_service_instances - To get information about cloud service instances get_cloud_service_requests - To get information about cloud requests get_cloud_user_objects - To get information about cloud user objects.Chargeback Verbs add_chargeback_entity - Adds the given entity to Chargeback. assign_charge_plan - Assign a plan to a chargeback entity. assign_cost_center - Assign a cost center to a chargeback entity. create_charge_entity_type - Create  charge entity type export_charge_plans - Exports charge plans metadata to file export_custom_charge_items -  Exports user defined charge items to a file import_charge_plans - Imports charge plans metadata from given file import_custom_charge_items -  Imports user defined charge items metadata from given file list_charge_plans - Gives a list of charge plans in Chargeback. list_chargeback_entities - Gives a list of all the entities in Chargeback list_chargeback_entity_types - Gives a list of all the entity types that are supported in Chargeback list_cost_centers - Lists the cost centers in Chargeback. remove_chargeback_entity - Removes the given entity from Chargeback. unassign_charge_plan - Un-assign the plan associated to a chargeback entity. unassign_cost_center - Un-assign the cost center associated to a chargeback entity.Configuration/Association History disable_config_history - Disable configuration history computation for a target type. enable_config_history - Enable configuration history computation for a target type. set_config_history_retention_period - Sets the amount of time for which Configuration History is retained.ConfigurationCompare config_compare - Submits the configuration comparison job get_config_templates - Gets all the comparison templates from the repositoryCompliance Verbs fix_compliance_state -  Fix compliance state by removing references in deleted targets.Credential Verbs update_credential_setData Subset Verbs export_subset_definition - Exports specified subset definition as XML file at specified directory path. generate_subset - Generate subset using specified subset definition and target database. import_subset_definition - Import a subset definition from specified XML file. 
import_subset_dump - Imports dump file into specified target database. list_subset_definitions - Get the list of subset definition, adm and target nameDelete pluggable Database Job Verbs delete_pluggable_database - Delete a pluggable databaseDeployment Procedure Verbs get_runtime_data - Get the runtime data of an executionDiscover and Push to Agents Verbs generate_discovery_input - Generate Discovery Input file for discovering Auto-Discovered Domains refresh_fa - Refresh Fusion Instance run_fa_diagnostics - Run Fusion Applications DiagnosticsFusion Middleware Provisioning Verbs create_fmw_domain_profile - Create a Fusion Middleware Provisioning Profile from a WebLogic Domain create_fmw_home_profile - Create a Fusion Middleware Provisioning Profile from an Oracle Home create_inst_media_profile - Create a Fusion Middleware Provisioning Profile from Installation MediaGold Agent Image Verbs create_gold_agent_image - Creates a gold agent image. decouple_gold_agent_image - Decouples the agent from gold agent image. delete_gold_agent_image - Deletes a gold agent image. get_gold_agent_image_activity_status -  Gets gold agent image activity status. get_gold_agent_image_details - Get the gold agent image details. list_agents_on_gold_image - Lists agents on a gold agent image. list_gold_agent_image_activities - Lists gold agent image activities. list_gold_agent_image_series - Lists gold agent image series. list_gold_agent_images - Lists the available gold agent images. promote_gold_agent_image - Promotes a gold agent image. stage_gold_agent_image - Stages a gold agent image.Incident Rules Verbs add_target_to_rule_set - Add a target to an enterprise rule set. delete_incident_record - Delete one or more open incidents remove_target_from_rule_set - Remove a target from an enterprise rule set. Job Verbs export_jobs - Export job details in to an xml file import_jobs - Import job definitions from an xml file job_input_file - Supply details for a job verb in a property file resume_job - Resume a job or set of jobs suspend_job - Suspend a job or set of jobs Oracle Database as Service Verbs config_db_service_target - Configure DB Service target for OPCPrivilege Delegation Settings Verbs clear_default_privilege_delegation_setting - Clears the default privilege delegation setting for a given list of platforms set_default_privilege_delegation_setting - Sets the default privilege delegation setting for a given list of platforms test_privilege_delegation_setting - Tests a Privilege Delegation Setting on a hostSSA Verbs cleanup_dbaas_requests - Submit cleanup request for failed request create_dbaas_quota - Create Database Quota for a SSA User Role create_service_template - Create a Service Template delete_dbaas_quota - Delete the Database Quota setup for a SSA User Role delete_service_template - Delete a given service template get_dbaas_quota - List the Database Quota setup for all SSA User Roles get_dbaas_request_settings - List the Database Request Settings get_service_template_detail - Get details of a given service template get_service_templates -  Get the list of available service templates rename_service_template -  Rename a given service template update_dbaas_quota - Update the Database Quota for a SSA User Role update_dbaas_request_settings - Update the Database Request Settings update_service_template -  Update a given service template. 
SavedConfigurations get_saved_configs  - Gets the saved configurations from the repository Server Generated Alert Metric Verbs validate_server_generated_alerts  - Server Generated Alert Metric VerbServices Verbs edit_sl_rule - Edit the service level rule for the specified serviceSiebel Verbs list_siebel_enterprises -  List Siebel enterprises currently monitored in EM list_siebel_servers -  List Siebel servers under a specified siebel enterprise update_siebel- Update a Siebel enterprise or its underlying serversSiteGuard Verbs add_siteguard_aux_hosts -  Associate new auxiliary hosts to the system configure_siteguard_lag -  Configure apply lag and transport lag limit for databases delete_siteguard_aux_host -  Delete auxiliary host associated with a site delete_siteguard_lag -  Erases apply lag or transport lag limit for databases get_siteguard_aux_hosts -  Get all auxiliary hosts associated with a site get_siteguard_health_checks -  Shows schedule of health checks get_siteguard_lag -  Shows apply lag or transport lag limit for databases schedule_siteguard_health_checks -  Schedule health checks for an operation plan stop_siteguard_health_checks -  Stops all future health check execution of an operation plan update_siteguard_lag -  Updates apply lag and transport lag limit for databasesSoftware Library Verbs stage_swlib_entity_files -  Stage files of an entity from Software Library to a host target.Target Data Verbs create_assoc - Creates target associations delete_assoc - Deletes target associations list_allowed_pairs - Lists allowed association types for specified source and destination list_assoc - Lists associations between source and destination targets manage_agent_partnership - Manages partnership between agents. Used for explicitly assigning agent partnershipsTrace Reports generate_ui_trace_report  -  Generate and download UI Page performance report (to identify slow rendering pages)VI EMCLI Verbs add_virtual_platform - Add Oracle Virtual PLatform(s). modify_virtual_platform - Modify Oracle Virtual Platform.To get more details about each verb, execute$ emcli help <verb_name>Example: $ emcli help list_assocNew resources in list verbThese are the new resources in EM CLI list verb :Certificates  WLSCertificateDetails Credential Resource Group  PreferredCredentialsDefaultSystemScope - Preferred credentials (System Scope)   PreferredCredentialsSystemScope - Target preferred credentialPrivilege Delegation Settings  TargetPrivilegeDelegationSettingDetails  - List privilege delegation setting details on a host  TargetPrivilegeDelegationSetting - List privilege delegation settings on a host   PrivilegeDelegationSettings  - Lists all Privilege Delegation Settings   PrivilegeDelegationSettingDetails - Lists details of  Privilege Delegation Settings To get more details about each resource, execute$ emcli list -resource="<resource_name>" -helpExample: $ emcli list -resource="PrivilegeDelegationSettings" -helpDeprecated Verbs:Agent Administration Verbs resecure_agent - Resecure an agentTo get the complete list of verbs, execute:$ emcli help Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager 12c Mobile app

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast, Microsoft's Moles framework can 'mock' virtually anything, in that it uses runtime instrumentation to inject callbacks in the method MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc… Some useful collected resources (links to website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles A Gallio extension for Moles Originally, Moles is a part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I did the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like this: [Test, Moled] public void SomeTest() {     ... What you can do with it Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute at assembly level. For example: [assembly: MoledType(typeof(System.IO.File))] Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and instructions on how to create the required moles assemblies), please refer to the reference material above. Detouring the .NET framework Imagine that you want to test a method similar to the one below, which internally calls some framework method: public void ReadFileContent(string fileName) { this.FileContent = System.IO.File.ReadAllText(fileName); } Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so: [Test, Moled] [Description("This 'mocks' the System.IO.File class with a custom delegate.")] public void ReadFileContentWithMoles() { // arrange ('mock' the FileSystem with a delegate) System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName"); // act var testTarget = new TestTarget.TestTarget(); testTarget.ReadFileContent(FileName); // assert Assert.AreEqual(FileContent, testTarget.FileContent); } Detouring static methods and/or classes A static method like the one below… public static string StaticMethod(int x, int y) { return string.Format("{0}{1}", x, y); } … can be 'mocked' with the following: [Test, Moled] public void StaticMethodWithMoles() { MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups"); var result = StaticClass.StaticMethod(1, 2); Assert.AreEqual("uups", result); } Detouring constructors You can do this delegate thing even with a class' constructor.
The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c’tor… public class ClassWithCtor {     public int Value { get; private set; }       public ClassWithCtor(int someValue)     {         this.Value = someValue;     } } … you would do the following: [Test, Moled] public void ConstructorTestWithMoles() {     MClassWithCtor.ConstructorInt32 =            ((@class, @value) => new MClassWithCtor(@class) {ValueGet = () => 99});       var classWithCtor = new ClassWithCtor(3);       Assert.AreEqual(99, classWithCtor.Value); } Detouring abstract base classes You can also use this approach to ‘mock’ abstract base classes of a class that you call in your test. Assume that you have something like this: public abstract class AbstractBaseClass {     public virtual string SaySomething()     {         return "Hello from base.";     } }      public class ChildClass : AbstractBaseClass {     public override string SaySomething()     {         return string.Format(             "Hello from child. Base says: '{0}'",             base.SaySomething());     } } Then you would set up the child’s underlying base class like this: [Test, Moled] public void AbstractBaseClassTestWithMoles() {     ChildClass child = new ChildClass();     new MAbstractBaseClass(child)         {                 SaySomething = () => "Leave me alone!"         }         .InstanceBehavior = MoleBehaviors.Fallthrough;       var hello = child.SaySomething();       Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello); } Setting the moles behavior to a value of MoleBehaviors.Fallthrough causes the ‘original’ method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass’ override of the SaySomething() method to be called. There are some more possible scenarios where the Moles framework could be of much help (e.g. it’s also possible to detour interface implementations like IEnumerable<T> and such…). One other possibility that comes to my mind (because I’m currently dealing with that) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database, which otherwise would not be possible… Usage Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this: moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs> So, the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for the test (it is also included in the sample download – moled_cmd.xml). - This is just a quick-and-dirty ‘solution’. Maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there’s sufficient demand for it. 
As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place)… A typical Gallio/Moles command line (as generated by the mentioned R#-template) looks like this: "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles" -- Note: When using the command line with Echo (Gallio’s console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won’t use the instrumentation callbacks! -- License issues As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool for that, which comes with license costs (although these ‘costs’ are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...).  The Moles framework is also not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a ‘real’ software project requires an MSDN Subscription (from VS2010pro on). The demo solution The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above mentioned R#-template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing…
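As a small addendum: if you want to kick off the moled tests from a CI build rather than via the R#-template, the command line shown above can be wrapped in a few lines of C#. The sketch below is not part of the original sample download; the installation paths, class name and parameters are assumptions that simply mirror the example command line.
    using System.Diagnostics;

    static class MoledTestRunner
    {
        // Builds and executes the moles.runner.exe command line described above.
        // The tool paths below are assumptions - adjust them to your local installation.
        static int RunMoledTests(string testAssembly, string filter)
        {
            var molesRunner = @"C:\Program Files\Microsoft Moles\bin\moles.runner.exe";
            var gallioEcho  = @"C:\Program Files\Gallio\bin\Gallio.Echo.exe";

            var arguments = string.Format(
                "\"{0}\" /runner:\"{1}\" /args:/r:IsolatedAppDomain /args:/filter:\"{2}\"",
                testAssembly, gallioEcho, filter);

            var startInfo = new ProcessStartInfo(molesRunner, arguments)
            {
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(startInfo))
            {
                // Relay the runner's console output and wait for the run to finish.
                System.Console.Write(process.StandardOutput.ReadToEnd());
                process.WaitForExit();
                return process.ExitCode;  // non-zero usually indicates failed tests
            }
        }
    }
Calling RunMoledTests("Gallio.Moles.Demo.dll", "ExactType:TestFixture and Member:ReadFileContentWithMoles") would then be roughly equivalent to the command line generated by the live template.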

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

    Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it! Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey as my requests (and phone calls) haven't been answered, even though I tried it several times - well, kind of disappointing and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: Location, and Date and time You can't just change one or both on the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member which couldn't make it to our April meetup. Okay, lesson learned but now back to the actual meetup report ... Shortly after the meeting I placed the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..." Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I also like to mention that the number of attendees went back to what I would call a "standard" number of participation. This time there were 12 craftsmen, but again a good number of First Timers. Reactions of other attendees Here are some impressions and feedback from our participants: "Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments "He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting   Glad to see a couple of first time attendees, especially students from the university itself. Some details on the presentations MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9 Gimme some love ... bash and other shells Ish gave a great introduction into shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on Ubuntu 14.04 machine it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC – Let's talk about Scripting. Oddities of scripting After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not supposed for any kind of implementations in terms of software but simply to automate administrative procedures and to simplify routine jobs on a system. One of the cool oddities that he mentioned is that everything (!) 
in a shell is represented by strings; there are no other types like integer, float, date-time, etc. that you'd like to use in a full-fledged programming language. Let's have a look at his sample:  more to come... What's the output? As a conclusion, Daniel suggests that shell scripting should be limited but not restricted to automatic repetitive command stacks and batch jobs, startup wrapper for applications in order to set up the execution environment, and other not too sophisticated jobs. But as soon as it might involve a little bit more logic or you might rely on performance it's better to write an application in Ruby, Python, or Perl (among others of course). This is also enables the possibility to test your code properly. MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming MSCC: PowerShell as your scripting solution on Windows operating systems The path of the Enlightened is long ... and tough. Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand to give a presentation on Microsoft PowerShell after all. I already took this topic out of the announcement but the audience wanted to have some information. Okay, then let's see what I could do - improvised style. While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell from back in 2006, and its predecessors MS DOS and Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their client's needs and demands they ignored the feasibility to administrate Windows server farms without any UI tools. PowerShell is actually a result of this, and seeing that shell scripting is a common, reliable and fast way in an administrator's toolbox for decades, Microsoft had to adapt from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use by alternative operating systems like AIX, HP UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server, SharePoint, as well as Third-party software applications. Similar to MMC PowerShell also offers the ability to administer other machine remotely - only without a graphical user interface and therefore it's easier to automate and schedule regular tasks. Following is a sample of a PowerShell script file (extension .ps1): $strComputer = "." 
$colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
foreach ($objItem in $colItems)
{
    write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
    write-host "BIOS Version: " $objItem.BIOSVersion
    write-host "Build Number: " $objItem.BuildNumber
    write-host "Caption: " $objItem.Caption
    write-host "Code Set: " $objItem.CodeSet
    write-host "Current Language: " $objItem.CurrentLanguage
    write-host "Description: " $objItem.Description
    write-host "Identification Code: " $objItem.IdentificationCode
    write-host "Installable Languages: " $objItem.InstallableLanguages
    write-host "Installation Date: " $objItem.InstallDate
    write-host "Language Edition: " $objItem.LanguageEdition
    write-host "List Of Languages: " $objItem.ListOfLanguages
    write-host "Manufacturer: " $objItem.Manufacturer
    write-host "Name: " $objItem.Name
    write-host "Other Target Operating System: " $objItem.OtherTargetOS
    write-host "Primary BIOS: " $objItem.PrimaryBIOS
    write-host "Release Date: " $objItem.ReleaseDate
    write-host "Serial Number: " $objItem.SerialNumber
    write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
    write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
    write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
    write-host "SMBIOS Present: " $objItem.SMBIOSPresent
    write-host "Software Element ID: " $objItem.SoftwareElementID
    write-host "Software Element State: " $objItem.SoftwareElementState
    write-host "Status: " $objItem.Status
    write-host "Target Operating System: " $objItem.TargetOperatingSystem
    write-host "Version: " $objItem.Version
    write-host
}
This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too. Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Hacking Defence (14. May 2014) WebCup Maurice (7. & 8. June 2014) Developers Conference (TBA ~ July 2014) Linuxfest 2014 (TBA ~ November 2014) Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! My resume of the day Spontaneous and improvised :) The new location at the University of Mauritius turned out very well, there is plenty of space, and it could be a good choice for future meetings. Especially, having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!
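One side note that is not part of Ish's or Daniel's material: the WMI query from the PowerShell sample above can also be expressed in a few lines of C# with System.Management, which is roughly where the "write a script vs. write a real program" trade-off from the session becomes tangible. Treat it as an illustrative sketch only.
    using System;
    using System.Management;   // add a reference to System.Management.dll

    class BiosInfo
    {
        static void Main()
        {
            // Query the local machine; replace "." with a remote (NetBIOS) name if needed.
            var scope = new ManagementScope(@"\\.\root\CIMV2");
            var query = new ObjectQuery("SELECT * FROM Win32_BIOS");

            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject bios in searcher.Get())
                {
                    // Only a handful of the properties printed by the PowerShell sample are shown here.
                    Console.WriteLine("Manufacturer:  " + bios["Manufacturer"]);
                    Console.WriteLine("Version:       " + bios["Version"]);
                    Console.WriteLine("Serial Number: " + bios["SerialNumber"]);
                    Console.WriteLine("Release Date:  " + bios["ReleaseDate"]);
                }
            }
        }
    }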

    Read the article

  • Observations in Migrating from JavaFX Script to JavaFX 2.0

    - by user12608080
    Observations in Migrating from JavaFX Script to JavaFX 2.0 Introduction Having been available for a few years now, there is a decent body of work written for JavaFX using the JavaFX Script language. With the general availability announcement of JavaFX 2.0 Beta, the natural question arises about converting the legacy code over to the new JavaFX 2.0 platform. This article reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm. The Application The program chosen for migration is an implementation of the Sudoku game and serves as a reference application for the book JavaFX – Developing Rich Internet Applications. The design of the program can be divided into two major components: (1) A user interface (ideally suited for JavaFX design) and (2) the puzzle generator. For the context of this article, our primary interest lies in the user interface. The puzzle generator code was lifted from a sourceforge.net project and is written entirely in Java. Regardless which version of the UI we choose (JavaFX Script vs. JavaFX 2.0), no code changes were required for the puzzle generator code. The original user interface for the JavaFX Sudoku application was written exclusively in JavaFX Script, and as such is a suitable candidate to convert over to the new JavaFX 2.0 model. However, a few notable points are worth mentioning about this program. First off, it was written in the JavaFX 1.1 timeframe, where certain capabilities of the JavaFX framework were as of yet unavailable. Citing two examples, this program creates many of its own UI controls from scratch because the built-in controls were yet to be introduced. In addition, layout of graphical nodes is done in a very manual manner, again because much of the automatic layout capabilities were in flux at the time. It is worth considering that this program was written at a time when most of us were just coming up to speed on this technology. One would think that having the opportunity to recreate this application anew, it would look a lot different from the current version. Comparing the Size of the Source Code An attempt was made to convert each of the original UI JavaFX Script source files (suffixed with .fx) over to a Java counterpart. Due to language feature differences, there are a small number of source files which only exist in one version or the other. The table below summarizes the size of each of the source files. JavaFX Script source file Number of Lines Number of Character JavaFX 2.0 Java source file Number of Lines Number of Characters ArrowKey.java 6 72 Board.fx 221 6831 Board.java 205 6508 BoardNode.fx 446 16054 BoardNode.java 723 29356 ChooseNumberNode.fx 168 5267 ChooseNumberNode.java 302 10235 CloseButtonNode.fx 115 3408 CloseButton.java 99 2883 ParentWithKeyTraversal.java 111 3276 FunctionPtr.java 6 80 Globals.java 20 554 Grouping.fx 8 140 HowToPlayNode.fx 121 3632 HowToPlayNode.java 136 4849 IconButtonNode.fx 196 5748 IconButtonNode.java 183 5865 Main.fx 98 3466 Main.java 64 2118 SliderNode.fx 288 10349 SliderNode.java 350 13048 Space.fx 78 1696 Space.java 106 2095 SpaceNode.fx 227 6703 SpaceNode.java 220 6861 TraversalHelper.fx 111 3095 Total 2,077 79,127 2531 87,800 A few notes about this table are in order: The number of lines in each file was determined by running the Unix ‘wc –l’ command over each file. The number of characters in each file was determined by running the Unix ‘ls –l’ command over each file. 
The examination of the code could certainly be much more rigorous. No standard formatting was performed on these files.  All comments however were deleted. There was a certain expectation that the new Java version would require more lines of code than the original JavaFX script version. As evidenced by a count of the total number of lines, the Java version has about 22% more lines than its FX Script counterpart. Furthermore, there was an additional expectation that the Java version would be more verbose in terms of the total number of characters.  In fact the preceding data shows that on average the Java source files contain fewer characters per line than the FX files.  But that's not the whole story.  Upon further examination, the FX Script source files had a disproportionate number of blank characters.  Why?  Because of the nature of how one develops JavaFX Script code.  The object literal dominates FX Script code.  Its not uncommon to see object literals indented halfway across the page, consuming lots of meaningless space characters. RAM consumption Not the most scientific analysis, memory usage for the application was examined on a Windows Vista system by running the Windows Task Manager and viewing how much memory was being consumed by the Sudoku version in question. Roughly speaking, the FX script version, after startup, had a RAM footprint of about 90MB and remained pretty much the same size. The Java version started out at about 55MB and maintained that size throughout its execution. What About Binding? Arguably, the most striking observation about the conversion from JavaFX Script to JavaFX 2.0 concerned the need for data synchronization, or lack thereof. In JavaFX Script, the primary means to synchronize data is via the bind expression (using the “bind” keyword), and perhaps to a lesser extent it’s “on replace” cousin. The bind keyword does not exist in Java, so for JavaFX 2.0 a Data Binding API has been introduced as a replacement. To give a feel for the difference between the two versions of the Sudoku program, the table that follows indicates how many binds were required for each source file. For JavaFX Script files, this was ascertained by simply counting the number of occurrences of the bind keyword. As can be seen, binding had been used frequently in the JavaFX Script version (and does not take into consideration an additional half dozen or so “on replace” triggers). The JavaFX 2.0 program achieves the same functionality as the original JavaFX Script version, yet the equivalent of binding was only needed twice throughout the Java version of the source code. JavaFX Script source file Number of Binds JavaFX Next Java source file Number of “Binds” ArrowKey.java 0 Board.fx 1 Board.java 0 BoardNode.fx 7 BoardNode.java 0 ChooseNumberNode.fx 11 ChooseNumberNode.java 0 CloseButtonNode.fx 6 CloseButton.java 0 CustomNodeWithKeyTraversal.java 0 FunctionPtr.java 0 Globals.java 0 Grouping.fx 0 HowToPlayNode.fx 7 HowToPlayNode.java 0 IconButtonNode.fx 9 IconButtonNode.java 0 Main.fx 1 Main.java 0 Main_Mobile.fx 1 SliderNode.fx 6 SliderNode.java 1 Space.fx 0 Space.java 0 SpaceNode.fx 9 SpaceNode.java 1 TraversalHelper.fx 0 Total 58 2 Conclusions As the JavaFX 2.0 technology is so new, and experience with the platform is the same, it is possible and indeed probable that some of the observations noted in the preceding article may not apply across other attempts at migrating applications. 
That being said, this first experience indicates that the migrated Java code will likely be larger, though not excessively so, than the original JavaFX Script source. Furthermore, although very important, it appears that the requirements for data synchronization via binding may be significantly less with the new platform.

    Read the article

  • Easier ASP.NET MVC Routing

    - by Steve Wilkes
    I've recently refactored the way Routes are declared in an ASP.NET MVC application I'm working on, and I wanted to share part of the system I came up with; a really easy way to declare and keep track of ASP.NET MVC Routes, which then allows you to find the name of the Route which has been selected for the current request. Traditional MVC Route Declaration Traditionally, ASP.NET MVC Routes are added to the application's RouteCollection using overloads of the RouteCollection.MapRoute() method; for example, this is the standard way the default Route which matches /controller/action URLs is created: routes.MapRoute(     "Default",     "{controller}/{action}/{id}",     new { controller = "Home", action = "Index", id = UrlParameter.Optional }); The first argument declares that this Route is to be named 'Default', the second specifies the Route's URL pattern, and the third contains the URL pattern segments' default values. To then write a link to a URL which matches the default Route in a View, you can use the HtmlHelper.RouteLink() method, like this: @ this.Html.RouteLink("Default", new { controller = "Orders", action = "Index" }) ...that substitutes 'Orders' into the {controller} segment of the default Route's URL pattern, and 'Index' into the {action} segment. The {Id} segment was declared optional and isn't specified here. That's about the most basic thing you can do with MVC routing, and I already have reservations: I've duplicated the magic string "Default" between the Route declaration and the use of RouteLink(). This isn't likely to cause a problem for the default Route, but once you get to dozens of Routes the duplication is a pain. There's no easy way to get from the RouteLink() method call to the declaration of the Route itself, so getting the names of the Route's URL parameters correct requires some effort. The call to MapRoute() is quite verbose; with dozens of Routes this gets pretty ugly. If at some point during a request I want to find out the name of the Route has been matched.... and I can't. To get around these issues, I wanted to achieve the following: Make declaring a Route very easy, using as little code as possible. Introduce a direct link between where a Route is declared, where the Route is defined and where the Route's name is used, so I can use Visual Studio's Go To Definition to get from a call to RouteLink() to the declaration of the Route I'm using, making it easier to make sure I use the correct URL parameters. Create a way to access the currently-selected Route's name during the execution of a request. My first step was to come up with a quick and easy syntax for declaring Routes. 1 . An Easy Route Declaration Syntax I figured the easiest way of declaring a route was to put all the information in a single string with a special syntax. For example, the default MVC route would be declared like this: "{controller:Home}/{action:Index}/{Id}*" This contains the same information as the regular way of defining a Route, but is far more compact: The default values for each URL segment are specified in a colon-separated section after the segment name The {Id} segment is declared as optional simply by placing a * after it That's the default route - a pretty simple example - so how about this? 
routes.MapRoute(     "CustomerOrderList",     "Orders/{customerRef}/{pageNo}",     new { controller = "Orders", action = "List", pageNo = UrlParameter.Optional },     new { customerRef = "^[a-zA-Z0-9]+$", pageNo = "^[0-9]+$" }); This maps to the List action on the Orders controller URLs which: Start with the string Orders/ Then have a {customerRef} set of characters and numbers Then optionally a numeric {pageNo}. And again, it’s quite verbose. Here's my alternative: "Orders/{customerRef:^[a-zA-Z0-9]+$}/{pageNo:^[0-9]+$}*->Orders/List" Quite a bit more brief, and again, containing the same information as the regular way of declaring Routes: Regular expression constraints are declared after the colon separator, the same as default values The target controller and action are specified after the -> The {pageNo} is defined as optional by placing a * after it With an appropriate parser that gave me a nice, compact and clear way to declare routes. Next I wanted to have a single place where Routes were declared and accessed. 2. A Central Place to Declare and Access Routes I wanted all my Routes declared in one, dedicated place, which I would also use for Route names when calling RouteLink(). With this in mind I made a single class named Routes with a series of public, constant fields, each one relating to a particular Route. With this done, I figured a good place to actually declare each Route was in an attribute on the field defining the Route’s name; the attribute would parse the Route definition string and make the resulting Route object available as a property. I then made the Routes class examine its own fields during its static setup, and cache all the attribute-created Route objects in an internal Dictionary. Finally I made Routes use that cache to register the Routes when requested, and to access them later when required. So the Routes class declares its named Routes like this: public static class Routes{     [RouteDefinition("Orders/{customerName}->Orders/Index")]     public const string OrdersCustomerIndex = "OrdersCustomerIndex";     [RouteDefinition("Orders/{customerName}/{orderId:^([0-9]+)$}->Orders/Details")]     public const string OrdersDetails = "OrdersDetails";     [RouteDefinition("{controller:Home}*/{action:Index}*")]     public const string Default = "Default"; } ...which are then used like this: @ this.Html.RouteLink(Routes.Default, new { controller = "Orders", action = "Index" }) Now that using Go To Definition on the Routes.Default constant takes me to where the Route is actually defined, it's nice and easy to quickly check on the parameter names when using RouteLink(). Finally, I wanted to be able to access the name of the current Route during a request. 3. Recovering the Route Name The RouteDefinitionAttribute creates a NamedRoute class; a simple derivative of Route, but with a Name property. When the Routes class examines its fields and caches all the defined Routes, it has access to the name of the Route through the name of the field against which it is defined. It was therefore a pretty easy matter to have Routes give NamedRoute its name when it creates its cache of Routes. This means that the Route which is found in RequestContext.RouteData.Route is now a NamedRoute, and I can recover the Route's name during a request. 
For visibility, I made NamedRoute.ToString() return the Route name and URL pattern. The screenshot is from an example project I’ve made on bitbucket; it contains all the named route classes and an MVC 3 application which demonstrates their use. I’ve found this way of defining and using Routes much tidier than the default MVC system, and I hope you find it useful too.
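To make that last step a little more concrete, here is a simplified sketch of what a NamedRoute and the RouteDefinitionAttribute could look like. This is not the actual code from the bitbucket project – the parsing of the compact definition string is deliberately omitted and the member names are assumptions.
    using System.Web.Mvc;
    using System.Web.Routing;

    // A Route that knows its own name, so it can be recovered from RequestContext.RouteData.Route.
    public class NamedRoute : Route
    {
        public NamedRoute(string name, string url, RouteValueDictionary defaults, RouteValueDictionary constraints)
            : base(url, defaults, constraints, new MvcRouteHandler())
        {
            this.Name = name;
        }

        public string Name { get; private set; }

        public override string ToString()
        {
            // Shown in the debugger / logs as "RouteName: url-pattern".
            return this.Name + ": " + this.Url;
        }
    }

    // Carries the compact definition string; a central Routes class would reflect over its
    // constant fields, read this attribute, parse the string and register a NamedRoute for it.
    [System.AttributeUsage(System.AttributeTargets.Field)]
    public sealed class RouteDefinitionAttribute : System.Attribute
    {
        public RouteDefinitionAttribute(string definition)
        {
            this.Definition = definition;
        }

        public string Definition { get; private set; }
    }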

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task however we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method so I turned to Visual Studio’s Find/Replace… feature Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be change. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here: <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter> Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply: \1-1\2 We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage @Jamiet
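As an addendum that is not from the original post: if you would rather script the reset than drive it through the IDE, the same capture-group idea works in a few lines of C#. The pattern below is deliberately simplified (it only keys on the ETLIfcHist_ID parameter and its ParameterValue element) and the folder path is an assumption, so treat it as a starting point rather than a drop-in tool.
    using System.IO;
    using System.Text.RegularExpressions;

    class ResetParameterDefaults
    {
        static void Main()
        {
            // Group 1 = everything from the parameter's ObjectName up to the end of the
            // ParameterValue start tag; group 2 = the closing </DTS:Property> tag.
            var pattern = new Regex(
                "(DTS:ObjectName=\"ETLIfcHist_ID\">[\\s\\S]*?DTS:Name=\"ParameterValue\">)" +
                "[^<]*" +
                "(</DTS:Property>)",
                RegexOptions.Singleline);

            foreach (var package in Directory.EnumerateFiles(@"C:\MySsisProject", "*.dtsx"))
            {
                var original = File.ReadAllText(package);

                // "$1-1$2" plays the same role as the \1-1\2 replace expression in Visual Studio.
                var updated = pattern.Replace(original, "$1-1$2");

                if (updated != original)
                {
                    File.WriteAllText(package, updated);
                }
            }
        }
    }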

    Read the article

  • CodePlex Daily Summary for Saturday, April 03, 2010

    CodePlex Daily Summary for Saturday, April 03, 2010New ProjectsASP.NET MVC Demo: aspnetmvcdemoClasslessInterDomainRouting: ClasslessInterDomainRouting provides a class that is designed to detail with CIDR requests and ranges, it is developed within the C# Langauge and f...ClientSideRefactor: Plugin for Visual Studio.ColinTest: ColinTestePMS: An educational project to learn ASP.Net MVC, entity framework using vs 2010Extensible ASP.NET: Extensible Framework on top of ASP.NET - infrastructure level. Uses MEF for extensibility.Franchise Computing Model: Franchise Computing is a client-centric, contract-oriented, consumption-based computing model. Its framework allows service providers and consumers...GameEngine ReactorFX: Set of tools and code snippets for creation DirectX based games. Also provides a number of ideas, algorythms and problem-solutions.It's All Just Ones And Zeros: Utility code libraries for Vault API developers.Live Writer Picasa Plugin: Live Writer Picasa Plugin is a plugin for Windows Live Writer that allows you to embed photos from your Picasa Web Albums into your blog posts. Liv...Managed SDK for Meizu Cell Phone: The goal of this project is to deliver an open source managed SDK for Meizu cell phones, currently for M8. Media Player Field Type: Display a media player in a column of you document library. The library can contain movie files of diferent formats. The player will appear in the ...praca magisterska: This is my thesis: Algebraical aspects of modern cryptography,Pyx: An experimental programming language for statistics.SharpHydroLiDAR: A C# version of Lidar Hydrographic ExtractionSql Server Mds Destination: SSIS destination transform component for SQL Server Master Data ServicesStackOverflow.Net: A C# library for the StackOverflow API (currently in beta). Provides methods for every call currently in the StackOverflow API.TRX Merger Utility: People working on test projects that involve test management and execution from Visual Studio Team System 2008 and who do not have a TFS server for...UniPlanner: The UniPlanner project goal is to develop a web application able to visualize and schedule a university timetable.WikiNETParser: Wiki .NET Parser, Open Source project powered by ANTLR. Syntax defined in 3(4) files Lexer, Grammar, AST Parser.New ReleasesaaronERP builder - a framework to create customized ERP solutions: aaronERP_0.4.0.0: Changes (compared to version 0.3.0.0) : Businesslayer : - Caching of data-tables - ITranslatable Interface for mutli-language DAOs Web-Frontend: ...BatterySaver: Version 0.5: Add support for executing a power state event manually (Issue) Add support for battery percentage thresholds (Issue)ColinTest: asdfzxcv: asdfasdfComposer: V1.0.402.2001 Beta: Minor bug fixes Minor changes in interfaces Added documentation to the setup packageDynamic Configuration: Dynamic Configuration Release 2: Added ConfigurationChanged event fired whenever changes in .config file detected. Improved file watching filtering.Facebook Developer Toolkit: Version 3.1 BETA: Lots of bug fixes. Issues addressed: http://facebooktoolkit.codeplex.com/WorkItem/View.aspx?WorkItemId=14808 http://facebooktoolkit.codeplex.com/W...iExporter - iTunes playlist exporting: iExporter gui v2.5.0.0 - console v1.2.1.0: Paypal donate! New features and redesign for iExporter Gui You can now select/deselect all visible items with one click in the overview When yo...Line Counter: 1.5.5: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. 
Line Counter 1.5.5 Fixed bugs in C# counter an...Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.0.0: Changelog Since this is the first version there are no changes.Media Player Field Type: Media Player Field Type v1.0: Display a media player in a column of you document library. The library can contain movie files of diferent formats. The player will appear in the ...Numina Application/Security Framework: Numina.Framework Core 49601: Added .LESS library for CSS Updated default style and logo Added a few methods and method overloads to the .NET libraryOver Store: OverStore 1.16.0.0: Version 1.16.0.0 Runtime components uses PersistingRuntimeException instead of many exception types. PersistingRuntimeException message includes...patterns & practices Web Client Developer Guidance: Web Client Software Factory 2010 beta source code: The Web Client Software Factory 2010 provides an integrated set of guidance that assists architects and developers in creating web client applicati...SCSI Interface for Multimedia and Block Devices: Release 12 - View CD-DVD Drive Features: Changes in this version: - Added the ability to view the features of a CD/DVD device (e.g.: what discs it supports, whether it supports Mount Raini...SharePoint Labs: SPLab5006A-FRA-Level100: SPLab5006A-FRA-Level100 This SharePoint Lab will teach you how to create a Feature within Visual Studio, how to brand it, how to incorporate ressou...SharePoint Labs: SPLab5007A-FRA-Level300: SPLab5007A-FRA-Level300 This SharePoint Lab will teach you how to create a reusable and distributable project model for developping Features within...SharePoint Labs: SPLab5008A-FRA-Level100: SPLab5008A-FRA-Level100 This SharePoint Lab will teach you how to add an option in the ECB menu (Edit Control Block) only for specific file types w...SharePoint Labs: SPLab5009A-FRA-Level100: SPLab5009A-FRA-Level100 This SharePoint Lab will teach you the "Site Pages" model and the differences between customized/uncustomized pages (ghoste...SharePoint Labs: SPLab5010A-FRA-Level100: SPLab5010A-FRA-Level100 This SharePoint Lab will teach you the "Application Pages" model and the differences between "Site Pages" and "Application ...SharePoint Labs: SPLab5011A-FRA-Level100: SPLab5011A-FRA-Level100 This SharePoint Lab will teach you how to create a basic Application Page in the 12\TEMPLATE\LAYOUTS. Lab Language : French...sPATCH: sPatch v0.9b: + Fixed: an issue most webservers need leading slash to return filestreamsTASKedit: sTASKedit (pre-Alpha Release): This release is only for playing around, currently not useful Supported Files:Open 1.3.6 client tasks.data Export to 1.3.6 client tasks.data E...TRX Merger Utility: TRX Merger v1.0: First versionttgLib: ttgLib-0.01-beta1: In beta-version we've implemented basic functionality of ttgLib - now it can solve various problems using CPU+GPU bundle. 
Most important things: ...WikiNETParser: Wiki .NET Parser 2.5: Wiki .NET Parser 2.5 The documentation, binaries and source code could be downloaded from http://catarsa.com portal The latest release to downloa...WPF Zen Garden: Release 1.0: This is the first release.XNA 3D World Studio Content Pipeline: XNA 3DWS Content Pipeline - R2: This version adds terrains and brush based modelsMost Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsGraffiti CMSRawrjQuery Library for SharePoint Web ServicesFacebook Developer ToolkitBlogEngine.NETN2 CMSBase Class LibrariesFarseer Physics EngineLINQ to TwitterMicrosoft Biology Foundation

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers?

    - by Harish Gaur
    Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? There will be a recording made available shortly and in the meanwhile, here is a recap. Amit presented 5 strategies customers can leverage today to extend their applications. Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps 1. Engage Everyone – Provide intuitive and social experience for application users using Oracle WebCenter 2. Extend Enterprise – Extend Oracle Applications to mobile devices using Oracle ADF Mobile 3. Orchestrate Processes – Automate key organization processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite 4. Secure the core – Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management 5. Optimize Performance – Leverage Exalogic stack to consolidate multiple instance and improve performance of Oracle Applications Session included 3 demonstrations to illustrate these strategies. 1. First demo highlighted significance of mobile applications for unlocking existing investment in Applications such as EBS. Using a native iPhone application interacting with e-Business Suite, demo showed how expense approval can be mobile enabled with enhanced visibility using BI dashboards. 2. Second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite.Process starts in Siebel with a customer requesting a loan, and then jumps to OPA for loan recommendations and decision making and loan processing with approvals in handled in BPM Suite. Once approvals are completed Siebel is updated to complete the process. 3. Final demo showcased FMW components inside Fusion Applications, specifically WebCenter. Boeing, Underwriter Laboratories and Electronic Arts joined this quest and discussed 3 different approaches of leveraging Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let’s briefly review what these customers shared during the session: 1. Extend Fusion Applications We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware from Oracle WebCenter for rich collaborative interface, Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes to Oracle BIEE for dash boarding and analytics. Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain talked about Boeing’s 5 year supply chain transformation journey. Boeing’s Integrated and Information Management division began with automation of critical RFQ process using Oracle BPM Suite. This 1st phase resulted in 38% reduction in labor costs for RFP. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO). So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). 
These processes can be easily configured using declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to “extend” these applications. Traditionally, customizations are done within an app using native technologies. But, instead of making process changes within Fusion Apps, Boeing has taken an approach of building “extensions” layer on top of the application. Fig 2: Boeing’s use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps 2. Maximize Oracle Applications investment Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers significantly. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers. Electronic Arts, E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM. Udesh Naicker, Sr Director of IT at Elecronic Arts (EA), discussed how growth of social and digital gaming had started to put tremendous pressure on EA’s existing IT infrastructure. He discussed the challenge with millions of micro-transactions coming from several sources – Microsoft Xbox, Paypal, several service providers. EA found Order-2-Cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain. EA began by consolidating their E-Business Suite R11 instances into single E-Business Suite R12. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latency. Their integration strategy was tactical, i.e., using file uploads, TIBCO, SQL scripts. After consolidating E-Business suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems including PeopleSoft, Demantra, SF.com, Workday, and Managed EDI services spanning on-premise, hosted and cloud applications. EA believes that Oracle SOA Suite 11g based extension strategy has helped significantly in the followings ways: - It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade safe - Developers are now proficient in technology which is also leveraged by Fusion Apps. This has helped them prepare for adoption of Fusion Apps in the future Fig 3: Using Oracle SOA Suite & Oracle e-Business Suite, Electronic Arts built new platform for order processing 3. Consolidate apps and improve scalability Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriter Laboratories talked about their strategy to run their mission critical applications including e-Business Suite on Exalogic. Christian Anschuetz, CIO of Underwriter Laboratories (UL) shared how UL is on a growth path - $1B to $2.5B in 5 years- and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and technology it runs on. Their current applications were deployed on variety of hardware platforms and lacked comprehensive disaster recovery architecture. 
UL embarked on a mission to deploy E-Business Suite on Exalogic. UL’s solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack and AIA EBS to Siebel UCM prebuilt integration) on the combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external facing applications like iStore. Fig 4: Underwriter Labs deployed e-Business Suite on Exalogic to achieve performance gains Key takeaways are: - Fusion Middleware platform is certified with major Oracle Applications Unlimited offerings. Fusion Middleware is the underlying technological infrastructure for Fusion Apps - Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps) to keep applications upgrade safe and prepare for Fusion Apps - Exalogic is an optimum platform to consolidate application deployments and enhance performance

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers?

    - by Harish Gaur
    Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? There will be a recording made available shortly and in the meanwhile, here is a recap. Amit presented 5 strategies customers can leverage today to extend their applications. Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps 1. Engage Everyone – Provide intuitive and social experience for application users using Oracle WebCenter 2. Extend Enterprise – Extend Oracle Applications to mobile devices using Oracle ADF Mobile 3. Orchestrate Processes – Automate key organization processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite 4. Secure the core – Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management 5. Optimize Performance – Leverage Exalogic stack to consolidate multiple instance and improve performance of Oracle Applications Session included 3 demonstrations to illustrate these strategies. 1. First demo highlighted significance of mobile applications for unlocking existing investment in Applications such as EBS. Using a native iPhone application interacting with e-Business Suite, demo showed how expense approval can be mobile enabled with enhanced visibility using BI dashboards. 2. Second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite.Process starts in Siebel with a customer requesting a loan, and then jumps to OPA for loan recommendations and decision making and loan processing with approvals in handled in BPM Suite. Once approvals are completed Siebel is updated to complete the process. 3. Final demo showcased FMW components inside Fusion Applications, specifically WebCenter. Boeing, Underwriter Laboratories and Electronic Arts joined this quest and discussed 3 different approaches of leveraging Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let’s briefly review what these customers shared during the session: 1. Extend Fusion Applications We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware from Oracle WebCenter for rich collaborative interface, Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes to Oracle BIEE for dash boarding and analytics. Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain talked about Boeing’s 5 year supply chain transformation journey. Boeing’s Integrated and Information Management division began with automation of critical RFQ process using Oracle BPM Suite. This 1st phase resulted in 38% reduction in labor costs for RFP. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO). So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). 
These processes can be easily configured using declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to “extend” these applications. Traditionally, customizations are done within an app using native technologies. But, instead of making process changes within Fusion Apps, Boeing has taken an approach of building “extensions” layer on top of the application. Fig 2: Boeing’s use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps 2. Maximize Oracle Applications investment Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers significantly. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers. Electronic Arts, E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM. Udesh Naicker, Sr Director of IT at Elecronic Arts (EA), discussed how growth of social and digital gaming had started to put tremendous pressure on EA’s existing IT infrastructure. He discussed the challenge with millions of micro-transactions coming from several sources – Microsoft Xbox, Paypal, several service providers. EA found Order-2-Cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain. EA began by consolidating their E-Business Suite R11 instances into single E-Business Suite R12. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latency. Their integration strategy was tactical, i.e., using file uploads, TIBCO, SQL scripts. After consolidating E-Business suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems including PeopleSoft, Demantra, SF.com, Workday, and Managed EDI services spanning on-premise, hosted and cloud applications. EA believes that Oracle SOA Suite 11g based extension strategy has helped significantly in the followings ways: - It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade safe - Developers are now proficient in technology which is also leveraged by Fusion Apps. This has helped them prepare for adoption of Fusion Apps in the future Fig 3: Using Oracle SOA Suite & Oracle e-Business Suite, Electronic Arts built new platform for order processing 3. Consolidate apps and improve scalability Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriter Laboratories talked about their strategy to run their mission critical applications including e-Business Suite on Exalogic. Christian Anschuetz, CIO of Underwriter Laboratories (UL) shared how UL is on a growth path - $1B to $2.5B in 5 years- and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and technology it runs on. Their current applications were deployed on variety of hardware platforms and lacked comprehensive disaster recovery architecture. 
UL embarked on a mission to deploy E-Business Suite on Exalogic. UL’s solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack and the AIA EBS-to-Siebel UCM prebuilt integration) on a combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external-facing applications like iStore.

Fig 4: Underwriter Labs deployed e-Business Suite on Exalogic to achieve performance gains

Key takeaways are:

- The Fusion Middleware platform is certified with major Oracle Applications Unlimited offerings. Fusion Middleware is the underlying technological infrastructure for Fusion Apps
- Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps) to keep applications upgrade safe and prepare for Fusion Apps
- Exalogic is an optimal platform to consolidate application deployments and enhance performance

TAGS: Fusion Apps, Exalogic, BPM Suite, SOA Suite, e-Business Suite Integration

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
Today I thought I would go back in time and have a look at the DEBUG command that has been available since the early days of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there, but had no clue how to use it, so for those that are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what “real” programming is about. But wait, you will have to do it relatively quickly, as it seems like DEBUG was finally dropped from Windows in Windows 7. Not to worry, pull out that Windows XP box (which will get you even more geek points) and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let’s get our hands dirty…

How to Start DEBUG in Windows

Make sure your version of Windows supports DEBUG. Open up a console window. Make a directory where you want to play with debug – in my instance I called it C221. Enter the directory and type Debug. You will get a response with a – as illustrated in the image below…

The commands available in DEBUG

There are several commands available in DEBUG. The most common ones are A (Assemble), R (Register), T (Trace), G (Go), D (Dump or Display), U (Unassemble), E (Enter), P (Proceed), N (Name), L (Load), W (Write), H (Hexadecimal), I (Input), O (Output) and Q (Quit). I am not going to cover all these commands, but what I will do is go through a few of them briefly.

A is for Assemble Command (to write code)

The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:

mov ax,0015
mov cx,0023
sub cx,ax
mov [120],al
mov cl,[120]
nop

R is for Register (to view and change registers)

The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations…

R – Displays the contents of all the registers
R f – Displays the flags register
R register_name – Displays the contents of a specific register

All three methods are illustrated in the image above.

T is for Trace (to execute a program step by step)

The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type t, which executes the first line of code (line 100), as shown in the image below starting from the red arrow. You can see from the above image that the register AX now contains 0015 as per our instruction mov ax,0015. You can also see that IP points to line 0103, which holds the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags. 
The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC).

NV means no overflow; the alternative would be OV.
PL means that the sign of the previous arithmetic operation was Plus; the alternative would be NG (Negative).
NZ means that the result of the previous arithmetic operation was Not Zero; the alternative would be ZR.
NC means that No final Carry resulted from the previous arithmetic operation. CY means that there was a final Carry.

We could now follow this process of entering the t command until the entire program is executed line by line.

G is for Go (to execute a program up to a certain line number)

So we have looked at executing a program line by line, which is fine if your program is minuscule BUT totally impractical if we have any decent-sized program. A quicker way to run some lines of code is to use the G command. The g command executes a program up to a certain specified point. It can be used in connection with the reset IP command: you would set your initial point and then run the G command with the line you want to end on.

P is for Proceed (similar to trace but slightly more streamlined)

Another command similar to trace is the proceed command. The difference is that when the p command encounters a CALL, INT or LOOP instruction, it executes the whole call, interrupt or loop as a single step instead of tracing into it. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, when I reached the int 20 command while executing the code, I typed the P command and the program terminated normally (illustrated below).

D is for Dump (or, for those more polite, Display)

So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, at memory address 110 we have int and at 111 we have 20. If we examine the dump of memory we can see that at memory point 110 CD is stored and at memory point 111 20 is stored.

U is for Unassemble (or convert machine code to assembly code)

So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let’s say we don’t understand machine code that well, and instead we want to see it as its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code.

E is for a bunch of things…

The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post.

N / L / W is for Name, Load & Write

So we have written our assembly code in DEBUG, and now we want to save it to disk, write it as a com file, or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier: it writes to disk the number of bytes specified by the bx and cx registers (bx holds the high-order part of the byte count and cx the low-order part), so you need to set bx and cx before it will write your code. Let’s look at an example illustrated below. 
You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple. You would first specify the name of the file you would like to load using the N command, and then call the L command.

Q is for Quit

The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG.

Commands we did not Cover

Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might make mention of these in a later post, but for the basics they are not really needed.

Some Useful Resources

Please note this post is based on the COS2213 handouts for UNISA.
A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT
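To tie the commands together, here is a rough sketch (not captured from a real machine) of what a complete session could look like: assemble the example program with a, run it with g, then name it and write it to disk with n, r bx, r cx and w. The segment address (shown here as 13AD), the register prompts and the exact wording of DEBUG’s responses are illustrative assumptions and will vary by DOS version and machine:

C:\C221>debug
-a 100
13AD:0100 mov ax,0015
13AD:0103 mov cx,0023
13AD:0106 sub cx,ax
13AD:0108 mov [120],al
13AD:010B mov cl,[120]
13AD:010F nop
13AD:0110 int 20
13AD:0112
-r ip
IP 0100
:100
-g
Program terminated normally
-n example.com
-r bx
BX 0000
:0
-r cx
CX 0000
:12
-w
Writing 00012 bytes
-q

Loading it back later is simply the reverse: start DEBUG in the same directory, set the file name with n example.com, call l, and the program is loaded at offset 100, ready to be traced or unassembled again.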

    Read the article

  • Using Lightbox with _Screen

Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a request to focus a little bit more on this. This article describes the steps for integrating and making use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (_Screen area) and guide the user to information that should not be ignored easily. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

Getting the "meat", eh, source code

Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I used version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class of the GdiPlusX project on VFPx, and therefore you need to have the source code of GdiPlusX as well and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class for the first time, VFP might ask you to update the reference to the gdiplusx.vcx. As we have the sources, this is no problem, and you then have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change, and three methods: Setup(), ShowLightbox() and BeforeDraw().

The challenge - _Screen or not?

Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers." Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

WITH This
    .Left = 0
    .Top = 0
    .Height = ThisForm.Height
    .Width = ThisForm.Width
    .ZOrder(0)
    .Visible = .F.
ENDWITH

During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference Thisform is used, which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call to .Bitmap.FromScreen():

Lparameters tlVisiblilty
* tlVisiblilty - show or hide (T/F)
* grab a screen dump with controls
IF tlVisiblilty
    Local loCaptureBmp As xfcBitmap
    Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
    lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
    lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
    lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
    With _Screen.System.Drawing
        loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
            lnLeftBorder,;
            lnTopBorder+lnTitleHeight,;
            ThisForm.Width,;
            ThisForm.Height)
    ENDWITH
    * save it to a property
    This.capturebmp = loCaptureBmp
    ThisForm.SetAll("Visible",.F.)
    This.Draw()
    This.Visible = .T.
ELSE
    ThisForm.SetAll("Visible",.T.)
    This.Visible = .F.
ENDIF

My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit. 
Capturing the visible area of _Screen actually was not the real problem, but rather the dimensions I specified for the bitmap.

The modifications - step by step

First of all, we get rid of the restrictive object references to Thisform and change them into either This.Parent or, more generically, into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so:

*====================================================================
* Initial setup
* Default value: This.oControl = "This.Parent"
* Alternative:   This.oControl = "_Screen"
*====================================================================
With This
    .oControl = Evaluate(.oControl)
    If Vartype(.oControl) == T_OBJECT
        .Anchor = 0
        .Left = 0
        .Top = 0
        .Width = .oControl.Width
        .Height = .oControl.Height
        .Anchor = 15
        .ZOrder(0)
        .Visible = .F.
    EndIf
Endwith

Also, based on other developers' comments on Bernard's lightbox concept and its evolution, the source code to handle the differences between a form and _Screen goes into Lightbox.ShowLightbox() like this:

*====================================================================
* tlVisibility - show or hide (T/F)
* grab a screen dump with controls
*====================================================================
Lparameters tlVisibility
Local loControl
m.loControl = This.oControl
If m.tlVisibility
    Local loCaptureBmp As xfcBitmap
    Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
    lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0)
    lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3))
    lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4))
    With _Screen.System.Drawing
        If Upper(m.loControl.Name) == Upper("Screen")
            loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd)
        Else
            loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,;
                lnLeftBorder,;
                lnTopBorder+lnTitleHeight,;
                m.loControl.Width,;
                m.loControl.Height)
        EndIf
    Endwith
    * save it to a property
    This.CaptureBmp = loCaptureBmp
    m.loControl.SetAll("Visible",.F.)
    This.Draw()
    This.Visible = .T.
Else
    This.CaptureBmp = .Null.
    m.loControl.SetAll("Visible",.T.)
    This.Visible = .F.
Endif

Are we done? Almost...

Although Bernard says it clearly in his article: "Just drop the class on a form and call it as shown." It did not become clear to me at first with _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes; it creates a valid This.Parent object reference. Bearing in mind that the lightbox class cannot be "dropped" on _Screen, we have to create the same type of binding during runtime execution, like so:

*====================================================================
* Create global lightbox component
*====================================================================
Local llOk, loException As Exception
m.llOk = .F.
m.loException = .Null.
If Not Vartype(_Screen.Lightbox) == "O"
    Try
        _Screen.AddObject("Lightbox", "bbGdiLightbox")
    Catch To m.loException
        Assert .F. Message m.loException.Message
    EndTry
EndIf
m.llOk = (Vartype(_Screen.Lightbox) == "O")
Return m.llOk

Through runtime instantiation we create a valid binding to This.Parent in the lightbox object, and the code works as expected with _Screen.

Ease your life: Use properties instead of constants

Having a closer look at the BeforeDraw() method might whet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article you see several forms in different colors. 
This got me to modify the code like so:

*====================================================================
* Apply the actual lightbox effect on the captured bitmap.
*====================================================================
If Vartype(This.CaptureBmp) == T_OBJECT
    Local loGfx As xfcGraphics
    loGfx = This.oGfx
    With _Screen.System.Drawing
        loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel)
        * change the colours as needed here
        * possible colours are (220,128,0,0),(220,0,0,128) etc.
        loBrush = .SolidBrush.New(.Color.FromArgb( ;
            This.Opacity, .Color.FromRGB(This.BorderColor)))
        loGfx.FillRectangle(loBrush,This.Rectangle)
    Endwith
Endif

Create an additional property Opacity to specify the degree of translucency you would like to have, without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signal different levels of importance (i.e. green, yellow, orange, red, etc.) of notifications to the users of the application.

Final thoughts

Using the lightbox concept in combination with _Screen instead of forms is possible. Jim Wiggins already comments in Bernard's article that one could loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping over all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with a Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming fewer resources. Additionally, the restriction to capture only forms does not exist anymore. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advisable to take this concept to a higher level and combine it with additional classes that handle the state of toolbars, docked windows and menus, which is exactly what I did for the customer's project.
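For readers who would like to see the moving parts in one place, here is a minimal usage sketch. It assumes the runtime instantiation code shown above has already added the instance to _Screen, that Setup() is invoked explicitly here (in the actual class it may well be called from Init()), and that the Opacity/BorderColor values as well as the message text are purely illustrative:

*-- Minimal sketch: dim the whole application while a notification is shown.
*-- Assumes _Screen.AddObject("Lightbox", "bbGdiLightbox") has succeeded.
If Vartype(_Screen.Lightbox) == "O"
    With _Screen.Lightbox
        .oControl    = "_Screen"       && evaluated inside Setup()
        .Opacity     = 220             && degree of translucency (illustrative)
        .BorderColor = Rgb(128, 0, 0)  && dark red overlay (illustrative)
        .Setup()
        .ShowLightbox(.T.)             && capture _Screen and dim everything
        Messagebox("Please review this notification before continuing.", ;
            48, "Attention")
        .ShowLightbox(.F.)             && restore the previously hidden controls
    EndWith
EndIf

Whether you trigger this from a timer, an application-level event handler or a plain procedure is a design choice; the only requirement is that the lightbox object exists on _Screen before ShowLightbox() is called.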

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
    This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris. Weekly News Links Photo by The H Security. IPv6: Smartphones compromise users’ privacy Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users. Dumb phones can be attacked too Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack. SCADA exploit – the dragon awakes The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted. Sophos: Spam to get more malicious Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos. Global spam traffic rebounds as Rustock wakes Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec. Cracking WPA keys in the cloud At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service. Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution Provides a link to more details about the vulnerability and shows a work-around/fix for the problem. Adobe plans to make it easier to delete Flash cookies in web browsers The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings. Firefox beta getting new database standard The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer. MetroPCS accused of blocking certain Net content MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups. Server and Tools chief Muglia to leave Microsoft in summer 2011 Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10. Report: DOJ nearing decision on Google-ITA The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report. South Korea says Google Street View broke law Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks. 
The backlash over Google’s HTML5 video bet Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant. Google answers critics on HTML5 Web video move Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage. Random TinyHacker Links A Special GiveAway: a Great Book & Great Security Software The team from 7 Tutorials has a special giveaway running during the month of January. Signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011 and free 3-month trials for everyone willing to participate. One Click Rooting For Android Phones Here’s a nice tool that helps you root your Android phone effortlessly. New Angry Birds Free version 1.0 Available in the App Store. Google Code University Learn programming at Google Code University. Capture and Share Your Favorite Part Of a YouTube Video SnipSnip.it lets you share only the part of the video that you like. Super User Questions More great questions and answers from this past week’s popular topics at Super User. What are the Windows A: and B: drives used for? Does OS X support linux-like features? What is the easiest way to make a backup of an entire hard disk? Will shifting from Wireless to Wired network result in better performance? Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine? How-To Geek Weekly Article Recap Enjoy reading through our hottest articles from this past week. The 50 Best Ways to Disable Built-in Windows Features You Don’t Want The Best of CES (Consumer Electronics Show) in 2011 How to Upgrade Windows 7 Easily (And Understand Whether You Should) The Worst of CES (Consumer Electronics Show) in 2011 The How-To Geek Guide to Audio Editing: Basic Noise Removal One Year Ago on How-To Geek More great articles from one year ago filled with helpful geeky goodness for you to enjoy. Share Text & Images the Easy Way with JustPaste.it Start Portable Firefox in Safe Mode Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions Protect Your Computer from “Little Hands” with KidSafe Lock Prying Eyes Out of Your Minimized Windows Custom Crocheted Cylon-Cthulhu Hybrid What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad…Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office. The Geek Note Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty. 

    Read the article

  • On StringComparison Values

    - by Jesse
When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that accepts a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories:

Culture-sensitive comparison using a specific culture, defaulted to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic these two MSDN articles are worth a read:

Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say:

“On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.”

I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is “yes”, but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

Example: Prototype Names

Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries. 
Each year the company would work on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke.

Each new prototype will get a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names start back at “A”.
The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese).
To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site.

Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed?

Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users.

Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this:

Antílope (Spanish)
Babouin (French)
Čahoun (Czech)
Diamond (English)
Flosse (German)

If you were to sort this list using ordinal rules you’d end up with:

Antílope
Babouin
Diamond
Flosse
Čahoun

This sort is no good because the entry for “C” appears at the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than that of any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison. 
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” article makes a lot more sense: “One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display.” That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.

    Read the article

< Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >