Search Results

Search found 27396 results on 1096 pages for 'process template'.


  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
    Yesterday I solved one caching problem with local community portal. I enabled output cache on SharePoint Server 2007 to make site faster. Although caching works fine I needed to do some additional work because there are some controls that show different content to different users. In this example I will show you how to use “donut caching” with user controls – powerful way to drive some content around cache. About donut caching Donut caching means that although you are caching your content you have some holes in it so you can still affect the output that goes to user. By example you can cache front page on your site and still show welcome message that contains correct user name. To get better idea about donut caching I suggest you to read ScottGu posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically donut caching uses ASP.NET substitution control. In output this control is replaced by string you return from static method bound to substitution control. Again, take a look at ScottGu blog posting I referred above. Problem If you look at Scott’s example it is pretty plain and easy by its output. All it does is it writes out current user name as string. Here are examples of my login area for anonymous and authenticated users:    It is clear that outputting mark-up for these views as string is pretty lame to implement in code at string level. Every little change in design will end up with new version of controls library because some parts of design “live” there. Solution: using user controls I worked out easy solution to my problem. I used cache substitution and user controls together. I have three user controls: LogInControl – this is the proxy control that checks which “real” control to load. AnonymousLogInControl – template and logic for anonymous users login area. AuthenticatedLogInControl – template and logic for authenticated users login area. This is the control we render for each user separately because it contains user name and user profile fill percent. Anonymous control is not very interesting because it is only about keeping mark-up in separate file. Interesting parts are LogInControl and AuthenticatedLogInControl. Creating proxy control The first thing was to create control that has substitution area where “real” control is loaded. This proxy control should also be available to decide which control to load. The definition of control is very primitive. <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %> <asp:Substitution runat="server" MethodName="ShowLogInBox" /> But code is a little bit tricky. Based on current user instance we decide which login control to load. Then we create page instance and load our control through it. When control is loaded we will call DataBind() method. In this method we evaluate all fields in loaded control (it was best choice as Load and other events will not be fired). Take a look at the code. 
public static string ShowLogInBox(HttpContext context)
{
    var user = SPContext.Current.Web.CurrentUser;
    string controlName;

    if (user != null)
        controlName = "AuthenticatedLogInControl.ascx";
    else
        controlName = "AnonymousLogInControl.ascx";

    var path = "~/_controltemplates/" + controlName;
    var output = new StringBuilder(10000);

    using (var page = new Page())
    using (var ctl = page.LoadControl(path))
    using (var writer = new StringWriter(output))
    using (var htmlWriter = new HtmlTextWriter(writer))
    {
        ctl.DataBind();
        ctl.RenderControl(htmlWriter);
    }
    return output.ToString();
}

When the control is bound to data we ask it to render its contents to the StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how careful I am about disposing resources. :) The method that returns the contents for the substitution control is a static method with no connection to a control instance, because when a page is served from cache there are no control instances available. Conclusion As you saw, it was not very hard to use donut caching with user controls. Instead of writing the mark-up of the controls as strings inside the static method bound to the substitution control, we can still use our user controls.
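To make the DataBind() call above concrete, here is a minimal sketch of what the authenticated control might look like. The mark-up and the member names (UserName, ProfileFillPercent) are my own illustration rather than the article's actual code; the point is that the values are exposed through <%# %> data-binding expressions, so calling DataBind() on the loaded control is enough to evaluate them without relying on Load or any other page-lifecycle event.

AuthenticatedLogInControl.ascx (hypothetical):

<%@ Control Language="C#" EnableViewState="false"
    Inherits="MyPortal.Profiles.AuthenticatedLogInControl" %>
<div class="login-box">
    <span>Welcome, <%# UserName %></span>
    <span>Profile <%# ProfileFillPercent %>% complete</span>
</div>

Code-behind (hypothetical):

namespace MyPortal.Profiles
{
    using Microsoft.SharePoint;

    public class AuthenticatedLogInControl : System.Web.UI.UserControl
    {
        // Evaluated when the proxy control calls DataBind(); no Load event required.
        protected string UserName
        {
            get { return SPContext.Current.Web.CurrentUser.Name; }
        }

        // Placeholder value - a real control would compute this from the user's profile.
        protected int ProfileFillPercent
        {
            get { return 75; }
        }
    }
}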

    Read the article

  • True Excel Templates for BI Publisher

    - by Annemarie Provisero
    ADVISOR WEBCAST: True Excel Templates for BI Publisher PRODUCT FAMILY: EBS/ATG/BI Publisher July 12, 2011 at 7am PT, 8 am MT, 10 am ET This one-hour session is recommended for technical and functional users who want to learn how to code Excel formatted layouts for use with BI Publisher to generate binary Excel output. TOPICS WILL INCLUDE: Creating a simple template Formatting Dates Creating Functions A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • 8 tips for your Windows Store apps

    - by nmarun
    1. Use a Basic page rather than a Blank template For a good number of your tasks, you probably need a Basic page. For starters, this page gives you the bare bones required – a ‘Go back’ button and a placeholder for the application name. This page also contains the VisualState for Snapped view, so you don’t have to handle it in code. When you choose to add a Basic page to an empty solution, you’ll get a prompt like the one below. Clicking Yes adds some of the following files: LayoutAwarePage – handles GoBack and...(read more)

    Read the article

  • Why is String Templating Better Than String Concatenation from an Engineering Perspective?

    - by stephen
    I once read (I think it was in "Programming Pearls") that one should use templates instead of building the string through the use of concatenation. For example, consider the template below (using the C# Razor library) <in a properties file> Browser Capabilities Type = @Model.Type Name = @Model.Browser Version = @Model.Version Supports Frames = @Model.Frames Supports Tables = @Model.Tables Supports Cookies = @Model.Cookies Supports VBScript = @Model.VBScript Supports Java Applets = @Model.JavaApplets Supports ActiveX Controls = @Model.ActiveXControls and later, in a separate code file private void Button1_Click(object sender, System.EventArgs e) { BrowserInfoTemplate = Properties.Resources.browserInfoTemplate; // see above string browserInfo = RazorEngine.Razor.Parse(BrowserInfoTemplate, browser); ... } From a software engineering perspective, how is this better than an equivalent string concatenation, like below: private void Button1_Click(object sender, System.EventArgs e) { System.Web.HttpBrowserCapabilities browser = Request.Browser; string s = "Browser Capabilities\n" + "Type = " + browser.Type + "\n" + "Name = " + browser.Browser + "\n" + "Version = " + browser.Version + "\n" + "Supports Frames = " + browser.Frames + "\n" + "Supports Tables = " + browser.Tables + "\n" + "Supports Cookies = " + browser.Cookies + "\n" + "Supports VBScript = " + browser.VBScript + "\n" + "Supports JavaScript = " + browser.EcmaScriptVersion.ToString() + "\n" + "Supports Java Applets = " + browser.JavaApplets + "\n" + "Supports ActiveX Controls = " + browser.ActiveXControls + "\n" ... }

    Read the article

  • Using the JRockit Flight Recorder as an Exception Profiler.

    - by Marcus Hirt
    There are a lot of new data points in the JRockit Flight Recorder compared to the data available in the old JRA. One set of data deals with exceptions and where they are thrown. In JRA it was possible to tell how many exceptions were thrown, but it was not possible to determine from where they were thrown. Here is how to do a recording with exception profiling enabled from JRockit Mission Control. 1. Right-click on the JVM to profile and select Start Flight Recording. 2. Select the Profiling with Exceptions template. 3. Wait for the recording to finish. The countdown for the time left will show in the Flight Recorder Control view. 4. When done, the recording will automatically be downloaded and displayed. To show the exceptions, go to the Code | Exceptions tab.

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes Name different device categories Discuss the functions and structure of I/O modules Describe the principles of Programmed I/O Describe the principles of Interrupt-driven I/O Describe the principles of DMA Discuss the evolution characteristics of I/O channels Describe different types of I/O interface Explain the principles of point-to-point and multipoint configurations Discuss the way in which a FireWire serial bus functions Discuss the principles of InfiniBand architecture External Devices An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories… Human readable – e.g. video display Machine readable – e.g. magnetic disk Communications – e.g. Wi-Fi card I/O Modules An I/O module has two major functions… Interface to the processor and memory via the system bus or central switch Interface to one or more peripheral devices by tailored data links Module Functions The major functions or requirements for an I/O module fall into the following categories… Control and timing Processor communication Device communication Data buffering Error detection The I/O function includes a control and timing requirement to coordinate the flow of traffic between internal resources and external devices. Processor communication involves the following… Command decoding Data Status reporting Address recognition The I/O device must be able to perform device communication. This communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor. I/O Module Structure An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. There are three techniques possible for I/O operations… Programmed I/O Interrupt-driven I/O DMA Programmed I/O When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor. I/O Commands To execute an I/O related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
There are four types of I/O commands that an I/O module may receive when it is addressed by a processor… Control – used to activate a peripheral and tell it what to do Test – used to test various status conditions associated with an I/O module and its peripherals Read – causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer Write – causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly. I/O Instructions With programmed I/O there is a close correspondence between the I/O related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system – each device is given a unique identifier or address – and when the processor issues an I/O command, the command contains the address of the desired device, thus each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory and I/O share a common bus, two modes of addressing are possible… Memory-mapped I/O Isolated I/O (for a detailed explanation read page 245 of the book) The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up. Interrupt-driven I/O Interrupt-driven I/O works as follows… The processor issues an I/O command to a module and then goes on to do some other useful work The I/O module will then interrupt the processor to request service when it is ready to exchange data with the processor The processor then executes the data transfer and then resumes its former processing Interrupt Processing The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs… The device issues an interrupt signal to the processor The processor finishes execution of the current instruction before responding to the interrupt The processor tests for an interrupt – determines that there is one – and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed. The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers because the interrupt operation may modify them The interrupt handler processes the interrupt – this includes examination of status information relating to the I/O operation or other event that caused an interrupt When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers Finally, the PSW and program counter values from the stack are restored.
Design Issues Two design issues arise in implementing interrupt I/O… Because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? If multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, four general categories of techniques are in common use… Multiple interrupt lines Software poll Daisy chain Bus arbitration For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks… The I/O transfer rate is limited by the speed with which the processor can test and service a device The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer Direct Memory Access When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA). DMA Function DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach – referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information… Whether a read or write is requested, using the read or write control line between the processor and the DMA module The address of the I/O device involved, communicated on the data lines The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register The number of words to be read or written, communicated via the data lines and stored in the data count register The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, thus the processor is involved only at the beginning and end of the transfer. I/O Channels and Processors Characteristics of I/O Channels As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions – such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common… A selector channel controls multiple high-speed devices. A multiplexor channel can handle I/O with multiple devices at the same time, transferring characters to and from the devices as fast as possible.
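To make the difference in processor involvement concrete, here is a small software analogy in C# (my own illustration, not part of the UNISA notes): programmed I/O corresponds to the processor busy-waiting on a status flag, interrupt-driven I/O to the device signalling back when it is ready, and DMA would correspond to handing an entire block transfer to another component and being notified only once at the end.

using System;
using System.Threading;

// Toy device model used only for this illustration.
class Device
{
    private volatile bool ready;
    public bool Ready { get { return ready; } }
    public byte DataRegister { get; private set; }
    public event Action<byte> DataReady;    // stands in for the interrupt line

    public void IssueReadCommand()
    {
        // Simulate the peripheral completing the operation some time later.
        ThreadPool.QueueUserWorkItem(_ =>
        {
            Thread.Sleep(5);
            DataRegister = 0x42;
            ready = true;
            var handler = DataReady;
            if (handler != null) handler(DataRegister);
        });
    }
}

class IoAnalogy
{
    // Programmed I/O: the processor polls the status flag until the device is ready.
    static byte ReadByteProgrammed(Device device)
    {
        device.IssueReadCommand();
        while (!device.Ready)           // the processor is kept busy doing nothing useful
            Thread.SpinWait(100);
        return device.DataRegister;
    }

    // Interrupt-driven I/O: the processor issues the command and carries on;
    // the "interrupt handler" runs only when the device raises its signal.
    static void ReadByteInterruptDriven(Device device, Action<byte> interruptHandler)
    {
        device.DataReady += interruptHandler;
        device.IssueReadCommand();      // the processor is free to do other work here
    }
}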
The external interface: FireWire and InfiniBand Types of Interfaces One major characteristic of the interface is whether it is serial or parallel… Parallel interface – there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously Serial interface – there is only one line used to transmit data, and bits must be transmitted one at a time With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows… The I/O module sends a control signal requesting permission to send data The peripheral acknowledges the request The I/O module transfers data The peripheral acknowledges receipt of data For a detailed explanation of FireWire and InfiniBand technology read pages 264–270 of the textbook

    Read the article

  • Windows Azure Evolution - Web Sites (aka Antares) Part 1

    - by Shaun
    This is the 3rd post of my Windows Azure Evolution series, focus on the new features and enhancement which was alone with the Windows Azure Platform Upgrade June 2012, announced at the MEET Windows Azure event on 7th June. In the first post I introduced the new preview developer portal and how to works for the existing features such as cloud services, storages and SQL databases. In the second one I talked about the Windows Azure .NET SDK 1.7 on the latest Visual Studio 2012 RC on Windows 8. From this one I will begin to introduce some new features. Now let’s have a look on the first one of them, Windows Azure Web Sites.   Overview Windows Azure Web Sites (WAWS), as known as Antares, was a new feature still in preview stage in this upgrade. It allows people to quickly and easily deploy websites to a highly scalable cloud environment, uses the languages and open source apps of the choice then deploy such as FTP, Git and TFS. It also can be integrated with Windows Azure services like SQL Database, Caching, CDN and Storage easily. After read its introduction we may have a question: since we can deploy a website from both cloud service web role and web site, what’s the different between them? So, let’s have a quick compare.   CLOUD SERVICE WEB SITE OS Windows Server Windows Server Virtualization Windows Azure Virtual Machine Windows Azure Virtual Machine Host IIS IIS Platform ASP.NET WebForm, ASP.NET MVC, WCF ASP.NET WebForm, ASP.NET MVC, PHP Language C#, VB.NET C#, VB.NET, PHP Database SQL Database SQL Database, MySQL Architecture Multi layered, background worker, message queuing, etc.. Simple website with backend database. VS Project Windows Azure Cloud Service ASP.NET Web Form, ASP.NET MVC, etc.. Out-of-box Gallery (none) Drupal, DotNetNuke, WordPress, etc.. Deployment Package upload, Visual Studio publish FTP, Git, TFS, WebMatrix Compute Mode Dedicate VM Shared Across VMs, Dedicate VM Scale Scale up, scale out Scale up, scale out As you can see, there are many difference between the cloud service and web site, but the main point is that, the cloud service focus on those complex architecture web application. For example, if you want to build a website with frontend layer, middle business layer and data access layer, with some background worker process connected through the message queue, then you should better use cloud service, since it provides full control of your code and application. But if you just want to build a personal blog or a  business portal, then you can use the web site. Since the web site have many galleries, you can create them even without any coding and configuration. David Pallmann have an awesome figure explains the benefits between the could service, web site and virtual machine.   Create a Personal Blog in Web Site from Gallery As I mentioned above, one of the big feature in WAWS is to build a website from an existing gallery, which means we don’t need to coding and configure. What we need to do is open the windows azure developer portal and click the NEW button, select WEB SITE and FROM GALLERY. In the popping up windows there are many websites we can choose to use. For example, for personal blog there are Orchard CMS, WordPress; for CMS there are DotNetNuke, Drupal 7, mojoPortal. Let’s select WordPress and click the next button. The next step is to configure the web site. We will need to specify the DNS name and select the subscription and region. Since the WordPress uses MySQL as its backend database, we also need to create a MySQL database as well. 
Windows Azure Web Sites utilize ClearDB to host the MySQL databases. You cannot create a MySQL database directly from SQL Databases section. Finally, since we selected to create a new MySQL database we need to specify the database name and region in the last step. Also we need to accept the ClearDB’s terms as well. Then windows azure platform will download the WordPress codes and deploy the MySQL database and website. Then it will be ready to use. Select the website and click the BROWSE button, the WordPress administration page will be shown. After configured the WordPress here is my personal web blog on the cloud. It took me no more than 10 minutes to establish without any coding.   Monitor, Configure, Scale and Linked Resources Let’s click into the website I had just created in the portal and have a look on what we can do. In the website details page where are five sections. - Dashboard The overall information about this website, such as the basic usage status, public URL, compute mode, FTP address, subscription and links that we can specify the deployment credentials, TFS and Git publish setting, etc.. - Monitor Some status information such as the CPU usage, memory usage etc., errors, etc.. We can add more metrics by clicking the ADD METRICS button and the bottom as well. - Configure Here we can set the configurations of our website such as the .NET and PHP runtime version, diagnostics settings, application settings and the IIS default documents. - Scale This is something interesting. In WAWS there are two compute mode or called web site mode. One is “shared”, which means our website will be shared with other web sites in a group of windows azure virtual machines. Each web site have its own process (w3wp.exe) with some sandbox technology to isolate from others. When we need to scaling-out our web site in shared mode, we actually increased the working process count. Hence in shared mode we cannot specify the virtual machine size since they are shared across all web sites. This is a little bit different than the scaling mode of the cloud service (hosted service web role and worker role). The other mode called “dedicate”, which means our web site will use the whole windows azure virtual machine. This is the same hosting behavior as cloud service web role. In web role it will be deployed on the virtual machines we specified and all of them are only used by us. In web sites dedicate mode, it’s the same. In this mode when we scaling-out our web site we will use more virtual machines, and each of them will only host our own website. And we can specify the virtual machine size in this mode. In the developer portal we can select which mode we are using from the scale section. In shared mode we can only specify the instance count, but in dedicate mode we can specify the instance size as well as the instance count. - Linked Resource The MySQL database created alone with the creation of our WordPress web site is a linked resource. We can add more linked resources in this section.   Pricing For the web site itself, since this feature is in preview period if you are using shared mode, then you will get free up to 10 web sites. But if you are using dedicate mode, the price would be the virtual machines you are using. For example, if you are using dedicate and configured two middle size virtual machines then you will pay $230.40 per month. If there is SQL Database linked to your web site then they will be charged separately based on the Pay-As-You-Go price. 
For example a 1GB web edition database costs $9.99 per month. And the bandwidth will be charged as well. For example 10GB outbound data transfer costs $1.20 per month. For more information about the pricing please have a look at the windows azure pricing page.   Summary Windows Azure Web Sites gives us easier and quicker way to create, develop and deploy website to window azure platform. Comparing with the cloud service web role, the WAWS have many out-of-box gallery we can use directly. So if you just want to build a blog, CMS or business portal you don’t need to learn ASP.NET, you don’t need to learn how to configure DotNetNuke, you don’t need to learn how to prepare PHP and MySQL. By using WAWS gallery you can establish a website within 10 minutes without any lines of code. But in some cases we do need to code by ourselves. We may need to tweak the layout of our pages, or we may have a traditional ASP.NET or PHP web application which needed to migrated to the cloud. Besides the gallery WAWS also provides many features to download, upload code. It also provides the feature to integrate with some version control services such as TFS and Git. And it also provides the deploy approaches through FTP and Web Deploy. In the next post I will demonstrate how to use WebMatrix to download and modify the website, and how to use TFS and Git to deploy automatically one our code changes committed.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements – a new logging standard, for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here. This sample will look at the package and find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

/// <summary>
/// The CreationName of the Tasks to target, e.g. Execute SQL Task
/// </summary>
private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

/// <summary>
/// The name of the task property to target.
/// </summary>
private const string TargetPropertyName = "SqlStatementSource";

/// <summary>
/// The property expression to set.
/// </summary>
private const string ExpressionToSet = "@[User::SQLQueryVariable]";

....

// Check if the task matches our target task type
if (taskHost.CreationName == TargetTaskCreationName)
{
    // Check for the target property
    if (taskHost.Properties.Contains(TargetPropertyName))
    {
        // Get the property, check for an expression and set expression if not found
        DtsProperty property = taskHost.Properties[TargetPropertyName];
        if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
        {
            property.SetExpression(taskHost, ExpressionToSet);
            changeCount++;
        }
    }
}

This is a console application, so to specify which packages you want to target you have three options: Find all packages in the current folder, the default behaviour if no arguments are specified: TaskExpressionPatcher.exe Find all packages in a specified folder, pass the folder as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\ Find a specific package, pass the file path as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to be the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies – 2005 vs 2008, in other words. Download Sample Project TaskExpressionPatcher.zip (6 KB)
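The wrapper code is not shown in the post, but here is a minimal sketch of what it might look like (my own reconstruction, not the author's code): load each .dtsx file with the SSIS runtime Application class, walk the top-level Executables collection looking for TaskHost instances, apply the patch shown above, and save the package back if anything changed. The PatchPackage helper and the argument handling are assumptions for illustration.

using System;
using System.IO;
using Microsoft.SqlServer.Dts.Runtime;

class Program
{
    static void Main(string[] args)
    {
        // Default to the current folder; a single argument may be a folder or a .dtsx file.
        string target = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();
        string[] files = File.Exists(target)
            ? new[] { target }
            : Directory.GetFiles(target, "*.dtsx");

        var application = new Application();
        foreach (string file in files)
        {
            int changeCount = PatchPackage(application, file);
            Console.WriteLine("{0}: {1} change(s)", Path.GetFileName(file), changeCount);
        }
    }

    static int PatchPackage(Application application, string file)
    {
        int changeCount = 0;
        Package package = application.LoadPackage(file, null);

        // Top-level executables only; containers are not descended into.
        foreach (Executable executable in package.Executables)
        {
            TaskHost taskHost = executable as TaskHost;
            if (taskHost == null)
                continue;

            // ... the property/expression check from the post goes here,
            // incrementing changeCount whenever an expression is set ...
        }

        if (changeCount > 0)
            application.SaveToXml(file, package, null);

        return changeCount;
    }
}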

    Read the article

  • Pre-configured VirtualBox images

    - by Lajos Sárecz
    I wrote earlier about Oracle VM Templates being available for our major products, so that when using Oracle VM you can avoid the hassle of installing the Oracle software yourself. However, Oracle VM is a virtualization technology typically used in server environments, so for anyone who would like to use Oracle VM VirtualBox on their own machine to build Oracle environments, the good news is that a number of ready-made environments (images) can be downloaded for VirtualBox as well. For Oracle Database, the latest 11gR2 release can be downloaded this way; it includes the In-Memory Database Cache option, SQL Developer (together with Data Modeler), JDeveloper and, of course, Application Express. All of this runs on Oracle Linux, but there is also the option of downloading a Solaris image, Java, or even a SOA & BPM developer environment.

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built-in Orchard 1.0 but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled as the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine with this command: codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command-line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme.
The code that we want to add is the following LayoutFilter.cs: using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace CustomLayoutMachine.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { var workContext = _wca.GetContext(); var routeValues = filterContext.RouteData.Values; workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller", "action")); } public void OnResultExecuted(ResultExecutedContext filterContext) { } private static string BuildShapeName( RouteValueDictionary values, params string[] names) { return "Layout__" + string.Join("__", names.Select(s => ((string)values[s] ?? "").Replace(".", "_"))); } } } This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filters will then add the following shape names to the default “Layout”: Layout__Orchard_Blogs Layout__Orchard_Blogs__BlogPost Layout__Orchard_Blogs__BlogPost__Item Those template names get mapped into the following file names by the system (assuming the Razor view engine): Layout-Orchard_Blogs.cshtml Layout-Orchard_Blogs-BlogPost.cshtml Layout-Orchard_Blogs-BlogPost-Item.cshtml This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter, we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.

    Read the article

  • Find odd and even rows using $.inArray() function when using jQuery Templates

    - by hajan
    In the past period I made series of blogs on ‘jQuery Templates in ASP.NET’ topic. In one of these blogs dealing with jQuery Templates supported tags, I’ve got a question how to create alternating row background. When rendering the template, there is no direct access to the item index. One way is if there is an incremental index in the JSON string, we can use it to solve this. If there is not, then one of the ways to do this is by using the jQuery’s $.inArray() function. - $.inArray(value, array) – similar to JavaScript indexOf() Here is an complete example how to use this in context of jQuery Templates: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server">     <style type="text/css">         #myList { cursor:pointer; }                  .speakerOdd { background-color:Gray; color:White;}         .speaker { background-color:#443344; color:White;}                  .speaker:hover { background-color:White; color:Black;}         .speakerOdd:hover { background-color:White; color:Black;}     </style>     <title>jQuery ASP.NET</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js" type="text/javascript"></script>     <script language="javascript" type="text/javascript">         var speakers = [             { Name: "Hajan1" },             { Name: "Hajan2" },             { Name: "Hajan3" },             { Name: "Hajan4" },             { Name: "Hajan5" }         ];         $(function () {             $("#myTemplate").tmpl(speakers).appendTo("#myList");         });         function oddOrEven() {             return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd";         }     </script>     <script id="myTemplate" type="text/x-jquery-tmpl">         <tr class="${oddOrEven()}">             <td> ${Name}</td>         </tr>     </script> </head> <body>     <table id="myList"></table> </body> </html> So, I have defined stylesheet classes speakerOdd and speaker as well as corresponding :hover styles. Then, you have speakers JSON string containing five items. And what is most important in our case is the oddOrEven function where $.inArray(value, data) is implemented. function oddOrEven() {     return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd"; } Remark: The $.inArray() method is similar to JavaScript's native .indexOf() method in that it returns -1 when it doesn't find a match. If the first element within the array matches value, $.inArray() returns 0. From http://api.jquery.com/jQuery.inArray/ So, now we can call oddOrEven function from inside our jQuery Template in the following way: <script id="myTemplate" type="text/x-jquery-tmpl">     <tr class="${oddOrEven()}">         <td> ${Name}</td>     </tr> </script> And the result is I hope you like it. Regards, Hajan

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher to you then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices. Then split them by some criteria, say customer id; format them with a template; generate the output and then deliver the documents to the recipients with a click. We and especially I, always say this can be completely dynamic! By this I mean, that you could store customer preferences in a database. What layout would each customer like; what output format they would like and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more peeps! I have finally put together a dynamic bursting demo! Its been minutes in the making but its been tough to find those minutes! Read on ... It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain their requirements. In EBS you have descriptive flexfields that could do the same thing or probably even 'contact' fields to store most of the info. Here's my table structure: Name                           Type ------------------------------ -------- CUSTOMER_ID                    NUMBER(6) TEMPLATE_TYPE                  VARCHAR2(20) TEMPLATE_NAME                  VARCHAR2(120) OUTPUT_FORMAT                  VARCHAR2(20) DELIVERY_CHANNEL               VARCHAR2(50) EMAIL                          VARCHAR2(255) FAX                            VARCHAR2(20) ATTACH                         VARCHAR2(20) FILE_LOC                       VARCHAR2(255) Simple enough right? Just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, its a demo people :0) However the principal is exactly the same for each delivery type. They each have a set of attributes that need to be provided and you will need to handle that in your bursting query. On a side note, in EBS, you use a bursting control file, you can apply the same principals that I'm laying out here you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. that can be found in the documentation here.  
Now we know the combinations of parameters and delivery methods we can construct the query using a series a decode statements: select distinct cp.customer_id "KEY", cp.template_name TEMPLATE, cp.template_type TEMPLATE_FORMAT, 'en-US' LOCALE, cp.output_format OUTPUT_FORMAT, 'false' SAVE_FORMAT, cp.delivery_channel DEL_CHANNEL, decode(cp.delivery_channel,'FILE', cp.file_loc , 'EMAIL', cp.email , 'FAX', cp.fax) PARAMETER1, decode(cp.delivery_channel,'FILE', c.cust_last_name||'_orders.pdf' ,'EMAIL','[email protected]' ,'FAX', 'faxserver.com') PARAMETER2, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','[email protected]' ,'FAX', null) PARAMETER3, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','Your current orders' ,'FAX',NULL) PARAMETER4, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','Please find attached a copy of your current orders with BI Publisher, Inc' ,'FAX',NULL) PARAMETER5, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','false' ,'FAX',NULL) PARAMETER6, decode(cp.delivery_channel,'FILE',NULL ,'EMAIL','[email protected]' ,'FAX',NULL) PARAMETER7 from cust_prefs cp, customers c, orders_view ov where cp.customer_id = c.customer_id and cp.customer_id = ov.customer_id order by cp.customer_id Pretty straightforward, just need to test, test, test, the query and ensure it's bringing back the correct data based on each customers preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use:  Now, your users can run the burst and documents will be formatted, generated and delivered based on the customer prefs. If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table.  Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!

    Read the article

  • T4Toolbox and Visual Studio 2010

    - by Ben Griswold
    I’ve been using the T4Toolbox to help generate my ASP.NET MVC models and scaffolding for a while now.  Another developer tried using my generator project last week and ran into troubles due to a breaking change around the RenderCore() and TransformText() methods in support for VS 2010.  If you upgraded to the latest version of T4Toolbox and receive a build error similar to the following, you are probably in the same boat: GeneratedTextTransformation.[Template].RenderCore(): no suitable method found to override We took the easy way out.  I had him uninstall the latest version of T4Toolbox and install version 9.7.25.1 which my templates were initially coded against.  For now, that worked great, but it sounds like I’ll be doing some rework of the 20+ templates in my project to support Visual Studio 2010 when we migrate later this month.
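For anyone hitting the same build error, the shape of the change is roughly the following. This is a hedged sketch based on the error message above, not on the T4Toolbox source; the exact base-class members may differ between releases, so check the documentation for the version you install. Templates written against the older library overrode RenderCore(), while the VS 2010-oriented releases route text generation through TransformText().

using T4Toolbox;

// Before (around T4Toolbox 9.7.x) - the method to override was RenderCore():
public class OldStyleModelTemplate : Template
{
    protected override void RenderCore()
    {
        WriteLine("// generated model code goes here");
    }
}

// After (VS 2010-era releases) - generation goes through TransformText() instead:
public class NewStyleModelTemplate : Template
{
    public override string TransformText()
    {
        WriteLine("// generated model code goes here");
        return GenerationEnvironment.ToString();
    }
}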

    Read the article

  • Donald Ferguson says end-user programming is the next big thing. Is it?

    - by Joris Meys
    You can guess how I came to ask this question... Anyway: http://www.bbc.co.uk/news/business-11944966 Donald Ferguson claims that WebSphere was his biggest disaster and proclaims that end-user programming will be the way forward. This genuinely raises the question: what happens to current programming languages? Honestly, I don't think that end-user programming will go much beyond a rather rigid template that you can build some apps around. If you see how many people actually manage to understand the basic functionality of functions in Excel... Plus, I fail to see how complex and performant systems can be built in such an end-user programming language (Visual Basic, anyone?). Nice to play around with, but for many applications they're just not the thing. So no worries for the old languages if you ask me. What are your ideas?

    Read the article

  • How does key-based caching work?

    - by Dominic Santos
    I recently read an article on the 37Signals blog and I'm left wondering how it is that they get the cache key. It's all well and good having a cache key that includes the object's timestamp (this means that when you update the object the cache will be invalidated); but how do you then use the cache key in a template without causing a DB hit for the very object that you are trying to fetch from the cache. Specifically, how does this affect One to Many relations where you are rendering a Post's Comments for example. Example in Django: {% for comment in post.comments.all %} {% cache comment.pk comment.modified %} <p>{{ post.body }}</p> {% endcache %} {% endfor %} Is caching in Rails different to just requests to memcached for example (I know that they convert your cache key to something different). Do they also cache the cache key?
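For what it's worth, the idea translates to other stacks too. Here is a minimal C# sketch (my own illustration, not from the 37signals post or the Django/Rails internals) of a generational, key-based cache: the key embeds the object's last-modified timestamp, so updating the object simply makes the old entry unreachable instead of requiring explicit invalidation. It deliberately does not answer the "where does the timestamp come from without a DB hit" part of the question - you still need the parent object, or its cached timestamp, to build the key.

using System;
using System.Runtime.Caching;

public class Post
{
    public int Id { get; set; }
    public DateTime Modified { get; set; }
    public string Body { get; set; }
}

public static class RenderedPostCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    // The key embeds the timestamp: updating the post changes Modified,
    // which changes the key, so stale HTML is simply never looked up again.
    static string KeyFor(Post post) =>
        $"post-fragment/{post.Id}/{post.Modified.Ticks}";

    public static string GetHtml(Post post, Func<Post, string> render)
    {
        string key = KeyFor(post);
        if (Cache.Get(key) is string cached)
            return cached;

        string html = render(post);
        Cache.Set(key, html, new CacheItemPolicy
        {
            // Old generations are never read again; let them quietly age out.
            SlidingExpiration = TimeSpan.FromHours(1)
        });
        return html;
    }
}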

    Read the article

  • Webcast Replay Available: Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by BillSawyer
    I am pleased to release the replay and presentation for ATG Live Webcast Scrambling Sensitive Data in EBS 12 Cloned Environments (Presentation) Eric Bing, Senior Director, Jagan Athreya, Enterprise Manager Product Management, and Elke Phelps, Senior Principal Product Manager, discussed the Oracle E-Business Suite Template for Data Masking Pack, and how it can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. (July 2012) Finding other recorded ATG webcasts: The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Monitoring your WCF Web Apis with AppFabric

    - by cibrax
    The other day, Ron Jacobs made public a template in the Visual Studio Gallery for enabling monitoring capabilities to any existing WCF Http service hosted in Windows AppFabric. I thought it would be a cool idea to reuse some of that for doing the same thing on the new WCF Web Http stack. Windows AppFabric provides a dashboard that you can use to dig into some metrics about the services usage, such as number of calls, errors or information about different events during a service call. Those events not only include information about the WCF pipeline, but also custom events that any developer can inject and make sense for troubleshooting issues.      This monitoring capabilities can be enabled on any specific IIS virtual directory by using the AppFabric configuration tool or adding the following configuration sections to your existing web app, <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> <diagnostics etwProviderId="3e99c707-3503-4f33-a62d-2289dfa40d41"> <endToEndTracing propagateActivity="true" messageFlowTracing="true" /> </diagnostics> <behaviors> <serviceBehaviors> <behavior name=""> <etwTracking profileName="EndToEndMonitoring Tracking Profile" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel>   <microsoft.applicationServer> <monitoring> <default enabled="true" connectionStringName="ApplicationServerMonitoringConnectionString" monitoringLevel="EndToEndMonitoring" /> </monitoring> </microsoft.applicationServer> Bad news is that none of the configuration above can be easily set on code by using the new configuration model for WCF Web stack.  A good thing is that you easily disable it in the configuration when you no longer need it, and also uses ETW, a general-purpose and high-speed tracing facility provided by the operating system (it’s part of the windows kernel). By adding that configuration section, AppFabric will start monitoring your service automatically and providing some basic event information about the service calls. You need some custom code for injecting custom events in the monitoring data. What I did here is to copy and refactor the “WCFUserEventProvider” class provided as sample in the Ron’s template to make it more TDD friendly when using IoC. I created a simple interface “ILogger” that any service (or resource) can use to inject custom events or monitoring information in the AppFabric database. public interface ILogger { bool WriteError(string name, string format, params object[] args); bool WriteWarning(string name, string format, params object[] args); bool WriteInformation(string name, string format, params object[] args); } The “WCFUserEventProvider” class implements this interface by making possible to send the events to the AppFabric monitoring database. The service or resource implementation can receive an “ILogger” as part of the constructor. 
[ServiceContract] [Export] public class OrderResource { IOrderRepository repository; ILogger logger;   [ImportingConstructor] public OrderResource(IOrderRepository repository, ILogger logger) { this.repository = repository; this.logger = logger; }   [WebGet(UriTemplate = "{id}")] public Order Get(string id, HttpResponseMessage response) { var order = this.repository.All.FirstOrDefault(o => o.OrderId == int.Parse(id, CultureInfo.InvariantCulture)); if (order == null) { response.StatusCode = HttpStatusCode.NotFound; response.Content = new StringContent("Order not found"); }   this.logger.WriteInformation("Order Requested", "Order Id {0}", id);   return order; } } The example above uses “MEF” as IoC for injecting a repository and the logger implementation into the service. You can also see how the logger is used to write an information event in the monitoring database. The following image illustrates how the custom event is injected and the information becomes available for any user in the dashboard. An issue that you might run into and I hope the WCF and AppFabric teams fixed soon is that any WCF service that uses friendly URLs with ASP.NET routing does not get listed as a available service in the WCF services tab in the AppFabric console. The complete example is available to download from here.
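The article does not show how the ILogger implementation reaches the MEF container, so here is one possible way to wire it up - an assumption on my part, the actual sample may compose things differently. The idea is simply that the WCFUserEventProvider class (or a thin wrapper around it) is exported as ILogger in the same assembly, so a catalog over that assembly can satisfy OrderResource's importing constructor.

using System.ComponentModel.Composition.Hosting;

public static class CompositionRoot
{
    // Build a MEF container that can satisfy OrderResource's importing constructor.
    // Assumes the ILogger implementation and an IOrderRepository implementation
    // are both decorated with [Export] in the same assembly as OrderResource.
    public static CompositionContainer CreateContainer()
    {
        var catalog = new AssemblyCatalog(typeof(OrderResource).Assembly);
        return new CompositionContainer(catalog);
    }
}

// Example usage, e.g. from a test or a custom instance provider:
// var container = CompositionRoot.CreateContainer();
// var resource = container.GetExportedValue<OrderResource>();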

    Read the article

  • Android Cocos2DX using C++ in Eclipse Helios Windows XP

    - by 25061987
    I have used Eclipse Helios 3.6.1 for Java development. I wanted to start C++ development in the same IDE, so I installed the Autotools Support for CDT, C/C++ Development Tools, and C/C++ Library API Documentation Hover Help plugins. I have included #include "cocos2d.h" in my HelloWorldScene.h file. Now, when writing the statement cocos2d::CCSprite * ccSprite; I am not getting the auto-completion bar (template proposals) when typing something like coco and pressing Ctrl + Space. What can be the problem? This might help you solve my problem: please check here. This is what I got after right-clicking the project and choosing Index - Search for Unresolved Includes. But I have added all the includes, check here. I think this is what is causing the problem in Content Assist. What should I do in this case? The inclusion seems proper.

    Read the article

  • CodePlex Daily Summary for Monday, March 19, 2012

    CodePlex Daily Summary for Monday, March 19, 2012Popular ReleasesHarness: Harness 2.0.1: working on Windows 7 (x64) (not used shell32.dll) Speed ​​up operation Vista/IE7 support (x86 and x64) Minor bug fixesSCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA...Krempel's Windows Phone 7 project: DelayLoadImage release: For documentation check; http://thewp7dev.wordpress.com The source code depends on the HTMLAgilityPack wich can be downloaded here http://htmlagilitypack.codeplex.com/SourceControl/changeset/changes/94773. And the System.Xml.XPath.dll wich is part of the Silverlight SDK and located in "C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Libraries\Client" or in "C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client".SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 11: 1. added process visualizer to show the internal process model of SQL Server through hierachical chart. 2. fixed a few bugs, sorry.WebSocket4Net: WebSocket4Net 0.5: Changes in this release fixed the wss's default port bug improved JsonWebSocket supported set client access policy protocol for silverlight fixed a handshake issue in Silverlight fixed a bug that "Host" field in handshake hadn't contained port if the port is not default supported passing in Origin parameter for handshaking supported reacting pings from server side fixed a bug in data sending fixed the bug sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking improved JSON subprotocol supported sending ping from server to client fixed a bug about sending a closing handshake with no message refactored the code to improve protocol compatibility fixed a bug about sub protocol configuration loading in Mono improved BasicSubProtocol added JsonWebSocketSessionDaun Management Studio: Daun Management Studio 0.1 (Alpha Version): These are these the alpha application packages for Daun Management Studio to manage MongoDB Server. Please visit our official website http://www.daun-project.comRiP-Ripper & PG-Ripper: RiP-Ripper 2.9.28: changes NEW: Added Support for "PixHub.eu" linksSmartNet: V1.0.0.0: DY SmartNet ?????? V1.0callisto: callisto 2.0.21: Added an option to disable local host detection.MyRouter (Virtual WiFi Router): MyRouter 1.0.6: This release should be more stable there were a few bug fixes including the x64 issue as well as an error popping up when MyRouter started this was caused by a NULL valuePulse: Pulse Beta 4: This version is still in development but should include: Logging and error handling have been greatly improved. If you run into an error or Pulse crashes make sure to check the Log folder for a recently modified log file so you can report the details of the issue A bunch of new features for the Wallbase.cc provider. Cleaner separation between inputs, downloading and output. 
Input and downloading are fairly clean now but outputs are still mixed up in the mix which I'm trying to resolve ...Google Books Downloader for Windows: Google Books Downloader-2.0.0.0.: Google Books DownloaderFinestra Virtual Desktops: 2.5.4501: This is a very minor update release. Please see the information about the 2.5 and 2.5.4500 releases for more information on recent changes. This update did not even have an automatic update triggered for it. Adds error checking and reporting to all threads, not only those with message loopsAcDown????? - Anime&Comic Downloader: AcDown????? v3.9.2: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...ArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.0 Release Candidate: Your feedback is welcome - and this is your last chance to get your fixes in for this version! Includes installer for both Feature Server extension and Desktop extension, enhanced functionality for the Desktop tools, and enhanced built-in Javascript Editor for the Feature Server component. This release candidate includes fixes to beta 4 that accommodate domain users for setting up the Server Component, and fixes for reporting/uploading references tracked in the revision table. See Code In-P...C.B.R. : Comic Book Reader: CBR 0.6: 20 Issue trackers are closed and a lot of bugs too Localize view is now MVVM and delete is working. Added the unused flag (take care that it goes to true only when displaying screen elements) Backstage - new input/output format choice control for the conversion Backstage - Add display, behaviour and register file type options in the extended options dialog Explorer list view has been transformed to a custom control. New group header, colunms order and size are saved Single insta...Windows Azure Toolkit for Windows 8: Windows Azure Toolkit for Windows 8 Consumer Prv: Windows Azure Toolkit for Windows 8 Consumer Preview - Preview Release v1.2.1Minor updates to setup experience: Check for WebPI before install Dependency Check updated to support the following VS 11 and VS 2010 SKUs Ultimate, Premium, Professional and Express Certs Windows Azure Toolkit for Windows 8 Consumer Preview - Preview Release v1.2.0 Please download this for Windows Azure Toolkit for Windows 8 functionality on Windows 8 Consumer Preview. The core features of the toolkit include:...Facebook Graph Toolkit: Facebook Graph Toolkit 3.0: ships with JSON Toolkit v3.0, offering parse speed up to 10 times of last version supports Facebook's new auth dialog supports new extend access token endpoint new example Page Tab app filter Graph Api connections using dates fixed bugs in Page Tab appsScintillaNET: ScintillaNET 2.4: 3/12/2012 Jacob Slusser Added support for annotations. Issues Fixed with this Release Issue # Title 25012 25012 25018 25018 25023 25023 25014 25014 New ProjectsAlt Info Revised: Alt Info Revised is a modification for Heroes of Newerth which provides additional information and improved visuals.Android XML parser for .NET: A library for parsing Android binary XML format. 
You could use it to parse AndroidManifest.xml inside the APK files.C++ DSV Filter Library: The C++ DSV Filter Library is a simple to use, easy to integrate and extremely efficient and fast CSV/DSV in-memory data store processing library. The DSV filter allows for the efficient evaluation of complex expressions on a per row basis upon the loaded DSV store.CDSHOPMVC: CD Shop Project.Computation Visualizer: ???????????? ????????Create Hyper-V Server USB Memory: A simple application to automate the preparation process for booting Hyper-V Server 2008 R2. USB Flash Memory. csv viewer for large files: designed specifically for geonames.org file. this file lists all cities in the world excepts usa cities. this software allow reading in columns large file in a jtable. the other class generates a png with this coordinates latitude and longitude. a point is plot for each city.FIM CM Tools: FIM CM Tools are tools and samples written in C# for the Microsoft Forefront Identity Manager - Certificate Management Component. Based on FIMCM 2010. Currently it consists of: - Logging notification handlerFONIS telefonski imenik: FONIS telefonski imenik je program koji Vam omogucava da napravite i održavate telefonski imenik na Vašem racunaru. Program je razvijen u C#. Za instaliranje i pokretanje ovog programa, potrebno je instalirati .NET Framework 4.helmi: projet pfe helmi et moezHNQS: HNQSHow to Tie a Tie: A simple How to tie a tie appLiuyi.Phone.PhoneScreenSaver: PhoneScreenSaver Liuyi.Phone.PhoneScreenSaver windows phonemasterapp: Centraliza a informação de vários sites.POC using ASP.NET Web API: A Simple POC which uses : ASP.NET WebAPI AdventureWorksLT2008R2 Table AutoFac ( Dependancy Injection ) Restful Service JQuery Template Functionality : User search for a product Add/remove product to cart user can register himself for a new account Submit an orderRayBullet .Net Enterprise Application Libraries: RayBullet .Net Enterprise Application Libraries is a set of libraries for enterprise application development on Microsoft .Net framework platform. ReflectInsight Logging Extensions: ReflectInsight logging library extensions for 3rd party integration with Log4net, NLog, PostSharp, Enterprise Library and Visual Studio Trace. The ReflectInsight logging extentions make it easier to integrate you existing application logging infrastructure with the ReflectInsight viewer. You'll never need to look at your logging files in a text editor again and you'll have the full power of our viewer for searching, filtering and navigating your log files. The extensions are developed i...Responsive MVVM: MVVM is a great framework for Silverlight and WPF development. But the major flaw with MVVM is with its responsiveness. When the number of user controls increase beyond a certain limit, the UI gets very slow. Responsive MVVM framework aims to make the UI more responsive.road traffic modelling: Program simullates road traffic and managementSharePoint CAML Query Helper for 2007 and 2010: Use this program to help build and test SharePoint CAML Queries (Collaborative Application Markup Language). Compatible with WSS 3.0, MOSS 2007, Foundation 2010, and SharePoint 2010. Uses the SharePoint Object Model to connect to a site (using a URL). Gets all webs in a site, all lists in a web, and all fields/columns in a list. Can export field information to CSV. 
Also provides interface for building XML CAML Queries, with tools to make it easier managing field names (using drag-drop and cop...Speed up Printer migration using PrintBrm and it's configuration files: This tool is used to create the BrmConfig.XML file that can be used for quickly restoring all the print queues using the Generic / Text Only driver when migrating from a 32bit to a 64bit server.Ultimate Framework (Silverlight Navigation with Prism and Unity): Ultimate framework enables you to easily implement URL driven Silverlight LOB Application, leveraging Prism 4 and Unity for a Modular / Decoupled approach. Supporting nested and parallel frames navigation in Silverlight with any UserControl object within Silverlight.Windows 8 Metro WinRT Channel9 Viewer: Sample Metro App for viewing Channel 9 Videos.

    Read the article

  • Recommended requirements when outsourcing xhtml/css site building?

    - by András Szepesházi
    I'm considering outsourcing a part of our web application development project to freelancers, namely the site building part. What I mean by site building is the process of creating the xhtml/css template files, with dummy content, from a psd file (or any other graphical layout file). The resulting xhtml/css files will be used by our developers as templates for cms-based page rendering. The cms in this case is Drupal, but that might not be of much relevance. I'm looking for a good set of requirements that will result in good-quality xhtml/css code, complies with today's standards, and leaves little to the freelance developer's imagination in terms of what I need. I'm thinking about requirements like:
    - Valid XHTML 1.0 Transitional document type, validated by validator.w3.org
    - Identical rendering in all modern browsers (FF, Chrome, Safari, Opera, IE7-8) and also in IE6
    - All opening and closing block-level elements should be properly commented, referencing the functional part of the user interface they belong to (menu, toolbar, content, etc.)
    - No inline CSS definitions
    And so on. How would you organize a list like that? What requirements would you add?

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error:

[ 3.153370] EXT3-fs (xvda2): using internal journal
[ 3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
[ 3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
[ 3.515604] init: failsafe main process (397) killed by TERM signal
[ 3.801589] blkfront: barrier: write xvda2 op failed
[ 3.801597] blkfront: xvda2: barrier or flush: disabled
[ 3.801611] end_request: I/O error, dev xvda2, sector 52171168
[ 3.801630] end_request: I/O error, dev xvda2, sector 52171168
[ 3.801642] Buffer I/O error on device xvda2, logical block 6521396
[ 3.801652] lost page write due to I/O error on xvda2
[ 3.801755] Aborting journal on device xvda2.
[ 3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
[ 3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
[ 3.814754] journal commit I/O error
[ 6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
[ 6.992267] init: plymouth-splash main process (546) terminated with status 1

The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd ("wo:f" means flush, the next method drbd chooses after barrier):

3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

And on the other host:

3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

I also enabled the option disable_sendpage, as per the drbd docs:

cat /sys/module/drbd/parameters/disable_sendpage
Y

I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

[ 58.603896] blkfront: barrier: write xvda2 op failed
[ 58.603903] blkfront: xvda2: barrier or flush: disabled

I don't even know if ext3 has a nobarrier option. And, because only one of my storage systems is battery backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are Debian 6.0.4, uname -a: Linux 2.6.32-5-xen-amd64, drbd 8.3.7, Xen 4.0.1. The guest is Ubuntu 12.04 LTS, uname -a: Linux 3.2.0-24-generic pvops.

drbd resource:

resource drbdvm
{
  meta-disk internal;
  device /dev/drbd3;

  startup
  {
    # The timeout value when the last known state of the other side was available. 0 means infinite.
    wfc-timeout 0;
    # Timeout value when the last known state was disconnected. 0 means infinite.
    degr-wfc-timeout 180;
  }

  syncer
  {
    # This is recommended only for low-bandwidth lines, to only send those
    # blocks which really have changed.
    #csums-alg md5;

    # Set to about half your net speed
    rate 60M;

    # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently)
    verify-alg md5;
  }

  net
  {
    # The manpage says this is recommended only in pre-production (because of its performance), to determine
    # if your LAN card has a TCP checksum offloading bug.
    #data-integrity-alg md5;
  }

  disk
  {
    # Detach causes the device to work over-the-network-only after the
    # underlying disk fails. Detach is not default for historical reasons, but is
    # recommended by the docs.
    # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
    on-io-error detach;

    # LVM doesn't support barriers, so disabling it. It will revert to flush. Check wo: in /proc/drbd.
    # If you don't disable it, you get IO errors.
    no-disk-barrier;
  }

  on host1
  {
    # universe is a VG
    disk /dev/universe/drbdvm-disk;
    address 10.0.0.1:7792;
  }

  on host2
  {
    # universe is a VG
    disk /dev/universe/drbdvm-disk;
    address 10.0.0.2:7792;
  }
}

DomU cfg:

bootloader = '/usr/lib/xen-default/bin/pygrub'
vcpus = '2'
memory = '512'

#
# Disk device(s).
#
root = '/dev/xvda2 ro'
disk = [
        'phy:/dev/drbd3,xvda2,w',
        'phy:/dev/universe/drbdvm-swap,xvda1,w',
       ]

#
# Hostname
#
name = 'drbdvm'

#
# Networking
#
# fake IP for posting
vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

In my test setup, the primary host's storage is a 9650SE SATA-II RAID PCIe controller with battery; the secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.

    Read the article

  • CodePlex Daily Summary for Friday, November 01, 2013

    CodePlex Daily Summary for Friday, November 01, 2013Popular ReleasesFamily Tree Analyzer: Version 3.0.3.0-beta test3: Count Isle of Man as England & Wales for Census Lost Cousins text was preventing drag n drop of files Lost Cousins no Countries form wont open more than one copy Changed sort order of Lost Cousins buttons to match report which matches Lost Cousins website UK Census date verification is only applied to UK facts Counts duplicate Lost Cousins facts Added report for Lost Cousins facts but no census fact Added Census Duplicate and Census Missing Location reports BEF & AFT residence facts were coun...uComponents: uComponents v6.0.0: This release of uComponents will compile against and support the new API in Umbraco v6.1.0. What's new in uComponents v6.0.0? New features / Resolved issuesThe following workitems have been implemented and/or resolved: 14781 14805 14808 14818 14854 14827 14868 14859 14790 14853 14790 DataType Grid 14788 14810 Drag & Drop support for rows Support for 11 new datatypes (to a total of 20 datatypes): Color Picker Dropdown Checklist Enum Checboxlist Enum Dropdownl...Local History for Visual Studio: Local History - 1.0: New - Support for Visual Studio 2013SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.1: New FeaturesAdded option Limit to current basket subtotal to HadSpentAmount discount rule Items in product lists can be labelled as NEW for a configurable period of time Product templates can optionally display a discount sign when discounts were applied Added the ability to set multiple favicons depending on stores and/or themes Plugin management: multiple plugins can now be (un)installed in one go Added a field for the HTML body id to store entity (Developer) New property 'Extra...Community Forums NNTP bridge: Community Forums NNTP Bridge V54 (LiveConnect): This is the first release which can be used with the new LiveConnect authentication. Fixes the problem that the authentication will not work after 1 hour. Also a logfile will now be stored in "%AppData%\Community\CommunityForumsNNTPServer". If you have any problems please feel free to sent me the file "LogFile.txt".WPF Extended DataGrid: WPF Extended DataGrid 2.0.0.9 binaries: Fixed issue with ICollectionView containg null values (AutoFilter issue)Community TFS Build Extensions: October 2013: The October 2013 release contains Scripts - a new addition to our delivery. These are a growing library of PowerShell scripts to use with VS2013. See our documentation for more on scripting. VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 Community TFS Build Manager VS2013 The Community TFS Build Managers for VS2010, 2012 and 2013 can also be found in the Visual Studio Gallery where upda...SuperSocket, an extensible socket server framework: SuperSocket 1.6 stable: Changes included in this release: Process level isolation SuperSocket ServerManager (include server and client) Connect to client from server side initiatively Client certificate validation New configuration attributes "textEncoding", "defaultCulture", and "storeLocation" (certificate node) Many bug fixes http://docs.supersocket.net/v1-6/en-US/New-Features-and-Breaking-ChangesBarbaTunnel: BarbaTunnel 8.1: Check Version History for more information about this release.Collections2: Collections2 v.1.1: Collection2 v.1.1 is the 1st developer and user release of the Collections2 library. 
The library contains 3 classes TwoWayDictionary<S,T>, TwoWayDictionary<T>, and TwoWayDictException. Binaries are available for .NET 2.0-3.5 (Binary Collections2NET2.0 v.1.1.1.1) and .NET 4.0 (Binary Collections2NET4 v.1.1.1.1). Also available are a source code zip file (Source Collections2NET4 v.1.1.1.1.zip) containing a solution file, and, example console and library project files for Visual Studio 2010. ...Mugen MVVM Toolkit: Mugen MVVM Toolkit 2.1: v 2.1 Added the 'Should' class instead of the 'Validate' class. The 'Validate' class is now obsolete. Added 'Toolkit.Annotations' to support the Mugen MVVM Toolkit ReSharper plugin. Updated JetBrains annotations within the project. Added the 'GlobalSettings.DefaultActivationPolicy' property to represent the default activation policy. Removed the 'GetSettings' method from the 'ViewModelBase' class. Instead of it, the 'GlobalSettings.DefaultViewModelSettings' property is used. Updated...NAudio: NAudio 1.7: full release notes available at http://mark-dot-net.blogspot.co.uk/2013/10/naudio-17-release-notes.htmlDirectX Tool Kit: October 2013: October 28, 2013 Updated for Visual Studio 2013 and Windows 8.1 SDK RTM Added DGSLEffect, DGSLEffectFactory, VertexPositionNormalTangentColorTexture, and VertexPositionNormalTangentColorTextureSkinning Model loading and effect factories support loading skinned models MakeSpriteFont now has a smooth vs. sharp antialiasing option: /sharp Model loading from CMOs now handles UV transforms for texture coordinates A number of small fixes for EffectFactory Minor code and project cleanup ...ExtJS based ASP.NET Controls: FineUI v4.0beta1: +2013-10-28 v4.0 beta1 +?????Collapsed???????????????。 -????:window/group_panel.aspx??,???????,???????,?????????。 +??????SelectedNodeIDArray???????????????。 -????:tree/checkbox/tree_checkall.aspx??,?????,?????,????????????。 -??TimerPicker???????(????、????ing)。 -??????????????????????(???)。 -?????????????,??type=text/css(??~`)。 -MsgTarget???MessageTarget,???None。 -FormOffsetRight?????20px??5px。 -?Web.config?PageManager??FormLabelAlign???。 -ToolbarPosition??Left/Right。 -??Web.conf...CODE Framework: 4.0.31028.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.VidCoder: 1.5.10 Beta: Broke out all the encoder-specific passthrough options into their own dropdown. This should make what they do a bit more clear and clean up the codec list a bit. Updated HandBrake core to SVN 5855.Indent Guides for Visual Studio: Indent Guides v14: ImportantThis release has a separate download for Visual Studio 2010. The first link is for VS 2012 and later. 
Version History Changed in v14 Improved performance when scrolling and editing Fixed potential crash when Resharper is installed Fixed highlight of guides split around pragmas in C++/C# Restored VS 2010 support as a separate download Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not...ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.3 (mvc5): version 3.5.3 - support for mvc5 version 3.5.2 - fix for setting single value to multivalue controls - datepicker min max date offset fix - html encoding for keys fix - enable Column.ClientFormatFunc to be a function call that will return a function version 3.5.1 ========================== - fixed html attributes rendering - fixed loading animation rendering - css improvements version 3.5 ========================== - autosize for all popups ( can be turned off by calling in js...Media Companion: Media Companion MC3.585b: IMDB plot scraping Fixed. New* Movie - Rename Folder using Movie Set, option to move ignored articles to end of Movie Set, only for folder renaming. Fixed* Media Companion - Fixed if using profiles, config files would blown up in size due to some settings duplicating. * Ignore Article of An was cutting of last character of movie title. * If Rescraping title, sort title changed depending on 'Move article to end of Sort Title' setting. * Movie - If changing Poster source order, list would beco...Custom Controls for TFS Work Item Tracking: 1.2.0.0: This is a maintenance release that adds support for TFS 2013. The control pack contains the following work item custom controls: MultiValueControl: a ComboBox to accept and show multiple values for a field by showing a list of checkboxes. More details here: MultiValue Control ScreenshotControl: a simple control (button) to capture a screenshot as a work item attachment. More details here: Screenshot controls. AttachmentsControl: this control cab be used as an alternative to the standar...New Projects3DChemFold: TODOAcc Oct2013 Liq2 MVC3EF5: Proyecto hecho en mvc3 con Entity Framework 5 para liquidación de sueldosAssociated List Box Control: A server side control combining list boxes and buttons to move items between lists. This has custom list boxes developed from a user control in ASP.Net website.AutoSPCUInstaller: Install the SharePoint Cumulative Update without pain. 
Use the AutoSPCUInstaller for a clean an easy installation process.Catalano Secure Delete: Delete your privacy data permanently.Duplica: Duplica is console (Command Prompt) tool aimed to duplicate existing file into a desired destination.High performance xll add-ins for Excel: A modern C++ interface to the Microsoft Excel SDK.Home Server SMART Classic: Home Server SMART is a hard disk and SSD health monitoring, reporting and alerting tool for Windows Home Server.Message Locked Encryption Library for .NET: A MLE (Message Locked Encryption) library developed for the .NET FrameworkMuServer: Web server mainly for streaming media to web browsing capable devices.MVA: Code examples from the book Programming in C# Exam Ref 70-483.NETCommon: This is only common library for NETonioncry: onion makes the aliens cry :(sfast: cms、??????、??????SPWrap - TileBoard Webpart: SPWrap Suite: Contains web parts that are wrapping existing libraries and features into reusable & fully configurable SharePoint components.TFS Build Launcher: A single command line utility to launch your TFS builds, allowing you to pass values into build process parameters.UsefulTools: A collection of useful extension methods and utility classes for .NET in general and WPFVirtual Book Libreria Online: Proyecto Libreria entity frame 5.0XML File Editor: XML Editor

    Read the article

  • Accessibility Update: The Oracle ETPM v2.3 VPAT is now available on oracle.com

    - by Rick Finley
    Oracle Tax is committed to building accessible applications. Oracle uses a VPAT (Voluntary Product Accessibility Template) to document the accessibility status of each product. The VPAT was created by a partnership of the Information Technology Industry Council (ITI) and the U.S. General Services Administration (GSA) as a simple document that US Federal contracting and procurement officials can use to evaluate a product with respect to the provisions contained in Section 508 of the Rehabilitation Act. The Oracle ETPM v2.3 VPAT is now available on oracle.com: http://www.oracle.com/us/corporate/accessibility/templates/t2455.html

    Read the article

  • New release of the Windows Azure SDK and Tools (March CTP)

    - by kaleidoscope
    From now on, you only have to download the Windows Azure Tools for Microsoft Visual Studio, and the SDK will be installed as part of that package.

What's new in the Windows Azure SDK:
- Support for developing Managed Full Trust applications. It also provides support for Native Code via PInvokes and spawning native processes.
- Support for developing FastCGI applications, including support for rewrite rules via the URL Rewrite Module.
- Improved support for the integration of development storage with Visual Studio, including enhanced performance and support for SQL Server (only local instance).

What's new in the Windows Azure Tools for Visual Studio:
- Combined installer includes the Windows Azure SDK
- Addressed top customer bugs
- Native Code Debugging
- Update Notification of future releases
- FastCGI template

http://blogs.msdn.com/jnak/archive/2009/03/18/now-available-march-ctp-of-the-windows-azure-tools-and-sdk.aspx

Ritesh, D
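As a minimal illustrative sketch (not from the announcement), this is the kind of code that the new Managed Full Trust support makes possible inside a role: a plain P/Invoke declaration into a Win32 API, which the earlier restricted trust level did not allow.

using System;
using System.Runtime.InteropServices;

public static class NativeInteropSample
{
    // P/Invoke into kernel32.dll; GetTickCount returns the milliseconds elapsed since boot.
    [DllImport("kernel32.dll")]
    private static extern uint GetTickCount();

    public static void Run()
    {
        // With full trust enabled for the role, this native call is permitted.
        Console.WriteLine("Uptime in milliseconds: {0}", GetTickCount());
    }
}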

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds that include predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. Together with PowerPivot for Excel, which is also freely downloadable for Excel 2010 and is already included in Excel 2013, they bring OLAP functionalities directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
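To make the notion of lift from the paragraphs above concrete, here is a small illustrative sketch (not part of the whitepaper). It compares the fraud rate in the top-scored transactions selected for manual checking with the overall fraud rate that random sampling would give; the class and method names are hypothetical.

using System;
using System.Collections.Generic;
using System.Linq;

public class ScoredTransaction
{
    public double FraudScore { get; set; }   // probability assigned by the model, between 0 and 1
    public bool IsFraud { get; set; }        // reported/confirmed fraud flag
}

public static class ModelEvaluation
{
    // Lift = fraud rate among the top-scored transactions / overall fraud rate.
    public static double Lift(IEnumerable<ScoredTransaction> transactions, int checkedCount)
    {
        var all = transactions.ToList();
        double baseRate = all.Count(t => t.IsFraud) / (double)all.Count;

        var selected = all.OrderByDescending(t => t.FraudScore).Take(checkedCount).ToList();
        double modelRate = selected.Count(t => t.IsFraud) / (double)selected.Count;

        // A value above 1 means the model finds more frauds than random sampling would.
        return modelRate / baseRate;
    }
}

Watching this number over time, for both the supervised and the unsupervised models, is exactly the kind of measurement the continuous learning infrastructure described above is meant to provide.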

    Read the article
