Search Results

Search found 16230 results on 650 pages for 'three js'.


  • Integration with Multiple Versions of BizTalk HL7 Accelerator Schemas

    - by Paul Petrov
    Microsoft BizTalk Accelerator for HL7 comes with multiple versions of the HL7 implementation. One of the typical integration tasks is to receive one format and transmit another. For example, system A works with HL7 v2.4 messages, system B with v2.3, and system C with v2.2. System A is exchanging messages with B and C. The logical solution is to create schemas in separate namespaces for each system and assign maps on the send ports. A schematic diagram of the messaging solution is shown below:   Nothing about that is conceptually complex. On the implementation level, though, things can get nasty because of the elaborate nature of HL7 schemas and the sheer number of message types involved. If you try to implement the maps directly in BizTalk Map Editor, you quickly get buried under thousands of links between subfields of HL7 segments. Since the task is repetitive (HL7 segments are reused between message types), it's natural to take advantage of this modular structure and reduce the amount of work through reuse. Here's where it makes sense to switch from the visual map editor to plain old XSLT. The implementation is done in three steps. First, create XSL templates to map segments from one version to another. This can be done by using BizTalk Map Editor and then copying and modifying the generated XSL code to create one xsl:template per segment. Group all segments for a format mapping in one XSL file (we call it SegmentTemplates.xsl). Here's how the template for the PID segment (Patient Identification) looks: <xsl:template name="PID"> <PID_PatientIdentification> <xsl:if test="PID_PatientIdentification/PID_1_SetIdPatientId"> <PID_1_SetIdPid> <xsl:value-of select="PID_PatientIdentification/PID_1_SetIdPatientId/text()" /> </PID_1_SetIdPid> </xsl:if> <xsl:for-each select="PID_PatientIdentification/PID_2_PatientIdExternalId"> <PID_2_PatientId> <xsl:if test="CX_0_Id"> <CX_0_Id> <xsl:value-of select="CX_0_Id/text()" /> </CX_0_Id> </xsl:if> <xsl:if test="CX_1_CheckDigit"> <CX_1_CheckDigitSt> <xsl:value-of select="CX_1_CheckDigit/text()" /> </CX_1_CheckDigitSt> </xsl:if> <xsl:if test="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed"> <CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed> <xsl:value-of select="CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed/text()" /> </CX_2_CodeIdentifyingTheCheckDigitSchemeEmployed> . . . // skipped for brevity This is the most tedious and time-consuming part. Templates need to be created only for those segments that are used in the message interchange. Once this is done, the rest goes much easier. The next step is to create message-type-specific XSL that references (imports) the segment templates XSL file. Inside this file, simply call the segment templates in the appropriate places. 
    For example, the beginning of the mapping XSL for the ADT_A01 message would look like this:   <xsl:import href="SegmentTemplates_23_to_24.xslt" />  <xsl:output omit-xml-declaration="yes" method="xml" version="1.0" />   <xsl:template match="/">    <xsl:apply-templates select="s0:ADT_A01_23_GLO_DEF" />  </xsl:template>   <xsl:template match="s0:ADT_A01_23_GLO_DEF">    <ns0:ADT_A01_24_GLO_DEF>      <xsl:call-template name="EVN" />      <xsl:call-template name="PID" />      <xsl:for-each select="PD1_PatientDemographic">        <xsl:call-template name="PD1" />      </xsl:for-each>      <xsl:call-template name="PV1" />      <xsl:for-each select="PV2_PatientVisitAdditionalInformation">        <xsl:call-template name="PV2" />      </xsl:for-each> This code simply calls the segment templates directly for required singular elements and inside a for-each loop for optional/repeating elements. And lastly, create a BizTalk map (btm) that references the message-type-specific XSL. It is essentially an empty map with the Custom XSL Path set to the appropriate XSL: In the end, you will have one segment templates file that is referenced by many message-type-specific XSL files, which are in turn used by BizTalk maps. Once all the segment templates are created they are widely reusable, and the rest of the work is very simple and clean.

    Read the article

  • BizTalk 2009 - Installing BizTalk Server 2009 on XP for Development

    - by StuartBrierley
    At my previous employer, when developing for BizTalk Server 2004 using Visual Studio 2003, we made use of separate development and deployment environments; developing in Visual Studio on our client PCs and then deploying to a separate shared BizTalk 2004 Server from there.  This server was part of a multi-server Standard BizTalk environment comprising separate BizTalk Server 2004 and SQL Server 2000 servers.  This environment was implemented a number of years ago by an outside consulting company, and while it worked it did occasionally cause contention issues with three developers deploying to the same server to carry out unit testing! Now that I am making the design and implementation decisions about the environment that BizTalk will be developed in and deployed to, I have chosen to create a single "server" installation on my development PC, installing SQL Server 2008, Visual Studio 2008 and BizTalk Server 2009 on a single system.  The client PC in use is actually a MacBook Pro running Windows XP; not the most powerful of systems for high volume processing but it should be powerful enough to allow development and initial unit testing to take place. I did not need to, and so chose not to, install all of the components detailed in the Microsoft guide for installing BizTalk 2009 on Windows XP, but I did follow the basics of the procedures detailed within.  Outlined below are the highlights of this process and any details of the choices I made.   Install IIS I had previously installed Windows XP, including all current service packs and critical updates.  At the time of installation this included Service Pack 3, the .NET Framework 3.5 and MS Windows Installer 3.1.  Having a running XP system, my first step was to install IIS - this is quite straightforward and posed no difficulties. Install Visual Studio 2008 The next step for me was to install Visual Studio 2008.  Making sure to select a custom installation is crucial at this point, as you need to make sure that you deselect SQL Server 2005 Express Edition as it can cause the BizTalk installation to fail.  The installation guide suggests that you only select Visual C# when selecting features to install, but I decided that due to some legacy systems I have code for I would also select the VB and ASP options. Visual Studio 2008 Service Pack 1 Following the completion of the installation of Visual Studio itself you should then install the Visual Studio 2008 Service Pack 1. SQL Server 2008 Standard Edition The next step before installing BizTalk Server 2009 itself is to install SQL Server 2008 Standard Edition. On the feature selection screen make sure that you select the following options: Database Engine Services SQL Server Replication Full-Text Search Analysis Services Reporting Services Business Intelligence Development Studio Client Tools Connectivity Integration Services Management Tools Basic and Complete Use the default instance and the same accounts for all SQL Server instances - in my case I used the Network Service and Local Service accounts for the two sets of accounts. On the database engine configuration screen I selected Windows authentication and added the current user, adding the same user again on the Analysis Services configuration screen.  All other screens were left on the default settings. The SQL Server 2008 installation also included the installation of the hotfix for XP KB942288-v3, the Windows Installer 4.5 Redistributable. 
    System Configuration At this stage I took a moment to disable the SQL Server shared memory protocol and enable the Named Pipes and TCP/IP protocols.  These can be found in the SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for MSSQLServer.  I also made sure that the DTC settings were configured correctly.   BizTalk Server 2009 The penultimate step is to install BizTalk Server 2009 Standard Edition. I had previously downloaded the redistributable prerequisites as a CAB file, so I was able to make use of this when carrying out the installation. When selecting which components to install I selected: Server Runtime BizTalk EDI/AS2 Runtime WCF Adapter Runtime Portal Components Administrative Tools WCF Administration Tools Developer Tools and SDK, Enterprise SSO Administration Module Enterprise SSO Master Secret Server Business Rules Components BAM Alert Provider BAM Client BAM Eventing Once installation has completed, clear the Launch BizTalk Server Configuration check box and select Finish. Verify the Installation Before configuring BizTalk Server it is a good idea to check that BizTalk Server 2009 is installed and that SQL Server 2008 has started correctly.  The easiest way to verify the BizTalk installation is to check Programs and Features in Control Panel.  Check that SQL is started by looking in the SQL Server Configuration Manager. Configure BizTalk Server 2009 Finally we are ready to configure BizTalk Server 2009.  To start this I opted for a custom configuration that allowed me to choose in more detail the settings to be used. For all databases I selected the local server and default database names. For all accounts I used a local account that had been created specifically for the BizTalk services. For all Windows groups I allowed the configuration wizard to create the default local groups. The configuration wizard then ran:   Upon completion you will be presented with a screen detailing the success or failure of the configuration.  If your configuration failed you will need to sort out the issues and try again (it is possible to save the configuration settings for later use if you want to - except passwords of course!).  If you see lots of nice green ticks - congratulations, BizTalk Server 2009 on XP is now installed and configured ready for development.

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code, which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in Excel. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features of XLeratorDB: it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet, paste it into SSMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement, returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel
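    Purely as an illustration of how the same query could be consumed from application code once the spreadsheet is retired, here is a minimal C# sketch. It assumes the BONDS table with the Maturity, Rate, Price, Frequency, and Basis columns used above and that the wct.YIELD function is installed in the target database; the connection string and database name are hypothetical and would need to be adjusted.

    using System;
    using System.Data.SqlClient;

    class YieldFromSql
    {
        static void Main()
        {
            // Hypothetical connection string - point it at the database where XLeratorDB is installed.
            const string connectionString = "Server=.;Database=BondData;Integrated Security=true";
            const string sql =
                "SELECT *, wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD " +
                "FROM BONDS";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Print the computed yield for each bond row.
                        Console.WriteLine("Yield: {0}", reader["YIELD"]);
                    }
                }
            }
        }
    }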

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder. Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability. Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker. Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column. Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. 
It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance. Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future. Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations. You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
    Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk” (see Note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read. Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk! Note 1 - IDC Report. April, 2011. “IDC’s Archival Storage Solutions Taxonomy, 2011” - Brian Zents

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up on a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results. The sample We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttpBinding. To show you the configuration for the consumption of this web service please refer to the below diagram. You can see that, as before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the below code sample in the application, which will asynchronously make the web service calls and handle the responses on a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.   Sample 1 – WCF with Default Timeouts In this test I set about recreating the same scenario as previously, running the test but this time using WCF as the messaging component. For the first test I would use the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client-side trace is as follows:   The server-side trace is as follows: Some observations on the results are as follows: The timeouts happened quicker than in the previous tests because some calls were timing out before they attempted to connect to the server The first few calls that timed out did actually connect to the server and did execute successfully on the server   Test 2 – Increase Open Connection Timeout & Send Timeout In this test I wanted to increase both the send and open timeout values to try and give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The client-side trace for this test was   The server-side trace for this test was: Some observations on this test are: This test proved that if the timeouts are high enough, everything will just go through   Test 3 – Increase Just the Send Timeout In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"   The Test We simulated 21 calls to the web service   Test Results The below is the client-side trace The below is the server-side trace Some observations on this test are: In this test, from both the client and server perspective, everything ran through fine The open connection timeout did not seem to have any effect   Test 4 – Increase Just the Open Connection Timeout In this test I wanted to validate the change to the open connection setting by increasing just this on its own. 
    The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"   The Test We simulated 21 calls to the web service Test Results The client-side trace was The server-side trace was Some observations on this test are: In this test you can see that increasing the open connection timeout (which relates to opening the channel) was not what stopped the calls timing out It is the sending of data which is timing out On the server you can see that the successful few calls were fine, but there were also a few calls which hit the server but timed out on the client You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options   Test 5 – Smaller Increase in Send Timeout In this test I wanted to make a smaller increase to the send timeout than previously, just to prove that it was the key setting which was controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"   The Test We simulated 21 calls to the web service Test Results The client-side trace was   The server-side trace was Some observations on this test are: You can see that most of the calls got through fine On the client you can see that call 20 timed out but still hit the server and executed fine.   Summary At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks: ASMX The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection. WSE The timeout value includes both the time to obtain a connection and also the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40-second timeout setting may not throw the error until 60 seconds have passed, when the connection to the server is made. If the connection to the server is made you should be aware that your message will be processed, and you should design for this. WCF The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, the counter for this setting includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from the point your user code makes the call, regardless of whether we are waiting for a connection or have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.
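    To make the role of the send timeout concrete, below is a minimal C# sketch (not taken from the original sample application) showing how the four timeouts discussed above could be set on a BasicHttpBinding in code rather than in configuration. The service address and contract are hypothetical placeholders.

    using System;
    using System.ServiceModel;

    class TimeoutConfigurationSketch
    {
        static void Main()
        {
            var binding = new BasicHttpBinding
            {
                // Time allowed to open the channel (obtain a connection).
                OpenTimeout = TimeSpan.FromMinutes(1),
                // Time allowed for the whole send; per the tests above, this is the key setting.
                SendTimeout = TimeSpan.FromMinutes(10),
                // Time allowed to wait for a reply on an open connection.
                ReceiveTimeout = TimeSpan.FromMinutes(10),
                // Time allowed to close the channel gracefully.
                CloseTimeout = TimeSpan.FromMinutes(1)
            };

            // Hypothetical service address and contract type.
            var address = new EndpointAddress("http://remoteserver/SampleService.svc");
            var factory = new ChannelFactory<ISampleService>(binding, address);
            ISampleService client = factory.CreateChannel();
            // Calls made through 'client' that exceed SendTimeout surface as a TimeoutException.
        }
    }

    // Hypothetical service contract standing in for the consumed web service.
    [ServiceContract]
    interface ISampleService
    {
        [OperationContract]
        string Process(string input);
    }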

    Read the article

  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
    Windows Azure Web Sites has added a new pricing tier that will solve the #1 blocker for the web development community. The shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.
                   Free               Shared                          Reserved
    # of Sites     10                 100                             100
    Egress         165MB/Day          5GB/Month Included              5GB/Month Included
    Storage        1GB                1GB                             10GB
    Throttling     CPU/Memory/Egress  CPU/Memory                      Unlimited
    Price          Free               $.02/hr per site, per instance  $.08/hr per core
    Setting the Stage In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy.  The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure and the volume of adoption is increasing every week. Preview Feedback In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites relates to the lack of the free offer’s support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We’re happy to announce that this #1 request has been delivered as a feature of the new shared plan. New Shared Tier Portal Features In the screen shot below, the “Scale” tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site. Shared Tier Benefits Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model: Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped. Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode. 
    Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier.  Long-term, it’s easy to see how the new shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping the cost to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about continually incurring additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they’ll have the whole family of Windows Azure features available. Continuous Deployment from GitHub and CodePlex Along with this new announcement are two other exciting new features. I’m proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below: Publishing to an Azure Web Site from CodePlex Publishing to an Azure Web Site from GitHub.com These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It’s never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself. Stay tuned to my Twitter feed for Windows Azure announcements, updates, and links: @clinted

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away and I have been receiving lots of requests regarding how SQL enthusiasts can maximize the time they’ll be spending at TechEd India 2012. Trust me – TechEd is the biggest tech event in India and it is much larger in magnitude than we can imagine. There are plenty of tracks there and lots of things to do. Honestly, we would need to clone ourselves multiple times to completely cover the event. However, I am going to talk about SQL enthusiasts only right now. In this post, I’ll share a few things they can do at this big event. But before I start talking about specific things, there is one thing which is a must – the keynotes. There are amazing keynotes planned every single day at TechEd India 2012. One should not miss them at all. Social Media I am a big believer in social media. I am everywhere - Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn as well as contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers’ names with their respective Twitter accounts. Networking Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life outside SQL. The best way to start the communication is to talk about something new. Here are a few lines I usually use when I have to break the ice: SQL Server 2012 has just been released and I have installed it. How many SQL Server sessions are you going to attend? I am going to attend _________ I am a big fan of SQL Server. Sessions Agenda Day 1 T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian Catapult your data with SQL Server 2012 integration services - Praveen Srivatsa Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte SQL Server Misconceptions and Resolution – A Practical Perspective – Pinal Dave and Vinod Kumar Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar Agenda Day 2 Hands-on Lab – Exploring Power View with SQL Server 2012 – Ravi S. Maniam Hands-on Lab - SQL Server 2012 – AlwaysOn Availability Groups - Amit Ganguli Agenda Day 3 Peeling SQL Server like an Onion: Internals Debunked - Vinod Kumar Speed Up! – Parallel Processes and Unparalleled Performance - Pinal Dave Keeping Your Database Available – ‘AlwaysOn’ - Balmukund Lakhani Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa Product Booth and Event Partners There will be a dedicated SQL Server booth at the event. I suggest you stop by there and talk with SQL Server experts. Additionally there will be booths of various event partners. Stop by their booths and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site and if that interests you, you can talk about the subject with them. Bring Your Camera Make a list of the people you want to meet. Follow them on Twitter or send them an email to find out where they will be. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them and later on, share the photo on social media. 
    It would be nice to send everyone an email with the high-resolution images attached if you have their email addresses. After-hours parties After-hours parties are not always about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting an SQL expert, and we ended up talking for long hours on various aspects of SQL Server. After four hours, we figured out that he lives in the same apartment complex as I do, and since then we have built an excellent friendship; he has become a family friend. So, my advice is that you start to seek out who is meeting where in the evening and see if you can get invited to the parties. Make new friends, but never lose mutual respect by doing something silly. Meet Me I will be at the event for three days straight. I will be around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the community is very important, as we all speak the same language at the end of the day – SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Man pages not finding entry

    - by Mike
    So, I'm not sure what is going on with my system (ubuntu 12.04), but my man pages do not seem to be working. I try man gcc and get the following response No manual entry for gcc See 'man 7 undocumented' for help when manual pages are not available. However I see the man entry in /usr/share/man/man1/gcc.1.gz Here is what my /etc/manpath.config file looks like # manpath.config # # This file is used by the man-db package to configure the man and cat paths. # It is also used to provide a manpath for those without one by examining # their PATH environment variable. For details see the manpath(5) man page. # # Lines beginning with `#' are comments and are ignored. Any combination of # tabs or spaces may be used as `whitespace' separators. # # There are three mappings allowed in this file: # -------------------------------------------------------- # MANDATORY_MANPATH manpath_element # MANPATH_MAP path_element manpath_element # MANDB_MAP global_manpath [relative_catpath] #--------------------------------------------------------- # every automatically generated MANPATH includes these fields # #MANDATORY_MANPATH /usr/src/pvm3/man # MANDATORY_MANPATH /usr/man MANDATORY_MANPATH /usr/share/man MANDATORY_MANPATH /usr/local/share/man #--------------------------------------------------------- # set up PATH to MANPATH mapping # ie. what man tree holds man pages for what binary directory. # # *PATH* -> *MANPATH* # MANPATH_MAP /bin /usr/share/man MANPATH_MAP /usr/bin /usr/share/man MANPATH_MAP /sbin /usr/share/man MANPATH_MAP /usr/sbin /usr/share/man MANPATH_MAP /usr/local/bin /usr/local/man MANPATH_MAP /usr/local/bin /usr/local/share/man MANPATH_MAP /usr/local/sbin /usr/local/man MANPATH_MAP /usr/local/sbin /usr/local/share/man MANPATH_MAP /usr/X11R6/bin /usr/X11R6/man MANPATH_MAP /usr/bin/X11 /usr/X11R6/man MANPATH_MAP /usr/games /usr/share/man MANPATH_MAP /opt/bin /opt/man MANPATH_MAP /opt/sbin /opt/man #--------------------------------------------------------- # For a manpath element to be treated as a system manpath (as most of those # above should normally be), it must be mentioned below. Each line may have # an optional extra string indicating the catpath associated with the # manpath. If no catpath string is used, the catpath will default to the # given manpath. # # You *must* provide all system manpaths, including manpaths for alternate # operating systems, locale specific manpaths, and combinations of both, if # they exist, otherwise the permissions of the user running man/mandb will # be used to manipulate the manual pages. Also, mandb will not initialise # the database cache for any manpaths not mentioned below unless explicitly # requested to do so. # # In a per-user configuration file, this directive only controls the # location of catpaths and the creation of database caches; it has no effect # on privileges. # # Any manpaths that are subdirectories of other manpaths must be mentioned # *before* the containing manpath. E.g. /usr/man/preformat must be listed # before /usr/man. # # *MANPATH* -> *CATPATH* # MANDB_MAP /usr/man /var/cache/man/fsstnd MANDB_MAP /usr/share/man /var/cache/man MANDB_MAP /usr/local/man /var/cache/man/oldlocal MANDB_MAP /usr/local/share/man /var/cache/man/local MANDB_MAP /usr/X11R6/man /var/cache/man/X11R6 MANDB_MAP /opt/man /var/cache/man/opt # #--------------------------------------------------------- # Program definitions. These are commented out by default as the value # of the definition is already the default. To change: uncomment a # definition and modify it. 
# #DEFINE pager pager -s #DEFINE cat cat #DEFINE tr tr '\255\267\264\327' '\055\157\047\170' #DEFINE grep grep #DEFINE troff groff -mandoc #DEFINE nroff nroff -mandoc #DEFINE eqn eqn #DEFINE neqn neqn #DEFINE tbl tbl #DEFINE col col #DEFINE vgrind vgrind #DEFINE refer refer #DEFINE grap grap #DEFINE pic pic -S # #DEFINE compressor gzip -c7 #--------------------------------------------------------- # Misc definitions: same as program definitions above. # #DEFINE whatis_grep_flags -i #DEFINE apropos_grep_flags -iEw #DEFINE apropos_regex_grep_flags -iE #--------------------------------------------------------- # Section names. Manual sections will be searched in the order listed here; # the default is 1, n, l, 8, 3, 0, 2, 5, 4, 9, 6, 7. Multiple SECTION # directives may be given for clarity, and will be concatenated together in # the expected way. # If a particular extension is not in this list (say, 1mh), it will be # displayed with the rest of the section it belongs to. The effect of this # is that you only need to explicitly list extensions if you want to force a # particular order. Sections with extensions should usually be adjacent to # their main section (e.g. "1 1mh 8 ..."). # SECTION 1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7 # #--------------------------------------------------------- # Range of terminal widths permitted when displaying cat pages. If the # terminal falls outside this range, cat pages will not be created (if # missing) or displayed. # #MINCATWIDTH 80 #MAXCATWIDTH 80 # # If CATWIDTH is set to a non-zero number, cat pages will always be # formatted for a terminal of the given width, regardless of the width of # the terminal actually being used. This should generally be within the # range set by MINCATWIDTH and MAXCATWIDTH. # #CATWIDTH 0 # #--------------------------------------------------------- # Flags. # NOCACHE keeps man from creating cat pages. #NOCACHE Thanks for any help (p.s. even 'man man' fails) Edit: When I run ls -l /usr/share/man/man1/gcc* I get the following output lrwxrwxrwx 1 root root 12 May 27 15:41 /usr/share/man/man1/gcc.1.gz -> gcc-4.6.1.gz -rw-r--r-- 1 root root 217776 Apr 15 17:34 /usr/share/man/man1/gcc-4.6.1.gz

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background  in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities. Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing  contextual  cues associated with the keyword concepts so that you get many more “smart” (that is, human-like) deductions,  rather than a series of “dumb” matches.  Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question.  Smarter than your average search engine, but is it as smart as a human? Watson was especially fast at descrambling mixed-up state capital names, and recalling and pairing movie titles where one started and the other ended in the same word (e.g., Billion Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue,  and “running the board”, being first to respond on about 60% of the clues.  Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy. So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, …. deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to “slow Watson down a bit,” and the humans came back with respectable scores in Double Jeopardy. 
    This was partially thanks to a very sympathetic audience (and host, also a human) providing “group-think” on many questions, especially baseball’s most valuable players, which, by the way, couldn’t have been hard because even I knew them. Yes, that’s right, the humans cheated. Since Watson could speak but not hear us (it didn’t have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law. In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I’m not sure I’d want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, took only four years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you “the expert” in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future to enable users to better process big data. How do you think you’d like it? Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, … unimpressed!

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages that were all built from the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages, and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here. This sample will look at the package and find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task (a sketch of that wrapper loop follows at the end of this entry).

/// <summary>
/// The CreationName of the Tasks to target, e.g. Execute SQL Task
/// </summary>
private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

/// <summary>
/// The name of the task property to target.
/// </summary>
private const string TargetPropertyName = "SqlStatementSource";

/// <summary>
/// The property expression to set.
/// </summary>
private const string ExpressionToSet = "@[User::SQLQueryVariable]";

....

// Check if the task matches our target task type
if (taskHost.CreationName == TargetTaskCreationName)
{
    // Check for the target property
    if (taskHost.Properties.Contains(TargetPropertyName))
    {
        // Get the property, check for an expression and set expression if not found
        DtsProperty property = taskHost.Properties[TargetPropertyName];
        if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
        {
            property.SetExpression(taskHost, ExpressionToSet);
            changeCount++;
        }
    }
}

This is a console application, so to specify which packages you want to target you have three options:
1. Find all packages in the current folder, the default behaviour if no arguments are specified: TaskExpressionPatcher.exe
2. Find all packages in a specified folder, passing the folder as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\
3. Find a specific package, passing the file path as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx

The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words. Download Sample Project TaskExpressionPatcher.zip (6 KB)
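The wrapper code mentioned above is not shown in the post, so here is a minimal, hedged sketch of what such a console loop might look like. It is an illustration only: the TaskExpressionPatcher class name, the PatchFolder helper and the folder handling are assumptions rather than part of the original sample, although Application, Package and TaskHost are the standard Microsoft.SqlServer.Dts.Runtime types the article relies on.

using System;
using System.IO;
using Microsoft.SqlServer.Dts.Runtime;

// Hypothetical wrapper: load every .dtsx file in a folder, patch it, and save it back.
internal static class TaskExpressionPatcher
{
    internal static void PatchFolder(string folder)
    {
        Application app = new Application();
        foreach (string path in Directory.GetFiles(folder, "*.dtsx"))
        {
            // Load the package from disk (no events listener supplied).
            Package package = app.LoadPackage(path, null);
            int changeCount = 0;

            // Walk the top-level executables only, as the article describes.
            foreach (Executable executable in package.Executables)
            {
                TaskHost taskHost = executable as TaskHost;
                if (taskHost == null)
                    continue;

                // ... the expression-setting snippet shown above goes here,
                // incrementing changeCount when an expression is applied.
            }

            // Only re-save packages that were actually modified.
            if (changeCount > 0)
            {
                app.SaveToXml(path, package, null);
                Console.WriteLine("Patched {0}", path);
            }
        }
    }
}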

    Read the article

  • SSAS Maestro Training in July 2012 #ssasmaestro #ssas

    - by Marco Russo (SQLBI)
    A few hours ago Chris Webb blogged about SSAS Maestro and I’d like to propagate the news, adding some background info as well. SSAS Maestro is the premier certification on Analysis Services that selects the best experts in Analysis Services around the world. In 2011 Microsoft organized two rounds of training/exams for SSAS Maestros and up to now only 11 people from the first wave have been announced – around 10% of attendees of the course! In the next few days the new Maestros from the second round should be announced, and this long process is caused by many factors that I’m going to explain. First, the course is just a step in the process. Before the course you receive a list of topics to study, including the slides of the course. During the course, students receive a lot of information that might not have been included in the slides, and the best part of the course is class interaction. Students are expected to bring their experience to the table, and comparing case studies and experiences and having long debates is an important part of the learning process. And it is also a part of the evaluation: good questions might be even more important than good answers! Finally, after the course, students have their homework, and this may require one or two months to be completed. After that, a long (very long) evaluation process begins, taking into account homework, labs, participation… And for this reason the final evaluation may arrive months after the course. We are going to improve and shorten this process with the next courses. The first wave of SSAS Maestro was by invitation only, and now the program is opening up, requiring a fee to participate in order to cover the cost of preparation, training and exams. The number of attendees will be limited and candidates will have to send their CVs in order to be admitted to the course. Only experienced Analysis Services developers will be able to participate in this challenging program. So why should you do that? Well, only 10% of students have passed the exam so far. So if you want a 100% guarantee of passing the exam, you need to study a lot, before, during and after the course. But the course by itself is a precious opportunity to share experience, build your network and learn mission-critical, enterprise-level best practices that are hard to find written in books. Oh, well, many existing white papers are required reading *before* the course! The course is now 5 days long, and every day can be *very* long. We’ll have lectures and discussions in the morning and labs in the afternoon/evening, plus some more lectures in one or two afternoons. A heavy part of the course is about performance optimization, capacity planning and monitoring. This edition will also introduce Tabular models, but don’t expect something you might find in the SSAS Tabular Workshop – only performance, scalability, monitoring and optimization will be covered; knowing Analysis Services is a requirement just to be accepted! Chris Webb and I will be the teachers for this edition. The course is expensive. Applying for SSAS Maestro will cost around 7000€ plus taxes (reduced to 5000€ for students of a previous SSAS Maestro edition), and you will be locked in a training room for the large part of the week. So why should you do that? Well, as I said, this is a challenging course. You will not find the time to check your email – the content is just too interesting to let yourself be distracted by something else. Another good reason is that this course will take place in Italy. 
Well, the course will take place in the brand new Microsoft Innovation Campus, but in general we’ll be able to provide you hints to get great food and, if you are willing to attach one week-end to your trip, there are plenty of places to visit (and I’m not talking about the classic Rome-Florence-Venice) – you might really need to relax after such a week! Finally, the marking process after the course will be faster – we’d like to complete the evaluation within three months after the course, considering that 1-2 months might be required to complete the homework. If at this point you are not scared: registration will open in mid-April, but you can already write to [email protected] sending your CV/resume and a short description of your level of SSAS knowledge and experience. The selection process will start early and you may want to put your admission form on top of the FIFO queue!

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #001

    - by pinaldave
    I am introducing a new series today.  This series is called “Memory Lane.”  From the last six years and 2,300 articles, there are fantastic articles I keep revisiting.  Sometimes when I read old blog posts I think I should have included something or added a bit more to the topic.  But for many articles, I still feel they are fantastic (even after six years) and could be read again and again. I have also found that after six years of blogging, readers will write to me and say “Pinal, why don’t you write about X, Y or Z.”  The answer is: I already did!  It is here on the blog, or in the comments, or possibly in one of my books.  The solution has always been there, it is simply a matter of finding it and presenting it again.  That is why I have created Memory Lane.  I will be listing the best articles from the same week of the past six years.  You will find plenty of reading material every Saturday from articles of SQLAuthority past. Here is the list of curetted articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Display Foreign Key Relationships and Name of the Constraint for Each Table in Database My blogging journey began with this blog post. As many of you know my journey began with creating a repository of my scripts. This was very first script which I had written to find out foreign key relationship and constraints. The same query was updated later on using the new SYS schema modification in SQL Server. Version 1: Using sys.schema Version 2: Using sys.schema and additional columns 2007 Milestone Posts – 1 Year (365 blogs) and 1 Million Views When I reached 1st week of Nov in 2007 SQLAuthority.com blog had around 365 blog posts and 1 Million Views. I was not obsessed with the statistics before but this was indeed an interesting moment for me as I was blogging for myself and did not realize that so many people are reading my blog. In year 2006 there were not many bloggers so blogging was new to me as well. I was learning it as I go. 2008 Stored Procedure WITH ENCRYPTION and Execution Plan If you have stored procedure and its code is encrypted when you execute it what will be displayed in the execution plan. There are two kinds of execution plans 1) Estimated and 2) Actual. It will be indeed interesting to know what is displayed in both the cases when Stored Procedure is encrypted. What is your guess? Now go ahead and click on here and figure out your answer. If the user is not able to login into SQL Server due to any error or issues there were two different blog post addresses the same issue here and here. 2009 It seems like Nov is the month of SQLPASS month. In 2009 on the same week I was in USA attending SQLPASS event. I had a fantastic experience attending the event. Here are the blog posts covering the subject Day 1, Day 2, Day 3, Day 4 2010 Finding the last backup time for all the databases This little script is very powerful and instantly gives details when was the last time your database backup performed. If you are reading this blog post – I say just go ahead and check if everything is alright on your server and you have all the necessary latest backup. It is better to be safe than sorrow. 
Version 1: Above script was improved to get more details about the database Version 2: This version of the script will include pretty much have all the backup related information in a single script. Do not miss to save it for future use. Are you a Database Administrator or a Database Developer? Three years ago I created a very small survey and the results which I have received are very interesting. The question was asking what is the profile of the visitor of that blog post and I noticed that DBA and Developers have balanced with little inclination towards Developers. Have you voted so far? If not, go ahead! 2011 New Book Released – SQL Server Interview Questions And Answers One year ago, on November 3, 2011 I published my book SQL Server Interview Questions and Answers.  The book has a lot of great reviews, and we have even received emails telling us this book was a life changer because it helped get them a great new job.  I don’t think anyone can get a job just from my book.  It was the individual who studied hard and took it seriously, and was determined to learn something new.  The book might have helped guide them and show them the topics to study, but they spent their own energy on it.  It was their own skills that helped them pass the exam. So, in this very first installment, I would like to thank the readers for accepting our book, for giving it great reviews and for using it and sharing it.  Our goal in writing this book was to help others, and it seems like we succeeded. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Stumbling Through: Making a case for the K2 Case Management Framework

    I have recently attended a three-day training session on K2's Case Management Framework (CMF), a free framework built on top of K2's blackpearl workflow product, and I have come away with several different impressions of the different aspects of the framework.  Before we get into the details, what is the Case Management Framework?  It is essentially a suite of tools that, when used together, solve many common workflow scenarios.  The tool has been developed over time by K2 consultants who realized they tend to solve the same problems over and over for various clients, so they attempted to package all of those common solutions into one framework.  Most of these common problems involve workflow processes that aren't necessarily direct and would tend to be difficult to model.  Such solutions could be achieved in blackpearl alone, but the workflows would be complex and difficult to follow and maintain over time.  CMF attempts to simplify such scenarios not so much by black-boxing the workflow processes, but by providing different points of entry to the processes, allowing them to be simpler and moving the complexity to a middle layer.  It is not a solution in and of itself; development is still required to tie the pieces together. CMF is under continuous development, both a plus and a minus in that bugs are fixed quickly and features added regularly, but it may be difficult to know which versions are the most stable.  CMF is not an officially supported K2 product, which means you will not get technical support, but you will get access to the source code. The example given of a business process that would fit well into CMF is that of a file cabinet, where each folder in said file cabinet is a case that contains all of the data associated with one complaint/customer/incident/etc. and various users can access that case at any time and take one of a set of pre-determined actions on it.  When I was given that example, my first thought was that any workflow I have ever developed in the past could be made to fit this model; there must be more than just this model to help decide if CMF is the right solution.  As the training went on, we learned that one of the key features of CMF is SharePoint integration: each case gets a SharePoint site created for it, and there are a number of excellent web parts that can be used to design a portal for users to get at all the information on their cases.  While CMF does not require SharePoint, without it you will be missing out on a huge portion of the functionality that CMF offers.  My opinion is that without SharePoint integration, you may as well write your workflows and other components the old-fashioned way. When I heard that each case gets its own SharePoint site created for it, warning bells immediately went off in my head as I felt that, depending on the data load, a CMF-enabled solution could quickly overwhelm SharePoint with thousands of sites, so we have yet another deciding factor for CMF:  just how many cases will your solution be creating?  While it is not necessary to use the site-per-case model, it is one of the more useful parts of the framework.  Without it, you are losing a big chunk of what CMF has to offer. When it comes to developing on top of the Case Management Framework, it becomes a matter of configuring what makes up a case, what can be done to a case, where each action on a case should take the user, and then tying actions to case statuses.
This last step is one that I immediately warmed up to, as just about every workflow I've designed in the past needed some sort of mapping table to set the status of a work item based on the action being taken; definitely one of those common solutions that it is good to see rolled up into a re-useable entity (and it gets a nice configuration UI to boot!).  This concept is a little different from traditional workflow design, in that you don't have to think of an end-to-end process around passing a case along a path; rather, you must envision the case as a central object with workflow threads branching off of it and doing their own thing with the case data.  Certainly there can be certain workflow threads that get rather complex, but the idea is that they RELATE to the case, they don't BECOME the case (though it is still possible with action->status mappings to prevent certain actions in certain cases, so it isn't always a wide-open free-for-all of actions on a case). I realize that this description of the Case Management Framework merely scratches the surface of what the product actually can do, and I don't think I've conclusively defined for what sort of business scenario you can make a case for the Case Management Framework.  What I do hope to have accomplished with this post is to raise awareness of CMF: there is a (free!) product out there that could potentially simplify a tangled workflow process and give (for free!) a very useful set of SharePoint web parts and a nice set of (free!) reports.  The best way to see if it will truly fit your needs is to give it a try; did I mention it is FREE?  Er, ok, so it is free, but only obtainable at this time for K2 partners.

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need to manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications into the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure.
Drivers
Cost Reduction
Time to market
Security
Solution
Here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed:
Ingredients
Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort.
Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in.
Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience.
Reporting – businesses live and die by their reports and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications.
Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN.
Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. It can be very granular, allowing us to specify exactly what tables and columns to synchronize, set up filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized.
Training Labs
These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
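To make the Database ingredient a little more concrete, here is a small, hedged sketch of how code running in a web role might query a SQL Azure database with plain ADO.NET. The server name, database name, credentials and the ExpenseReports table are all placeholder assumptions, and in a real deployment the connection string would normally live in the service configuration rather than in code.

using System;
using System.Data.SqlClient;

class SqlAzureSketch
{
    static void Main()
    {
        // Placeholder connection string: from ADO.NET, SQL Azure looks like any other
        // SQL Server instance, but requires TCP, encryption and the user@server login format.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +
            "Database=LobDemo;User ID=lobuser@yourserver;Password=<your password>;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM ExpenseReports", connection))
        {
            connection.Open();

            // ExecuteScalar returns the single value produced by the COUNT(*) query.
            int reportCount = (int)command.ExecuteScalar();
            Console.WriteLine("Expense reports on file: {0}", reportCount);
        }
    }
}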

    Read the article

  • How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

    - by Andreea Vaduva
    This article describes how - in PeopleCode (Release PeopleTools 8.50) - to enable a grid without enabling each static column, using a dynamic Application Class. The goal is to disable the following grid with three columns “Effort Date”, ”Effort Amount” and “Charge Back” , when the Check Box “Finished with task” is selected , without referencing each static column; this PeopleCode could be used dynamically with any grid. If the check box “Finished with task” is cleared, the content of the grid columns is editable (and the buttons “+” and “-“ are available): So, you create an Application Package “CLASS_EXTENSIONS” that contains an Application Class “EWK_ROWSET”. This Application Class is defined with Class extends “ Rowset” and you add two news properties “Enabled” and “Visible”: After creating this Application Class, you use it in two PeopleCode Events : Rowinit and FieldChange : This code is very ‘simple’, you write only one command : ” &ERS2.Enabled = False” → and the entire grid is “Enabled”… and you can use this code with any Grid! So, the complete PeopleCode to create the Application Package is (with explanation in [….]) : ******Package CLASS_EXTENSIONS : [Name of the Package: CLASS_EXTENSIONS] --Beginning of the declaration part------------------------------------------------------------------------------ class EWK_ROWSET extends Rowset; [Definition Class EWK_ROWSET as a subclass of Class Rowset] method EWK_ROWSET(&RS As Rowset); [Constructor is the Method with the same name of the Class] property boolean Visible get set; property boolean Enabled get set; [Definition of the property “Enabled” in read/write] private [Before the word “private”, all the declarations are publics] method SetDisplay(&DisplaySW As boolean, &PropName As string, &ChildSW As boolean); instance boolean &EnSW; instance boolean &VisSW; instance Rowset &NextChildRS; instance Row &NextRow; instance Record &NextRec; instance Field &NextFld; instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt; instance integer &i, &j, &k; instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild; [For recursion] Constant &VisibleProperty = "VISIBLE"; Constant &EnabledProperty = "ENABLED"; end-class; --End of the declaration part------------------------------------------------------------------------------ method EWK_ROWSET [The Constructor] /+ &RS as Rowset +/ %Super = &RS; end-method; get Enabled /+ Returns Boolean +/; Return &EnSW; end-get; set Enabled /+ &NewValue as Boolean +/; &EnSW = &NewValue; %This.InsertEnabled=&EnSW; %This.DeleteEnabled=&EnSW; %This.SetDisplay(&EnSW, &EnabledProperty, False); [This method is called when you set this property] end-set; get Visible /+ Returns Boolean +/; Return &VisSW; end-get; set Visible /+ &NewValue as Boolean +/; &VisSW = &NewValue; %This.SetDisplay(&VisSW, &VisibleProperty, False); end-set; method SetDisplay [The most important PeopleCode Method] /+ &DisplaySW as Boolean, +/ /+ &PropName as String, +/ /+ &ChildSW as Boolean +/ [Not used in our example] &RowCnt = %This.ActiveRowCount; &NextRow = %This.GetRow(1); [To know the structure of a line ] &RecCnt = &NextRow.RecordCount; For &i = 1 To &RowCnt [Loop for each Line] &NextRow = %This.GetRow(&i); For &j = 1 To &RecCnt [Loop for each Record] &NextRec = &NextRow.GetRecord(&j); &FldCnt = &NextRec.FieldCount; For &k = 1 To &FldCnt [Loop for each Field/Record] &NextFld = &NextRec.GetField(&k); Evaluate Upper(&PropName) When = &VisibleProperty &NextFld.Visible = &DisplaySW; Break; When = &EnabledProperty; &NextFld.Enabled = &DisplaySW; [Enable 
each Field/Record] Break; When-Other Error "Invalid display property; Must be either VISIBLE or ENABLED" End-Evaluate; End-For; End-For; If &ChildSW = True Then [If recursion] &ChildRSCnt = &NextRow.ChildCount; For &j = 1 To &ChildRSCnt [Loop for each Rowset child] &NextChildRS = &NextRow.GetRowset(&j); &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS); &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW); [For each Rowset child, call Method SetDisplay with the same parameters used with the Rowset parent] End-For; End-If; End-For; end-method; ******End of the Package CLASS_EXTENSIONS:[Name of the Package: CLASS_EXTENSIONS] About the Author: Pascal Thaler joined Oracle University in 2005 where he is a Senior Instructor. His area of expertise is Oracle Peoplesoft Technology and he delivers the following courses: For Developers: PeopleTools Overview, PeopleTools I &II, Batch Application Engine, Language Oriented Object PeopleCode, Administration Security For Administrators : Server Administration & Installation, Database Upgrade & Data Management Tools For Interface Users: Integration Broker (Web Service)

    Read the article

  • Interview with Ronald Bradford about MySQL Connect

    - by Keith Larson
    Ronald Bradford, an Oracle ACE Director, has been busy with database consulting and book writing (EffectiveMySQL) while traveling and speaking around the world in support of MySQL. I was able to take some of his time to get an interview on his thoughts about the MySQL Connect conference.

Keith Larson: What were your thoughts when you heard that Oracle was going to provide the community the MySQL Connect conference?
Ronald Bradford: Oracle has already been providing various different local community events including OTN Tech Days and MySQL community days. These are great for local regions both in the US and abroad. In previous years there has been an increase of content at Oracle Open World, however that benefits the Oracle community far more than the MySQL community. It is good to see that Oracle is realizing the benefit in providing a large-scale dedicated event for the MySQL community that includes speakers from the MySQL development teams, invested companies in the ecosystem and other community evangelists. I fully expect a successful event and look forward to hopefully seeing MySQL Connect at the upcoming Brazil and Japan OOW conferences and perhaps an event on the East Coast.

Keith Larson: Since you are part of the content committee, what did you think of the submissions that were received during the call for papers?
Ronald Bradford: There was a large number of quality submissions for the number of available presentation sessions. As with previous years as a committee member for the annual MySQL conference, there is always a large variety of common cornerstone MySQL features as well as new products and upcoming companies sharing their MySQL experiences. All of the usual major players in the ecosystem will be presenting at MySQL Connect, including Facebook, Twitter, Yahoo, Continuent, Percona, Tokutek, Sphinx and Amazon to name a few. This ensures the event will have a large number of quality speakers and attendees will have a difficult time choosing what to attend.

Keith Larson: What sessions do you look forward to attending?
Ronald Bradford: As with most quality conferences you can only be in one place at one time, so with multiple tracks per session it is always difficult to decide. The continued work and success with MySQL Cluster is covered in a number of sessions that I am sure will be popular. The features that interest me the most are around the optimizer, where there are several sessions on new features, and the importance of backups. There are three presentations in this area to choose from.

Keith Larson: Are you going to cover any of the content in your books at your MySQL Connect sessions?
Ronald Bradford: I will be giving two presentations at MySQL Connect. The first will include the techniques available for creating better indexes, where I will be touching on some aspects of the first Effective MySQL book on Optimizing SQL Statements. In my second presentation, from experiences of managing 500+ AWS MySQL instances, I will be touching on areas including SQL tuning, backup and recovery and scale-out with replication. These are the key topics of the initial books in the Effective MySQL series that focus on performance, scalability and business continuity. The books however cover a far greater amount of detail than can be presented in a 1 hour session.

Keith Larson: What features of MySQL 5.6 do you look forward to the most?
Ronald Bradford: I am very impressed with the optimizer trace feature.
The ability to see exposed information is invaluable not just for MySQL 5.6, but also to apply information discerned for optimizing SQL statements in earlier versions of MySQL. Not everybody understands that it is easy to deploy a MySQL 5.6 slave into an existing topology running an older version of MySQL for evaluation of many new features. You can use the new mysqlbinlog streaming feature for duplicating master binary logs on an older version with a MySQL 5.6 slave. The improvements in instrumentation in the Performance Schema are exciting. However, as with my upcoming Replication Techniques in Depth title, which will be available for sale at MySQL Connect, there are numerous replication features, some long overdue, which provide significant management benefits. Crash-safe slaves, Global Transaction Identifiers (GTID) and checksums just to mention a few.

Keith Larson: You have been to numerous conferences, what would you recommend for people at the conference?
Ronald Bradford: Make the time to meet and introduce yourself to the speakers that cover the topics that most interest you. The MySQL ecosystem has a very strong community. The relationships you build with presenters, developers and architects in MySQL can be invaluable, however they are created over time. Get to know these people, interact with them over time. This is the opportunity to learn more than just the content from a 1 hour session.

Keith Larson: Any additional tips to handling the long hours?
Ronald Bradford: Conferences can be hard, especially with all the post-event drinking. This is a two day event and I am sure will include additional events on Friday and Saturday night, so come well prepared, and leave work behind. Take the time to learn something new. You can always catch up on sleep later.

Keith Larson: Thank you so much for taking some time to do this. I look forward to seeing you at the MySQL Connect conference. Please stay tuned here for more updates on MySQL. 

    Read the article

  • Upcoming User Group Events in 2011

    - by john.orourke(at)oracle.com
    At a recent customer event, someone asked me if Oracle had any plans to re-create the Hyperion Solutions Conference.  Unfortunately the answer is no.  With so many different product lines it would be challenging and costly for Oracle to run separate user conferences for every product line, and it would create too many events for customers with multiple products to attend.  So Oracle Open World is the company's main event for showcasing what's new and what's coming across all product lines.  If customers find Oracle OpenWorld too overwhelming or if the timing is bad, there are a number of other conferences, which are run by Oracle user groups and include a number of sessions focused on Oracle Hyperion EPM and BI products.  Here's a sneak preview of what's coming up for conferences in 2011 where you can network with other Hyperion users and learn what's new and what's coming in our products. Alliance 2011:  This conference is run by the Oracle Higher Education User Group (HEUG).  It's being held March 27 - 30th in lovely Denver, Colorado.  (a great location and time for skiers!)  This event is targeted at customers in Higher Education and Public Sector organizations and is expecting to draw over 3,500 attendees.  There will be a number of sessions focusing on Oracle Hyperion EPM and BI products in the Budgeting track, as well as the Reporting & BI track.  This includes product-focused sessions delivered by Oracle and partners, as well as case studies delivered by customers.  Here's a link to the registration page where you can get more information: http://www.heug.org/p/cm/ld/fid=255 Collaborate 2011:  This conference is run by three different user groups;  OAUG, IOUG and Quest.  It's being held April 10 - 14th in sunny Orlando, Florida.  (yes, sunshine and warmth!)  This event is targeted to customers with Oracle E-Business Suite, PeopleSoft, JD Edwards, Hyperion, Primavera and other products and is expected to draw over 5,000 attendees.  You'll find a number of sessions focused on Oracle Hyperion EPM and BI products in the BI/Data Warehousing/EPM track.  This includes product-focused sessions delivered by Oracle, our partners, and customers as well as a number of customer case studies.  There will also be an exhibit area with a number of demo pods focused on EPM and BI products.  Here's a link to the conference web site where you can get more information: http://collaborate.oaug.org/ Also, please note that the OAUG has a Hyperion SIG that runs focused EPM/Hyperion events throughout the year.  Here's a link to their web site where you can get more information: http://hyperionsig.oaug.org/ Kscope 2011:  Formerly the Kaleidoscope conference, this one is run by the Oracle Developer Tools User Group (ODTUG).  This conference is being held June 26 - 30th in Long Beach, CA. (surf's up!)  Historically, this event has focused on Oracle Development tools, but over the past few years the EPM and BI content has grown with over 100 sessions planned this year.  So this event is becoming a great venue for existing Hyperion customers to learn about the latest developments with Oracle Essbase, Hyperion Planning, Hyperion Financial Management, Oracle BI and other products.   You'll also find hands-on workshops, product demonstrations as well as EPM and BI Symposiums run by Oracle Development staff.  Here's a link to the web site where you can get more details.  
http://www.kscope11.com/biepm UKOUG Conference Series:  EPM and Hyperion 2011:  For Hyperion customers in the UK, the UKOUG has a Hyperion SIG that runs a focused conference for EPM and Hyperion products.  The 2011 event is planned for June in London.  Here's a link to the web site for this event where you can get more information: http://hyperion.ukoug.org/default.asp?p=8461 In addition to these conferences, you can also find Oracle EPM and BI content at regional user group meetings globally as well as Marketing events run by Oracle.  Check the events page at www.oracle.com for the details on upcoming Marketing and regional User Group events.  So while Oracle will not be trying to replicate the Hyperion Solutions conference, the good news is that there are a number of other events available where customers can find out what's new and what's coming with Oracle EPM and BI products.  And these events are running at different times of the year in different locations - so you can pick the event that makes the most sense for your company from a timing and location standpoint. I'll be delivering a number of sessions at the Alliance and Collaborate conferences and hope to see many of our loyal customers and partners at these events.  And there's always Oracle OpenWorld coming up in October, for which the planning has already started.  I look forward to seeing you in 2011.

    Read the article

  • 2D Selective Gaussian Blur

    - by Joshua Thomas
    I am attempting to use Gaussian blur on a 2D platform game, selectively blurring specific types of platforms with different amounts. I am currently just messing around with simple test code, trying to get it to work correctly. What I need to eventually do is create three separate render targets, leave one normal, blur one slightly, and blur the last heavily, then recombine on the screen. Where I am now is I have successfully drawn into a new render target and performed the gaussian blur on it, but when I draw it back to the screen everything is purple aside from the platforms I drew to the target. This is my .fx file: #define RADIUS 7 #define KERNEL_SIZE (RADIUS * 2 + 1) //----------------------------------------------------------------------------- // Globals. //----------------------------------------------------------------------------- float weights[KERNEL_SIZE]; float2 offsets[KERNEL_SIZE]; //----------------------------------------------------------------------------- // Textures. //----------------------------------------------------------------------------- texture colorMapTexture; sampler2D colorMap = sampler_state { Texture = <colorMapTexture>; MipFilter = Linear; MinFilter = Linear; MagFilter = Linear; }; //----------------------------------------------------------------------------- // Pixel Shaders. //----------------------------------------------------------------------------- float4 PS_GaussianBlur(float2 texCoord : TEXCOORD) : COLOR0 { float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f); for (int i = 0; i < KERNEL_SIZE; ++i) color += tex2D(colorMap, texCoord + offsets[i]) * weights[i]; return color; } //----------------------------------------------------------------------------- // Techniques. //----------------------------------------------------------------------------- technique GaussianBlur { pass { PixelShader = compile ps_2_0 PS_GaussianBlur(); } } This is the code I'm using for the gaussian blur: public Texture2D PerformGaussianBlur(Texture2D srcTexture, RenderTarget2D renderTarget1, RenderTarget2D renderTarget2, SpriteBatch spriteBatch) { if (effect == null) throw new InvalidOperationException("GaussianBlur.fx effect not loaded."); Texture2D outputTexture = null; Rectangle srcRect = new Rectangle(0, 0, srcTexture.Width, srcTexture.Height); Rectangle destRect1 = new Rectangle(0, 0, renderTarget1.Width, renderTarget1.Height); Rectangle destRect2 = new Rectangle(0, 0, renderTarget2.Width, renderTarget2.Height); // Perform horizontal Gaussian blur. game.GraphicsDevice.SetRenderTarget(renderTarget1); effect.CurrentTechnique = effect.Techniques["GaussianBlur"]; effect.Parameters["weights"].SetValue(kernel); effect.Parameters["colorMapTexture"].SetValue(srcTexture); effect.Parameters["offsets"].SetValue(offsetsHoriz); spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect); spriteBatch.Draw(srcTexture, destRect1, Color.White); spriteBatch.End(); // Perform vertical Gaussian blur. game.GraphicsDevice.SetRenderTarget(renderTarget2); outputTexture = (Texture2D)renderTarget1; effect.Parameters["colorMapTexture"].SetValue(outputTexture); effect.Parameters["offsets"].SetValue(offsetsVert); spriteBatch.Begin(0, BlendState.Opaque, null, null, null, effect); spriteBatch.Draw(outputTexture, destRect2, Color.White); spriteBatch.End(); // Return the Gaussian blurred texture. 
game.GraphicsDevice.SetRenderTarget(null); outputTexture = (Texture2D)renderTarget2; return outputTexture; } And this is the draw method affected: public void Draw(SpriteBatch spriteBatch) { device.SetRenderTarget(maxBlur); spriteBatch.Begin(); foreach (Brick brick in blueBricks) brick.Draw(spriteBatch); spriteBatch.End(); blue = gBlur.PerformGaussianBlur((Texture2D) maxBlur, helperTarget, maxBlur, spriteBatch); spriteBatch.Begin(); device.SetRenderTarget(null); foreach (Brick brick in redBricks) brick.Draw(spriteBatch); foreach (Brick brick in greenBricks) brick.Draw(spriteBatch); spriteBatch.Draw(blue, new Rectangle(0, 0, blue.Width, blue.Height), Color.White); foreach (Brick brick in purpleBricks) brick.Draw(spriteBatch); spriteBatch.End(); } I'm sorry about the massive brick of text and images(or not....new user, I tried, it said no), but I wanted to get my problem across clearly as I have been searching for an answer to this for quite a while now. As a side note, I have seen the bloom sample. Very well commented, but overly complicated since it deals in 3D; I was unable to take what I needed to learn form it. Thanks for any and all help.
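The kernel weights and texel offsets passed into the effect (the kernel, offsetsHoriz and offsetsVert fields above) are not shown in the question, so here is a hedged sketch of how they are typically computed for a separable Gaussian blur in XNA. The helper names, the sigma rule of thumb and the radius constant are assumptions for illustration, not the poster's actual code.

using System;
using Microsoft.Xna.Framework;

static class GaussianKernelHelper
{
    // Matches the RADIUS constant in the shader above.
    const int Radius = 7;
    const float Sigma = Radius / 3.0f;   // a common rule of thumb for choosing sigma

    // One weight per tap, normalized so the weights sum to 1.
    public static float[] ComputeKernel()
    {
        int size = Radius * 2 + 1;
        float[] kernel = new float[size];
        float sum = 0f;
        for (int i = 0; i < size; i++)
        {
            int x = i - Radius;
            kernel[i] = (float)Math.Exp(-(x * x) / (2f * Sigma * Sigma));
            sum += kernel[i];
        }
        for (int i = 0; i < size; i++)
            kernel[i] /= sum;
        return kernel;
    }

    // One texel step per tap: horizontal offsets step by 1/width, vertical by 1/height.
    public static Vector2[] ComputeOffsets(float textureWidth, float textureHeight, bool horizontal)
    {
        int size = Radius * 2 + 1;
        Vector2[] offsets = new Vector2[size];
        for (int i = 0; i < size; i++)
        {
            int x = i - Radius;
            offsets[i] = horizontal
                ? new Vector2(x / textureWidth, 0f)
                : new Vector2(0f, x / textureHeight);
        }
        return offsets;
    }
}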

    Read the article

  • Employee Engagement Q&A with John Brunswick

    - by Kellsey Ruppel
    As we are focusing this week on Employee Engagement, I recently sat down with industry expert and thought leader John Brunswick on the topic. Here is the Q&A dialogue we shared.  Q: How do you effectively engage employees to drive business value?A: Motivation, both extrinsic and intrinsic, combined with the relevancy of various channels to support it.  Beyond chaining business strategies like compensation models within an organization, engagement ultimately is most successful when driven by employee's motivations.  Business value derived from engagement through technical capabilities can be objectively measured through metrics like the rate and accuracy of problem solving for a given business function or frequency of innovation created.  Providing employees performing "knowledge work" with capabilities that allow them to perform work with a higher degree of accuracy in the same or ideally less time, adds value for that individual and in turn, drives their level of engagement to drive business value. Q: Organizations with high levels of employee engagement outperform the total stock market index by 22%. Can you comment on why you think this might be? A: Alignment through shared purpose.  Zappos is an excellent example of a culture that arguably has higher than average levels of employee engagement and it permeates every aspect of their organization – embodied externally through their customer experience.  I recently made my first purchase with them and it was obvious through their web experience, visual design, communication style, customer service and attention to detail down to green packaging, that they have an amazingly strong shared purpose.  The Zappos.com ‘About page’ outlines their "Family Core Values", the first three being "Deliver WOW Through Service, Embrace and Drive Change & Create Fun and A Little Weirdness" – all reflected externally in my interaction with them.  Strong shared purpose enables higher product and service experience, equating to a dedicated customer base, repeat purchases and expanded marketshare. Q: Have you seen any trends in the market regarding employee engagement? A: Some companies now see offering a form of social engagement similar to Facebook and LinkedIn as standard communication infrastructure like email or instant messaging.  Originally offered as standalone tools, the value is now seen when these capabilities are offered in an integrated fashion in the context of business entities.  An emerging area of focus is around employee activities related to their organization on external social platforms, implicitly creating external communities with employees acting on behalf of the brand and interacting with each other (e.g. Twitter).  Companies have reached a formal understand that this now established communication medium requires strategies allowing employees to engage.  I have personally met colleagues from Oracle, like Oracle User Experience Director Ultan O'Broin (@ultan), via Twitter before meeting first through internal channels. Q: Employee engagement is important, but what about engaging customers and partners? A: The last few years we have witnessed an interesting evolution from the novelty of self-service to expectations of "intelligent" self-service.  From a consumer standpoint, engagement can end up being a key differentiator, especially in mature markets.  Customers that perform some level of interaction with a brand develop greater affinity for the brand and have a greater probability of acting as an advocate.  
As organizations move toward a model of deeper engagement, they must ensure that their business is positioned to support deeper relationships, offering potentially greater transparency. From a partner standpoint greater engagement can lead to new types of business opportunities, much in the way that Amazon.com offers a unified shopping experience that can potentially span various vendors.  This same model can be extended to blending services and product delivery models, based on a closeness not easily possible before increased capability of engagement mechanisms. Q: What types of solutions are available to successfully deliver employee engagement? A: Solutions enabling higher levels of engagement do so on the basis of relevancy.  This relevancy is generally supported by aspects of content management, social collaboration, business intelligence, portal and process management technologies.  These technologies can help deliver an experience tailored to a given role or process within an organization that applies equally to work that is structured or unstructured, appearing in the form of functionality as simple as an online employee directory search, knowledge communities supported by social collaboration, as well as more feature rich business intelligence dashboards and portals. Looking to learn more about how to effectively engage your employees? Check out this webcast, or read more from John Brunswick. 

    Read the article

  • Is it Hard to Write a Blog?

    - by Joe Mayo
    Responding to a tweet I received, asking if I found it hard to write a blog and keep it interesting. This is one of the situations where a 140 character response doesn’t do a question justice. There’s a lot to think about between the subjects of writing, subject matter, and entertainment.  Here’s my take on each of these three topics: There’s all types of writing you can do with various degrees of difficulty. If you’re writing a book and you have a gazillion editors bleeding over your every utterance, then the task becomes harder because you’re second-guessing yourself, not knowing whose opinion will be violated. However, if you’re communicating in a public forum, not too many people care about the grammar as much as whether what you have to say is correct.  For a blog, I would say it’s somewhere in-between.  Right now, I’m using Windows Live Writer, which gives me a few advantages to just typing into the blog editor, such as spelling correction and the ability to save my work and resume later.  Overall, writing is one of those things that you just need to get used to.  It’s an essential skill for developers because you need to document your work, depending on what your definition of proper documentation is, and communicate with other developers via various communications mediums. Not begin good (or not thinking that you’re good) shouldn’t hold you back.  Like most things in life, practice will improve your skill.  So, push away that inner voice that keeps you from moving forward and just do it. A good grasp on the subject matter you’re writing about helps.  However, don’t let a lack of knowledge stop you from writing about something. I recall reading something a while back by a developer who didn’t know a technology but wrote about their experience in learning it. They ended up learning more by expressing their thoughts in writing. If you look around out many blogs today, there are many items written by developers learning what they’re writing about.  So, whether you are sure or unsure, you can still write – just be honest with yourself and your readers about what you’re writing. Also, don’t be afraid to have a different opinion or worry if someone will disagree.  I’ll freely admit that it took a while for me to become accustomed to being criticized. Take the good with the bad and use the bad to make yourself better. Guaranteed, someone will disagree with one or more parts of what I’ve written here or think they have a better approach. No problem, more power to them, and whatever constructive comments they have will be a benefit to me in the future; Otherwise, to h*ll with them. :)  Every time you get knocked down, get right back up, dust the dirt off your backside, and keep moving forward.  You’ll learn in time how to align a subject with your own presentation of the material. Entertainment could be hard or could be natural, depending on the personality of yourself and your target audience. It’s even more challenging because you can say something you think is funny and someone will be offended. In fact, there are a lot of things that you shouldn’t say in the name of a joke, but I won’t mention any of them here for want of not offending anyone. Of course, I probably offended someone by saying that and there is probably an organization somewhere in the world out to get me now. I’m probably not the best person to be giving you advice on entertaining an audience.  I mean, every time I try to tell a joke on Twitter 10 people unfriend me. Okay, maybe 15, but you get my point. 
One thing you might be interested in knowing is that it’s not too hard for one technical person to entertain other technical people, especially when the subject is of interest.  It’s the excitement in each sentence and passion in each paragraph that will keep another developer entertained and interested in what you have to say. Not everyone will like what you’ve written, but the important part is to find your own voice and it’s likely that there is one person in some corner of the world that likes what you have to say, even if it’s your mom and she doesn’t understand a single word you write. :)   If I could leave you with one final thought; Just do it and don’t let anyone or anything hold you back.   Joe

    Read the article

  • Whoosh: PASS Board Year 1, Q4

    - by Denise McInerney
    "Whoosh". That's the sound the last quarter of 2012 made as it rushed by. My first year on the PASS Board is complete, and the last three months of it were probably the busiest. PASS Summit 2012 Much of October was devoted to preparing for Summit. Every Board  member, HQ staffer and dozens of volunteers were busy in the run-up to our flagship event. It takes a lot of work to put on the Summit. The community meetings,  first-timers program, keynotes, sessions and that fabulous Community Appreciation party are the result of many hours of preparation. Virtual Chapters at the Summit With a lot of help from Karla Landrum, Michelle Nalliah, Lana Montgomery and others at HQ the VCs had a good presence at Summit. We started the week with a VC leaders meeting. I shared some information about the activities and growth during the first part of the year.   From January - September 2012: The number of VCs increased from 14 to 20 VC membership  grew from 55,200 to 80,100 Total attendance at VC meetings increased from 1,480 to 2,198 Been part of PASS Global Growth with language-based VC- including Chinese, Spanish and Portuguese. We also heard from some VC leaders and volunteers. Ryan Adams (Performance VC) shared his tips for successful marketing of VC events. Amy Lewis (Business Intelligence VC) described how the BI chapter has expanded to support PASS' global growth by finding volunteers to organize events at times that are convenient for people in Europe and Australia. Felipe Ferreira (Portuguese language VC) described the experience of building a user group first in Brazil, then expanding to work with Portuguese-speaking data professionals around the world. Virtual Chapter leaders and volunteers were in evidence throughout Summit, beginning with the Welcome Reception. For the past several years VCs have had an organized presence at this event, signing up new members and advertising their meetings. Many VC leaders also spent time at the Community Zone. This new addition to the Summit proved to be a vibrant spot were new members and volunteers could network with others and find out how to start a chapter or host a SQL Saturday. Women In Technology 2012 was the 10th WIT Luncheon to be held at Summit. I was honored to be asked to be on the panel to discuss the topic "Where Have We Been and Where are We Going?" The PASS community has come a long way in our understanding of issues facing women in tech and our support of women in the organization. It was great to hear from panelists Stefanie Higgins and Kevin Kline who were there at the beginning as well as Kendra Little and Jen Stirrup who are part of the progress being made by women in our community today. Bylaw Changes The Board spent a good deal of time in 2012 discussing how to move our global growth initiatives forward. An important component of this is a proposed change to how the Board is elected with some seats representing geographic regions. At the end of December we voted on these proposed bylaw changes which have been published for review. The member review and feedback is open until February 8. I encourage all members to review these changes and send any feedback to [email protected]  In addition to reading the bylaws, I recommend reading Bill Graziano's blog post on the subject. Business Analytics Conference At Summit we announced a new event: the PASS Business Analytics Conference. The inaugural event will be April 10-12, 2013 in Chicago. The world of data is changing rapidly. 
More and more businesses want to extract value and insight from their data. Data professionals who provide these insights or enable others to do so are in demand. The BA Conference offers expert content on predictive analytics, data exploration and visualization, content delivery strategies and more. By holding this new event PASS is participating in important discussions happening in our industry, offering our members more educational value and reaching out to data professionals who are not currently part of our organization. New Year, New Portfolio In addition to my work with the Virtual Chapters I am also now responsible for the 24 Hours of PASS portfolio. Since the first 24HOP of 2013 is scheduled for January 30 we started the transition of the portfolio work from Rob Farley to me right after Summit. Work immediately started to secure speakers for the January event. We have also been evaluating webinar platforms that can be used for 24HOP as well as the Virtual Chapters. Next Up 24 Hours of PASS: Business Analytics Edition will be held on January 30. I'll be there and will moderate one or two sessions. The 24HOP topics are a sneak peek into the type of content that will be offered at the Business Analytics Conference. I hope to see some of you there. The Virtual Chapters have hit the ground running in 2013; many of them have events scheduled. The Application Development VC is getting restarted  and a new Business Analytics VC will be starting soon. Check out the lineup and join the VCs that interest you. And watch the Events page and Connector for announcements of upcoming meetings. At the end of January I will be attending a Board meeting in Seattle, and February 23 I will be at SQL Saturday #177 in Silicon Valley.

    Read the article

  • EPM and Business Analytics Talking-head Videos from Oracle OpenWorld 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Here is a selection of 2 to 3 minute video interviews at this year’s Oracle OpenWorld:
1. George Somogyi, Solutions Architect, New Edge Group, talks about the importance of having their integrated Oracle Hyperion Platform consisting of Oracle Hyperion Financial Management, Oracle Hyperion Financial Data Quality Management, Oracle E-Business Suite R12 and Oracle Business Intelligence Extended Edition plus their use of Oracle Managed Cloud Services. Speaker: George Somogyi @ http://youtu.be/kWn0dQxCUy8
2. Gregg Thompson, Director of Financial Systems for ADT, talks about using Oracle Data Relationship Management prior to implementing an Enterprise Performance Management solution. Gregg confirmed that there are big benefits to bringing the full Oracle Hyperion Financial Close suite online with Oracle DRM as the metadata source. Reduced maintenance time and use of external consultants translates into significant time and cost savings and faster implementation times. Speaker: Gregg Thompson @ http://youtu.be/XnFrR9Uk4xk
3. Jeff Spangler, Director Financial Planning and Analysis for Speedy Cash Holdings Corp, talked to us about the benefits achieved through implementing Oracle Hyperion Planning and financial reporting solutions. He also describes how the use of Data Relationship Management will keep the process running smoothly now and in the future. Speaker: Jeff Spangler @ http://youtu.be/kkkuMkgJ22U
4. Marc Seewald, Senior Director of Product Management for Oracle Hyperion Tax Provision at Oracle, talks about Oracle Hyperion Tax Provision, how it is an integral part of the financial close process and that it provides better internal controls and automation of this task. Marc talks about Oracle Partners and customers alike who are seeing great value. Speaker: Marc Seewald @ http://youtu.be/lM_nfvACGuA
5. Matt Bradley, SVP of Product Development for Enterprise Performance Management (EPM) Applications at Oracle, talked to us about different deployment options for Oracle EPM. Cloud services (SaaS), managed services, on-premise, off-premise all have their merits, and organizations need flexibility to easily move between them as their companies evolve. Speaker: Matt Bradley @ http://youtu.be/ATO7Z9dbE-o
6. Neil Sellers, Partner, Qubix International talks about their experience with previewing Oracle’s new Planning and Budgeting Cloud Service. He describes the benefits of the step-by-step task lists, the speed of getting the application up and running, and the huge benefits of not having to manage the software and hardware side of the planning process. Speaker: Neil Sellers @ http://youtu.be/xmosO28e4_I
7. Praveen Pasupuleti, Senior Business Intelligence Development Manager of Citrix Systems Inc., talks about their Oracle Hyperion Planning upgrade and the huge performance improvement now experienced in forecasting. He also talked about the benefits of Oracle Hyperion Workforce Planning achieved by Citrix. Speaker: Praveen Pasupuleti @ http://youtu.be/d1e_4hLqw8c
8. CheckPoint Consulting talked to us about how Enterprise Performance Management should be viewed as an entire solution, rather than as a bunch of applications in silos, to provide significant benefits; and how Data Relationship Management can tie it all together effectively. Speaker: Ron Dimon @ http://youtu.be/sRwbdbbXvUE
9. Sonal Kulkarni, Enterprise Performance Management Leader, Cummins Inc., talks about their use of Oracle Hyperion Financial Close Management (Account Reconciliation Manager), Oracle Hyperion Financial Management and Oracle Hyperion Financial Data Quality Management and how this is providing efficiency, visibility and compliance benefits. Speaker: Sonal Kulkarni @ http://youtu.be/OEgup5dKyVc
10. Todd Renard, Manager Financial Planning and Business Analytics for B/E Aerospace Inc., talks about the huge benefits that B/E Aerospace is experiencing from Oracle Financial Close Suite. He was extremely excited about Oracle Hyperion Financial Data Quality Management and how this helps them integrate a new business in as little as three weeks. Speaker: Todd Renard @ http://youtu.be/nIfqK46uVI8
11. Peter Smolianski, Chief Technology Officer for the District of Columbia Courts, talked to us about how D.C. Courts is using Oracle Scorecard and Strategy Management to push their 5 year plan forward, to report results to their constituents, and take accountability for process changes to become more efficient. Speaker: Peter Smolianski @ http://www.youtube.com/watch?v=T-DtB5pl-uk
12. Rich Wilkie, Senior Director of Product Management for Financial Close Suite at Oracle, talked to us about Oracle Financial Management Analytics. He told us how the prebuilt dashboards on top of Oracle Hyperion Financial Close Suite make it easy for everyone to see the numbers and understand where they are in the close process, and if there is an issue, they can see where it is. Executives are excited to get this information on mobile devices too. Speaker: Rich Wilkie @ http://www.youtube.com/watch?v=4UHuHgx74Yg
13. Dinesh Balebail, Senior Director of Software Development for Oracle Hyperion Profitability and Cost Management, talked to us about the power and speed of Oracle Hyperion Profitability and Cost Management and how it is being used to do deep costing for Telecoms, Hospitals, Banks and other high transaction volume organizations effectively. Speaker: Dinesh Balebail @ http://youtu.be/ivx5AZCXAfs

    Read the article

  • Extreme Optimization – Numerical Algorithm Support

    - by JoshReuben
    Function Delegates
    Many calculations involve the repeated evaluation of one or more user-supplied functions, e.g. numerical integration. The EO MathLib provides delegate types for common function signatures, and the FunctionFactory class can generate new delegates from existing ones.
    RealFunction delegate - takes one Double parameter; it can encapsulate most of the static methods of the System.Math class, as well as the classes in the Extreme.Mathematics.SpecialFunctions namespace:
    var sin = new RealFunction(Math.Sin);
    var result = sin(1);
    BivariateRealFunction delegate - takes two Double parameters:
    var atan2 = new BivariateRealFunction(Math.Atan2);
    var result = atan2(1, 2);
    TrivariateRealFunction delegate - represents a function that takes three Double arguments.
    ParameterizedRealFunction delegate - represents a function taking one Integer and one Double argument that returns a real number. The Pow method implements such a function, but the arguments need to be re-ordered:
    static double Power(int exponent, double x) { return ElementaryFunctions.Pow(x, exponent); }
    ...
    var power = new ParameterizedRealFunction(Power);
    var result = power(6, 3.2);
    ComplexFunction delegate - represents a function that takes an Extreme.Mathematics.DoubleComplex argument and also returns a complex number.
    MultivariateRealFunction delegate - represents a function that takes an Extreme.Mathematics.LinearAlgebra.Vector argument and returns a real number.
    MultivariateVectorFunction delegate - represents a function that takes a Vector argument and returns a Vector.
    FastMultivariateVectorFunction delegate - represents a function that takes an input Vector argument and an output Matrix argument, avoiding object construction.
    The FunctionFactory class
    The RealFromBivariateRealFunction and RealFromParameterizedRealFunction helper methods transform a BivariateRealFunction or a ParameterizedRealFunction into a RealFunction delegate by fixing one of the arguments and treating the result as a new function of a single argument:
    var tenthPower = FunctionFactory.RealFromParameterizedRealFunction(power, 10);
    var result = tenthPower(x);
    Note: C# has no built-in syntax for this kind of partial application. In F# you have partial value functions, where you supply a subset of the arguments (as a travelling closure) that the function expects. When you omit arguments, F# generates a new function that holds onto/remembers the arguments you passed in and "waits" for the other parameters to be supplied:
    let sumVals x y = x + y
    let sumX = sumVals 10   // Note: no 2nd param supplied.
                            // sumX is a new function generated from partially applied sumVals,
                            // i.e. "sumX is a partial application of sumVals."
    let sum = sumX 20       // Invokes sumX, passing in the expected int (parameter y from the original)
    val sumVals : int -> int -> int
    val sumX : (int -> int)
    val sum : int = 30
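    For comparison, here is a minimal, self-contained C# sketch of the same "fix one argument" idea using an ordinary closure and the standard Func<double, double> delegate. This is only an illustration of the pattern, not the EO API: it uses System.Math.Pow in place of the library's ElementaryFunctions.Pow so that it compiles without any external references.
    using System;

    class PartialApplicationSketch
    {
        // Two-argument function we want to partially apply (same shape as the Power example above).
        static double Power(int exponent, double x) => Math.Pow(x, exponent);

        static void Main()
        {
            // Fix exponent = 10 by capturing it in a lambda; the result is a one-argument function,
            // playing the same role as FunctionFactory.RealFromParameterizedRealFunction(power, 10).
            Func<double, double> tenthPower = x => Power(10, x);

            Console.WriteLine(tenthPower(2.0)); // prints 1024
        }
    }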
    The RealFunctionsToVectorFunction and RealFunctionsToFastVectorFunction helper methods combine an array of delegates returning a real number or a vector into vector or matrix functions. The resulting vector function returns a vector whose components are the function values of the delegates in the array:
    var funcVector = FunctionFactory.RealFunctionsToVectorFunction(
        new MultivariateRealFunction(myFunc1),
        new MultivariateRealFunction(myFunc2));
    The IterativeAlgorithm<T> abstract base class
    Iterative algorithms are common in numerical computing: a method is executed repeatedly, approximating the result of a calculation with increasing accuracy, until a certain threshold is reached. If the desired accuracy is achieved, the algorithm is said to converge. Many classes in the Extreme.Mathematics.EquationSolvers and Extreme.Mathematics.Optimization namespaces derive from this base class, as does the ManagedIterativeAlgorithm class, which contains a driver method that manages the iteration process.
    The ConvergenceTest abstract base class
    This class is used to specify algorithm termination, convergence and results: it calculates an estimate for the error and signals termination of the algorithm when the error is below a specified tolerance.
    Termination criteria specify the success condition - the difference between some quantity and its actual value must be within a certain tolerance - in one of two ways:
    absolute error - the difference between the result and the actual value;
    relative error - the difference between the result and the actual value, relative to the size of the result.
    Tolerance property - specifies the trade-off between accuracy and execution time. The lower the tolerance, the longer it will take for the algorithm to obtain a result within that tolerance. Most algorithms in the EO NumLib have a default value of MachineConstants.SqrtEpsilon, which gives slightly less than 8 digits of accuracy.
    ConvergenceCriterion property - specifies under what condition the algorithm is assumed to converge, using the ConvergenceCriterion enum values: WithinAbsoluteTolerance / WithinRelativeTolerance / WithinAnyTolerance / NumberOfIterations.
    Active property - selectively ignore certain convergence tests.
    Error property - returns the estimated error after a run.
    MaxIterations / MaxEvaluations properties - additional termination criteria: if the algorithm cannot achieve the desired accuracy, it still has to end at some absolute boundary.
    Status property - indicates how the algorithm terminated, using the AlgorithmStatus enum values: NoResult / Busy / Converged (ended normally; the desired accuracy has been achieved) / IterationLimitExceeded / EvaluationLimitExceeded / RoundOffError / BadFunction / Divergent / ConvergedToFalseSolution. After the iteration terminates, the Status should be inspected to verify that the algorithm terminated normally. Alternatively, you can set ThrowExceptionOnFailure to true.
    Result property - returns the result of the algorithm. This property contains the best available estimate, even if the desired accuracy was not obtained.
    IterationsNeeded / EvaluationsNeeded properties - return the number of iterations and the number of function evaluations required to obtain the result.
    Concrete Types of Convergence Test classes
    SimpleConvergenceTest class - tests whether a value is close to zero or very small compared to another value.
    VectorConvergenceTest class - tests convergence of vectors. This class has two additional properties. The Norm property specifies which norm is to be used when calculating the size of the vector, using the VectorConvergenceNorm enum values: EuclidianNorm / Maximum / SumOfAbsoluteValues.
    The ErrorMeasure property specifies how the error is to be measured, using the VectorConvergenceErrorMeasure enum values: Norm / Componentwise.
    ConvergenceTestCollection class - represents a combination of tests. The Quantifier property is a ConvergenceTestQuantifier enum that specifies how the tests in the collection are to be combined: Any / All.
    The AlgorithmHelper class
    This class inherits from IterativeAlgorithm<T> and exposes two methods for convergence testing:
    IsValueWithinTolerance<T> method - determines whether a value is close to another value to within an algorithm's requested tolerance.
    IsIntervalWithinTolerance<T> method - determines whether an interval is within an algorithm's requested tolerance.
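    To make the iterate-until-converged pattern described above concrete, here is a small self-contained C# sketch. It is not the EO API (the status enum and variable names here are illustrative only); it simply mirrors the concepts above - Tolerance, MaxIterations, absolute vs. relative error, and a Status/Result pair - by approximating a square root with Newton's method.
    using System;

    enum SketchStatus { Converged, IterationLimitExceeded }  // simplified stand-in for AlgorithmStatus

    class ConvergenceSketch
    {
        static void Main()
        {
            double target = 2.0;            // we want sqrt(2)
            double x = 1.0;                 // initial guess
            double tolerance = 1e-8;        // cf. the Tolerance property
            int maxIterations = 100;        // cf. MaxIterations
            var status = SketchStatus.IterationLimitExceeded;
            int iterationsNeeded = 0;

            for (int i = 1; i <= maxIterations; i++)
            {
                double next = 0.5 * (x + target / x);   // Newton step for f(x) = x^2 - target
                double absoluteError = Math.Abs(next - x);
                double relativeError = absoluteError / Math.Max(Math.Abs(next), double.Epsilon);

                x = next;
                iterationsNeeded = i;

                // "WithinAnyTolerance": stop as soon as either error measure is below the tolerance.
                if (absoluteError <= tolerance || relativeError <= tolerance)
                {
                    status = SketchStatus.Converged;
                    break;
                }
            }

            Console.WriteLine($"Status: {status}, Result: {x}, IterationsNeeded: {iterationsNeeded}");
        }
    }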

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein
    5 Things I Learned About the IT Labor Shortage
    A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It’s time to wrap up the series that I started back in April, “The Growing IT Labor Shortage: Are You Feeling It?” Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are:
    1. Lack of technical competencies (hard skills)
    2. Lack of available applicants
    3. Lack of experience
    The same report listed Technicians as the most difficult position to fill in the United States. For most companies, Human Capital and Talent Management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, “Simplify Workforce Management and Increase Global Agility”). Everyone I spoke to—partner, customer, and Oracle experts—agreed that it can be extremely challenging to hire and retain IT talent in today’s labor market. And they generally agreed on the causes:
    a. IT is so pervasive that there are myriad moving parts requiring support and expertise,
    b. thus, it’s hard for university graduates to step in and contribute immediately without experience and specialization,
    c. big IT companies generally aren’t the talent incubators that they were in the freewheeling ’90s due to bottom-line pressures that require hiring talent that can hit the ground running, and
    d. it’s often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.
    Here are my top lessons learned from my conversations with the experts.
    1. A Better Title Would Have Been, “The Challenges of Finding and Retaining IT Talent That Matches Your Requirements”
    There are more applicants than jobs, but it’s getting tougher and tougher to find individuals that perfectly fit each and every role. Top-performing companies are increasingly looking to hire the “almost ready”, striving to keep their existing talent more engaged, and leveraging their employees’ social and professional networks to quickly narrow down candidate searches (here’s another whitepaper, “A Strategic Approach to Talent Management”).
    2. Size Matters—But So Does Location
    Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren’t within commuting distance of IT talent strongholds. They can’t always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates high quality of life and opportunities to build their resumes in alignment with their long-term career aspirations.
    3. Get By With a Little Help From Your Friends
    It doesn’t always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get in a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can be assigned to roles that will have the highest positive impact on achieving organizational goals.
    4. It’s Actually Both “What You Know” AND “Who You Know”
    If I were hiring someone today, I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time-consuming AND produces better results. There is also some evidence that suggests new hires from employees’ networks have higher job performance and retention rates.
    5. I Have New Respect for Recruiters and Hiring Managers
    My hat's off to them—it’s not easy hiring and retaining top talent with today’s challenges. Check out the infographic, “A New Day: Taking HR from Chaos to Control”, on Oracle’s Human Capital Management solutions home page. You can also explore all of Oracle’s HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned…we’ll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

< Previous Page | 578 579 580 581 582 583 584 585 586 587 588 589  | Next Page >