Search Results

Search found 4283 results on 172 pages for 'initial'.


  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
Post by David Allan. ODI is renowned for its declarative designer and minimal expression based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow based designer, while retaining ODI’s key differentiators: minimal expression based definition, the ability to incrementally design an interface and to extract/load data from any combination of sources, and, most importantly, being backed by ODI’s extensible knowledge module framework. The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements through consistent interactions of components and concepts, all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster: So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below all of the information from an Oracle bonus table and a bonus file is joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross referencing so you can easily see what's used where within the design. The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another. In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules; in ODI 12c you can pick multiple nodes (logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge module settings on the access points between execution units side by side; in the example one table is retrieved via database links and the other is an external table. In the logical design I had selected an append mode for the integration type, so by default the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement. In ODI 12c, the mapper is just that, a mapper. Design your mapping and write to multiple targets; the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle and writing of XML. Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings and the implementations specialized per technology. 
So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself. In ODI 12c expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions you can decompose it into smaller snippets; below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - it's used in the TOTAL_TAX_AMOUNT and it's used in the UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expressions depend on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected the TOTAL_TAX_AMOUNT in the target datastore and the UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at one time without needlessly clicking around and memorizing information. Our mantra during development was to keep it simple, make the tool more powerful and do even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA Aqualogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI 11g to building the flow based mapping tool to support the future. I hope this was a useful insight. There is so much more to come on this topic; this is just a preview of what you will see of the mapper in ODI 12c.

    Read the article

  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
Since we have a number of new members of the WebCenter Evangelist team - I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning, but his post below is a great example of how customer engagement takes on a life of its own in our globally connected and social digital ecosystem. Is Facebook About to Become a Victim of its Own Success? What if I told you that your brand could advertise so successfully, you wouldn’t have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that—and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate. An article in the Wall Street Journal last week—“Big Brands Like Facebook, But They Don’t Like to Pay” tells the story of Ford’s recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me. The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and ‘Like’ Doug, and some amount on Facebook ads, too, to promote Doug and by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here’s the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year—and increasing sales is every marketer’s goal. And so in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun—and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months. 
Since free advertising is the Holy Grail of marketing both old and new -- and it appears social networks have an advantage in generating that buzz—it seems reasonable to ask: what would happen to brands’ advertising strategies—and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford’s success. Certainly Facebook ad revenues are on the rise—eMarketer expects Facebook’s ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That’s bad news for TV and the already battered print media, and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be that there is an inherent limiting factor in the ad spend they can capture, as exemplified by Ford’s experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford’s formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me - installing Visual Studio before explicitly installing .NET 3.5 from Windows Features, in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows Features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Features dialog, I thought - .NET 3.5 - who needs it. I'd have been happy to not have to install .NET 3.5, but unfortunately I found out quite a while after the initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 (and that install a non-SP1 version of it), you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer was a little whacky. After some back and forth, Alex Skorkin on Twitter pointed me in the right direction: he asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key, which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy, because the standalone installer doesn't work on Windows 8. I can't get either the .NET 3.5 installer or the SP1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also a no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
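As an aside, the check Alex pointed me to boils down to reading the SP value under the two NDP registry keys mentioned above. Here is a minimal C# sketch of that check (my own illustration, not part of any installer; run it as a 64 bit process so both registry views are visible as written):

    using System;
    using Microsoft.Win32;

    class NetFx35SpCheck
    {
        static void Main()
        {
            // The two keys discussed above: the native (64 bit) view and the Wow6432Node (32 bit) view
            string[] paths =
            {
                @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5",
                @"SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5"
            };

            foreach (string path in paths)
            {
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
                {
                    if (key == null)
                    {
                        Console.WriteLine("{0}: not found", path);
                        continue;
                    }
                    // "Install" = 1 means the framework is present; "SP" holds the service pack level (0 = none, 1 = SP1)
                    Console.WriteLine("{0}: Install={1}, SP={2}", path, key.GetValue("Install"), key.GetValue("SP"));
                }
            }
        }
    }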
How did this happen? I'm not sure exactly about the cause and effect, but I suspect the story goes like this: I installed Windows 8 without support for .NET 3.5, then installed Visual Studio 2010, which installs .NET 3.5 (no SP). I now had .NET 3.5 installed but without SP1. I then tried to install CodeRush (error: .NET 3.5 SP1 required) and enabled .NET 3.5 in Windows Features. I figured enabling the .NET 3.5 Windows Feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left at least the 32 bit install with no SP1 installed. How to Fix it My final solution was to completely uninstall .NET 3.5 *and* to reboot: go to Windows Features, uncheck the .NET Framework 3.5, restart Windows, go back to Windows Features and check .NET Framework 3.5 - and voila, I now have a proper installation of .NET 3.5. I tried this before but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5! More Problems The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or not downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows: dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • Finding the problem on a partially succeeded build

    - by Martin Hinshelwood
Now that I have the Build failing because of a genuine bug and not just because of a test framework failure, let's see if we can trace through to find why the first test in our new application failed. Let's look at the build and see if we can see why there is a red cross on it. First, let's open that build list. In Team Explorer, expand your Team Project Collection | Team Project and then Builds. Double-click the offending build. Figure: Opening the Build list is a key way to see what the current state of your software is.   Figure: A test is failing, but we can now view the Test Results to find the problem      Figure: You can quite clearly see that the test has failed with “The device is not ready”. To me, “The device is not ready” smacks of a System.IO exception, but it passed on my local computer, so why not on the build server? It's a FaultException, so it is most likely coming from the Service and not the client, so let's take a look at the method that the test is calling: bool IProfileService.SaveDefaultProjectFile(string strComputerName) { ProjectFile file = new ProjectFile() { ProjectFileName = strComputerName + "_" + System.DateTime.Now.ToString("yyyyMMddhhmmsss") + ".xml", ConnectionString = "persist security info=False; pooling=False; data source=(local); application name=SSW.SQLDeploy.vshost.exe; integrated security=SSPI; initial catalog=SSWSQLDeployNorthwindSample", DateCreated = System.DateTime.Now, DateUpdated = System.DateTime.Now, FolderPath = @"C:\Program Files\SSW SQL Deploy\SampleData\", IsComplete=false, Version = "1.3", NewDatabase = true, TimeOut = 5, TurnOnMSDE = false, Mode="AutomaticMode" }; string strFolderPath = "D:\\"; //LocalSettings.ProjectFileBasePath; string strFileName = strFolderPath + file.ProjectFileName; try { using (FileStream fs = new FileStream(strFileName, FileMode.Create)) { DataContractSerializer serializer = new DataContractSerializer(typeof(ProjectFile)); using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs)) { serializer.WriteObject(writer, file); } } } catch (Exception ex) { //TODO: Log the exception throw ex; return false; } return true; } Figure: You can see on lines 9 and 18 that there are calls being made to specific folders and disks. What is wrong with this code? What assumptions or mistakes could the developer have made to make this look OK? That every install would be to “C:\Program Files\SSW SQL Deploy”. That every computer would have a “D:\\”. That checking in code at 6pm because they had to go home was a good idea. Let's solve each of these problems: We are in a web service… let's store data within the web root, so we can call Server.MapPath(“~/App_Data/SSW SQL Deploy\SampleData”) instead (a short sketch appears at the end of this post). Never reference an explicit path; if you need some storage for your application, use IsolatedStorage. Shelve your code instead. What else could have been done? Code review before check-in – the developer should have shelved their code and asked another dev to look at it. Use defensive programming – make sure that any code that has the possibility of failing has checks. Any more options? Let me know and I will add them. What do we do? The correct thing to do is to add a Bug to the backlog, but as this is probably going to be fixed in sprint, I will add it directly to the sprint backlog. Right click on the failing test and select “Create Work Item | Bug”. Figure: Create an associated bug to add to the backlog. Set the values for the Bug, making sure that it goes into the right sprint and Area. 
Make your steps to reproduce as explicit as possible, but “See test” is valid under these circumstances.   Figure: Add it to the correct Area and set the Iteration to the Area name, or to the Sprint if you think it will be fixed in the Sprint, and make sure you bring it up at the next Scrum Meeting. Note: make sure you leave the “Assigned To” field blank; in Scrum, team members sign up for work, you do not give it to them. The developer who broke the test will most likely either sign up for the bug, or say that they are stuck and need help. Note: Visual Studio has taken care of associating the failing test with the Bug. Save… Technorati Tags: WCF,MSTest,MSBuild,Team Build 2010,Team Test 2010,Team Build,Team Test
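Coming back to the hard-coded path problem above, here is a minimal sketch of the web-root approach. It assumes the service is hosted under ASP.NET (so HostingEnvironment.MapPath is available) and reuses the post's ProjectFile type; treat it as an illustration of the idea rather than the actual SSW fix:

    using System.IO;
    using System.Runtime.Serialization;
    using System.Web.Hosting;   // requires a reference to System.Web
    using System.Xml;

    public static class ProjectFileStore
    {
        // Resolve a folder under the web root instead of a hard-coded drive letter.
        public static bool Save(ProjectFile file)
        {
            string folder = HostingEnvironment.MapPath("~/App_Data/SSW SQL Deploy/SampleData");
            Directory.CreateDirectory(folder); // do not assume the folder already exists

            string fileName = Path.Combine(folder, file.ProjectFileName);
            using (FileStream fs = new FileStream(fileName, FileMode.Create))
            using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs))
            {
                new DataContractSerializer(typeof(ProjectFile)).WriteObject(writer, file);
            }
            return true;
        }
    }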

    Read the article

  • Formal Languages, Inductive Proofs &amp; Regular Expressions

    - by MarkPearl
So I am slogging away at my UNISA stuff. I have just finished the initial non-stop read through of the first 11 chapters of my COS 201 textbook - “Introduction to Computer Theory, 2nd Edition” by Daniel Cohen. It has been an interesting couple of days, with familiar concepts coming up as well as some new territory. In this posting I am going to cover the first couple of chapters of the book. Let's start with Formal Languages… What exactly is a formal language? Pretty much a no duh question for me but still a good one to ask – a formal language is a language that is defined in a precise mathematical way. Does that mean that the English language is a formal language? I would say no – and my main motivation for this is that one can have an English sentence that is grammatically correct yet also ambiguous. For example the ambiguous sentence: "I once shot an elephant in my pyjamas.” For this and possibly many other reasons that I am unaware of, English is termed a “Natural Language”. So why the importance of formal languages in computer science? Again a no duh question in my mind… If we want computers to be effective and useful tools then we need them to be able to evaluate a series of commands in some form of language such that, when interpreted by the device, no confusion will exist as to what we were requesting. Imagine the mayhem that would exist if a computer misinterpreted a command to print a document and instead decided to delete it. So what is a Formal Language made up of… For my study purposes a language is made up of a finite alphabet. For a formal language to exist there needs to be a specification of the language that will describe whether a string of characters has membership in the language or not. There are two basic ways to do this: by a “machine” that will recognize strings of the language (e.g. Finite Automata), or by a rule that describes how strings of a language can be formed (e.g. Regular Expressions). When we use the phrase “string of characters”, we can also be referring to a “word”. What is an Inductive Proof? So I am not too far into my textbook and of course it starts referring to proofs and different types. I have had to go through several different approaches to proofs in the past, but I can never remember their formal names, so when I saw “inductive proof” I thought to myself – what the heck is that? Google to the rescue… An inductive proof is like a normal proof but it employs a neat trick which allows you to prove a statement about an arbitrary number n by first proving it is true when n is 1 and then assuming it is true for n=k and showing it is true for n=k+1. The idea is that if you want to show that someone can climb to the nth floor of a fire escape, you need only show that you can climb the ladder up to the fire escape (n=1) and then show that you know how to climb the stairs from any level of the fire escape (n=k) to the next level (n=k+1). Does this sound like a form of recursion? No surprise then that in the same chapter they deal with recursive definitions. An example of a recursive definition for the language EVEN would be the 3 rules below: 2 is in EVEN. If x is in EVEN then so is x+2. The only elements in the set EVEN are those that can be produced by the rules above. Nothing too exciting… So if a definition for a language is given recursively, then it makes sense that the language can be proved using induction (a small sketch of the EVEN definition follows below). 
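To make the recursive definition of EVEN concrete, here is a tiny C# sketch of my own (not from the textbook) that tests membership by applying the two rules in reverse:

    using System;

    static class EvenLanguage
    {
        // Membership test for the recursively defined set EVEN:
        //   rule 1: 2 is in EVEN
        //   rule 2: if x is in EVEN then so is x + 2
        // Working backwards: n is in EVEN if n == 2, or if n - 2 is in EVEN.
        public static bool IsInEven(int n)
        {
            if (n == 2) return true;   // base case (rule 1)
            if (n < 2) return false;   // nothing below 2 can be produced by the rules
            return IsInEven(n - 2);    // inverse of rule 2
        }

        static void Main()
        {
            Console.WriteLine(IsInEven(10)); // True
            Console.WriteLine(IsInEven(7));  // False
        }
    }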
Regular Expressions So I am wondering to myself what use is this all – in fact – I find this the biggest challenge to any university material is that it is quite hard to find the immediate practical applications of some theory in real life stuff. How great was my joy when I suddenly saw the word regular expression being introduced. I had been introduced to regular expressions on Stack Overflow where I was trying to recognize if some text measurement put in by a user was in a valid form or not. For instance, the imperial system of measurement where you have feet and inches can be represented in so many different ways. I had eventually turned to regular expressions as an easy way to check if my parser could correctly parse the text or not and convert it to a normalize measurement. So some rules about languages and regular expressions… Any finite language can be represented by at least one if not more regular expressions A regular expressions is almost a rule syntax for expressing how regular languages can be formed regular expressions are cool For a regular expression to be valid for a language it must be able to generate all the words in the language and no other words. This is important. It doesn’t help me if my regular expression parses 100% of my measurement texts but also lets one or two invalid texts to pass as well. Okay, so this posting jumps around a bit – but introduces some very basic fundamentals for the subject which will be built on in later postings… Time to go and do some practical examples now…
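As a small, concrete illustration of the measurement example above, here is a C# sketch of my own (the pattern is hypothetical, not the one from the Stack Overflow question) that accepts a couple of common feet-and-inches forms and normalizes them to inches:

    using System;
    using System.Text.RegularExpressions;

    static class ImperialParser
    {
        // Matches forms like 5'10", 5 ft 10 in, 6', 71 in.
        // Illustrative only - real-world input would need a far more forgiving pattern.
        static readonly Regex Pattern = new Regex(
            @"^\s*(?:(?<feet>\d+)\s*(?:'|ft)\s*)?(?:(?<inches>\d+)\s*(?:""|in)\s*)?$",
            RegexOptions.IgnoreCase);

        public static bool TryParseToInches(string text, out int totalInches)
        {
            totalInches = 0;
            Match m = Pattern.Match(text);
            if (!m.Success || (!m.Groups["feet"].Success && !m.Groups["inches"].Success))
                return false; // no match, or an empty string that matched trivially

            if (m.Groups["feet"].Success) totalInches += 12 * int.Parse(m.Groups["feet"].Value);
            if (m.Groups["inches"].Success) totalInches += int.Parse(m.Groups["inches"].Value);
            return true;
        }

        static void Main()
        {
            int inches;
            Console.WriteLine(TryParseToInches("5'10\"", out inches) ? inches.ToString() : "invalid"); // 70
            Console.WriteLine(TryParseToInches("6 ft", out inches) ? inches.ToString() : "invalid");   // 72
        }
    }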

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need for ego in the technology field. There are always better solutions and better ideas out there and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress towards the master level of performance tuning, I find that it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation next time. A few days ago I received a query that the user wanted to tune further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database. SELECT * FROM HumanResources.Employee e INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID; The user had a query similar to the one above; it was used in a very critical report and they wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns in the SELECT statement that the application actually needs, and look at the query pattern and data workload to find the optimal index for it. Before I could give further solutions I was told by the user that they need all the columns from all the tables, and that creating indexes was not allowed in their system. They could only re-write queries or use hints to further tune this query. Now I was in the constraint box – I believe * is not a great idea, but if they want all the columns, we can't do much besides using *. Additionally, if I cannot create a further index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here. First open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once you open the DB Optimizer Tuning Job, follow the various steps indicated in the following diagram. Essentially we will take our original script and paste it into Step 1: New SQL Text, and right after that we will enable Step 2 for generating various cases, Step 3 for detailed analysis and Step 4 for executing each generated case. Finally we will click on Analysis in Step 5, which will generate the detailed analysis report in the results pane. The details pane looks like this. It generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates various execution plans and displays them in the results. You can clearly notice that the original query had a cost of 0.0841 and logical reads of about 607 pages, whereas the various options following it have different execution costs as well as logical reads. There are a few cases where we have higher logical reads and a few cases where we have very low logical reads. 
If we pay attention, the row right after the original query has Merge_Join_Query in its description and has the lowest execution cost value of 0.044 and the lowest logical reads of 29. This row contains the query which is the most optimal re-write of the original query. Let us double-click it. Here is the query: SELECT * FROM HumanResources.Employee e INNER JOIN HumanResources.EmployeeDepartmentHistory edh ON e.BusinessEntityID = edh.BusinessEntityID INNER JOIN HumanResources.Shift s ON edh.ShiftID = s.ShiftID OPTION (MERGE JOIN) If you notice, the above query has an additional hint of Merge Join. With the help of this Merge Join query hint the query is now performing much better than before. The entire process takes less than 60 seconds. Please note that while the Merge Join hint was optimal for this query, it is not necessarily true that the same hint will be helpful in all queries. Additionally, if the workload or data pattern changes, the merge join query hint may no longer be the optimal join. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries and I discourage all of my users from using them. However, if you look at this example, this is a great case where hints are optimizing the performance of the query. It is humanly not possible to test out the various query hints and index options with the query to figure out which is the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide us along the way and select the best option from the suggestions provided. Let me know what you think of this article as well as your experience with DB Optimizer. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Inserting and Deleting Sub Rows in GridView

    - by Vincent Maverick Durano
    A user in the forums (http://forums.asp.net) is asking how to insert  sub rows in GridView and also add delete functionality for the inserted sub rows. In this post I'm going to demonstrate how to this in ASP.NET WebForms.  The basic idea to achieve this is we just need to insert row data in the DataSource that is being used in GridView since the GridView rows will be generated based on the DataSource data. To make it more clear then let's build up a sample application. To start fire up Visual Studio and create a WebSite or Web Application project and then add a new WebForm. In the WebForm ASPX page add this GridView markup below:   1: <asp:gridview ID="GridView1" runat="server" AutoGenerateColumns="false" onrowdatabound="GridView1_RowDataBound"> 2: <Columns> 3: <asp:BoundField DataField="RowNumber" HeaderText="Row Number" /> 4: <asp:TemplateField HeaderText="Header 1"> 5: <ItemTemplate> 6: <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> 7: </ItemTemplate> 8: </asp:TemplateField> 9: <asp:TemplateField HeaderText="Header 2"> 10: <ItemTemplate> 11: <asp:TextBox ID="TextBox2" runat="server"></asp:TextBox> 12: </ItemTemplate> 13: </asp:TemplateField> 14: <asp:TemplateField HeaderText="Header 3"> 15: <ItemTemplate> 16: <asp:TextBox ID="TextBox3" runat="server"></asp:TextBox> 17: </ItemTemplate> 18: </asp:TemplateField> 19: <asp:TemplateField HeaderText="Action"> 20: <ItemTemplate> 21: <asp:LinkButton ID="LinkButton1" runat="server" onclick="LinkButton1_Click" Text="Insert"></asp:LinkButton> 22: </ItemTemplate> 23: </asp:TemplateField> 24: </Columns> 25: </asp:gridview>   Then at the code behind source of ASPX page you can add this codes below:   1: private DataTable FillData() { 2:   3: DataTable dt = new DataTable(); 4: DataRow dr = null; 5:   6: //Create DataTable columns 7: dt.Columns.Add(new DataColumn("RowNumber", typeof(string))); 8:   9: //Create Row for each columns 10: dr = dt.NewRow(); 11: dr["RowNumber"] = 1; 12: dt.Rows.Add(dr); 13:   14: dr = dt.NewRow(); 15: dr["RowNumber"] = 2; 16: dt.Rows.Add(dr); 17:   18: dr = dt.NewRow(); 19: dr["RowNumber"] = 3; 20: dt.Rows.Add(dr); 21:   22: dr = dt.NewRow(); 23: dr["RowNumber"] = 4; 24: dt.Rows.Add(dr); 25:   26: dr = dt.NewRow(); 27: dr["RowNumber"] = 5; 28: dt.Rows.Add(dr); 29:   30: //Store the DataTable in ViewState for future reference 31: ViewState["CurrentTable"] = dt; 32:   33: return dt; 34:   35: } 36:   37: private void BindGridView(DataTable dtSource) { 38: GridView1.DataSource = dtSource; 39: GridView1.DataBind(); 40: } 41:   42: private DataRow InsertRow(DataTable dtSource, string value) { 43: DataRow dr = dtSource.NewRow(); 44: dr["RowNumber"] = value; 45: return dr; 46: } 47: //private DataRow DeleteRow(DataTable dtSource, 48:   49: protected void Page_Load(object sender, EventArgs e) { 50: if (!IsPostBack) { 51: BindGridView(FillData()); 52: } 53: } 54:   55: protected void LinkButton1_Click(object sender, EventArgs e) { 56: LinkButton lb = (LinkButton)sender; 57: GridViewRow row = (GridViewRow)lb.NamingContainer; 58: DataTable dtCurrentData = (DataTable)ViewState["CurrentTable"]; 59: if (lb.Text == "Insert") { 60: //Insert new row below the selected row 61: dtCurrentData.Rows.InsertAt(InsertRow(dtCurrentData, row.Cells[0].Text + "-sub"), row.RowIndex + 1); 62:   63: } 64: else { 65: //Delete selected sub row 66: dtCurrentData.Rows.RemoveAt(row.RowIndex); 67: } 68:   69: BindGridView(dtCurrentData); 70: ViewState["CurrentTable"] = dtCurrentData; 71: } 72:   73: protected void GridView1_RowDataBound(object 
sender, GridViewRowEventArgs e) { 74: if (e.Row.RowType == DataControlRowType.DataRow) { 75: if (e.Row.Cells[0].Text.Contains("-sub")) { 76: ((LinkButton)e.Row.FindControl("LinkButton1")).Text = "Delete"; 77: } 78: } 79: }   As you can see, the code above is pretty straightforward and self-explanatory, but to give you a short explanation: it is composed of three (3) private methods, which are FillData(), BindGridView() and InsertRow(). The FillData() method returns a DataTable and basically creates dummy data in the DataTable to be used as the GridView DataSource. You can replace the code in that method if you want to use actual data from a database, but for the purpose of this example I just fill the DataTable with dummy data. BindGridView() is a method that handles the actual binding of the GridView. InsertRow() is a method that returns a DataRow; it handles the insertion of the sub row. Now in the LinkButton OnClick event, we cast the sender to a LinkButton to determine the specific object that fired the event and to get the row values. We then reference the data from ViewState to get the current data that is being used in the GridView. If the LinkButton text is "Insert" then we insert a new row into the DataSource (in this case the DataTable) based on the RowIndex; if not, then we delete the sub row that was added. Here are some screen shots of the output below: On initial load:   After inserting a sub row:   That's it! I hope someone finds this post useful!   Technorati Tags: ASP.NET,C#,GridView

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
Whenever we format a disk volume, it is a good idea to name the label so it will be easier to categorize. To label a volume, we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT - transaction-safe FAT) and many features to let you scan and even defrag the volume, but not labeling. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all; it neither reports the label to the OS nor changes the label on FAT. So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. Here are some online resources that are handy for parsing FAT: http://en.wikipedia.org/wiki/File_Allocation_Table http://www.pjrc.com/tech/8051/ide/fat32.html http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h or the above links for the fields we discuss here. The first sector of a FAT volume (it does not have to be the first sector of a disk) is the FAT boot sector and BPB (BIOS Parameter Block). At offset 43, bgbsVolumeLabel (or bsVolumeLabel on FAT16) stores the volume label, but note the spec also indicates that "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created.". So we can't just simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h). Locating and accessing the boot sector is quite straightforward - as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is like this: boot sector (Volume ID in the figure), followed by reserved sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (can be 1 or 2), after that the root directory (FAT12/16 only; not shown in the figure), then the beginning of the files and directories. In FAT12/16, the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32, this rule is no longer true: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (at offset 44 in the boot sector). Although this field is usually 0x00000002 (that is how CE initializes the root directory after formatting a volume - note we should never assume it is always true), which means the first cluster contains data, the root directory is not contiguous as it is in FAT12/16; it can be fragmented just like a regular file. So we need to access the root directory (of FAT32) by hopping from one cluster to another, traversing the FAT table. Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily. 
But I would like to discuss the following code pieces:     dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;     dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries; Note that dwReservedSectors is 32 on FAT32 and 1 on FAT12/16. Root Entries is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size for FAT12/16 (usually 512, defined in DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:   memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel)); This sets the volume label to an empty string. Now let's carry on to the next section - writing the root directory.     if (fo.dwFatVersion == 32) {         if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {             dwRootSectors = dwSectorsPerCluster;         }         else {             DIRENTRY    dirEntry;             DWORD       offset;             int               iVolumeNo;             memset(pbBlock, 0, pdi->di_bytes_per_sect);             memset(&dirEntry, 0, sizeof(DIRENTRY));                         dirEntry.de_attr = ATTR_VOLUME_ID;             // the first one is volume label             memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));             memcpy(pbBlock, &dirEntry, sizeof(dirEntry));              ...             // Skip the next step of zeroing out clusters             dwCurrentSec += dwSectorsPerCluster;             dwRootSectors = 0;         }     }     // Each new root directory sector needs to be zeroed.     memset(pbBlock, 0, cbSizeBlk);     iRootSec=0;     while ( iRootSec < dwRootSectors) { Basically, the code zeroes out each entry in the root directory depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors we need for the root entries (512 in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, it initializes the root directory with special volume label entries for a special purpose. Despite its unusual initialization process for TFAT, it does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter; just remember to also sync the volume label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
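To make the directory entry side of this concrete, here is a rough C# sketch of my own (illustrative only, not the CE formatter code) of the 32 byte volume label entry that would be written into the root directory - an 11 byte space padded name plus the ATTR_VOLUME_ID attribute, with the remaining fields zeroed. The boot sector field (bsVolumeLabel/bgbsVolumeLabel at offset 43) would be patched with the same 11 byte string:

    using System;
    using System.Text;

    static class FatVolumeLabel
    {
        const byte AttrVolumeId = 0x08; // ATTR_VOLUME_ID in the FAT spec

        // Builds the 32 byte root directory entry that represents a volume label.
        public static byte[] BuildLabelEntry(string label)
        {
            if (label == null) label = string.Empty;
            label = label.ToUpperInvariant();
            if (label.Length > 11) label = label.Substring(0, 11); // the short name field is 11 bytes

            byte[] entry = new byte[32];                               // every FAT directory entry is 32 bytes
            byte[] name = Encoding.ASCII.GetBytes(label.PadRight(11)); // space padded, no NUL terminator
            Array.Copy(name, entry, 11);                               // bytes 0-10: name
            entry[11] = AttrVolumeId;                                  // byte 11: attribute byte
            // bytes 12-31 (times, dates, starting cluster, file size) remain zero for a label entry
            return entry;
        }
    }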

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal access in arbitrary 'tick' time units. Click the image to see it full sized. Pooled connections Beginning with the top left graph, At tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. Looking at the top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes but this is almost too small to see on the graph. Non-pooled connections Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of Server processes on the top right graph. 
There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it. Hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load. Summary Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated for the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • Some New .NET Toys (Repost)

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft’s campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams up at Microsoft. The weather wasn’t exactly sunny but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at SafeCo field) While much of what we saw is covered under NDA, there a ton of great things in the pipeline from Microsoft and many things that are already available (or just became so) that I wasn’t necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy! Visual Studio 2010 SP1 Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as: Silverlight 4 is included in the box (as opposed to a separate install) Silverlight 4 Profiling WCF RIA Services SP1 Intellitrace for 64-bit and SharePoint ASP.NET now easily supports IIS Express and SQL CE Want a description of all that’s new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article. Portable Library Tools CTP Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Win7 Phone, XBox). With this Add-In installed you can add a new project of type “Portable Library” and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to use only features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom built to their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works on this great blog post. Visual Studio Async CTP Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Due to the focus on async programming in all types of platforms (and it being the ONLY option in Silverlight and Win7 phone) a move towards a simpler and more understandable model is always a good thing. This CTP (called Visual Studio Async CTP) can be downloaded here. You can read more about this CTP on this blog post. MSDN Code Samples Gallery Microsoft has also launched new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site and other developers can see your request and provide a sample to the site to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies. 
By using .vsix as the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for your future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need. I'll definitely be looking there in the future as I need something or want to share something. MSDN Search Capabilities Another item I learned recently and was not aware of (that might seem trivial to some) is the power of the MSDN site's search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the "Search MSDN with Bing" box at the very top of the page you will see some very interesting results. First, the search actually doesn't just search the MSDN library; it searches: MSDN Library, All Microsoft Blogs, CodePlex, StackOverflow, Downloads, MSDN Magazine, and the Support Knowledgebase (I'm not sure it even ends there, but the above are all I know of). Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returned results from CodePlex, each row in the search results screen would include a large amount of information specific to CodePlex such as: Looking at the above results immediately tells you everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.
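As a footnote to the Async CTP mentioned above, here is a minimal sketch of the async/await pattern it introduced, written in the shape the feature eventually shipped in (C# 5 / .NET 4.5, where HttpClient lives in System.Net.Http); the CTP itself used slightly different helper types, so treat this as illustrative:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AsyncSketch
    {
        // Downloads a page without blocking the calling thread.
        static async Task<int> GetLengthAsync(string url)
        {
            using (var client = new HttpClient())
            {
                string body = await client.GetStringAsync(url); // control returns to the caller while the request runs
                return body.Length;
            }
        }

        static void Main()
        {
            // Blocking on .Result here only to keep the demo a simple console app
            int length = GetLengthAsync("http://msdn.microsoft.com").Result;
            Console.WriteLine("Downloaded {0} characters", length);
        }
    }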

    Read the article

  • NET Math Libraries

    - by JoshReuben
NET Mathematical Libraries   .NET Builder for Matlab The MathWorks Inc. - http://www.mathworks.com/products/netbuilder/ MATLAB Builder NE generates MATLAB based .NET and COM components with royalty-free deployment; it creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them. .NET/Link for Mathematica www.wolfram.com a product that integrates Mathematica and Microsoft's .NET platform in both directions: call .NET from Mathematica - use arbitrary .NET types directly from the Mathematica language - or use and control the Mathematica kernel from a .NET program. It turns Mathematica into a scripting shell to leverage the computational services of Mathematica, lets you write custom front ends for Mathematica or use Mathematica as a computational engine for another program, and comes with full source code. Leverages MathLink - Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs. .NET/Link abstracts the low-level details of the MathLink C API. Extreme Optimization http://www.extremeoptimization.com/ a collection of general-purpose mathematical and statistical classes built for the .NET framework. It combines a math library, a vector and matrix library, and a statistics library in one package. Download the trial of version 4.0 to try it out. Multi-core ready - full support for Task Parallel Library features including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including: linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation), equation solvers. Mathematics leverages parallelism using .NET 4.0's Task Parallel Library. Basic math: Complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation. Solving equations: Solve equations in one variable, or solve systems of linear or nonlinear equations. Curve fitting: Linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials. Optimization: find the minimum or maximum of a function in one or more variables, linear programming and mixed integer programming. Numerical integration: Compute integrals over finite or infinite intervals, over 2D and higher dimensional regions. Integrate systems of ordinary differential equations (ODEs). Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit). BigInteger, BigRational, and BigFloat: Perform operations with arbitrary precision. Vector and Matrix Library Real and complex vectors and matrices. Single and double precision for elements. Structured matrix types: including triangular, symmetrical and band matrices. Sparse matrices. Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition. Portability and performance: Calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit). Statistics Data manipulation: Sort and filter data, process missing values, remove outliers, etc. Supports .NET data binding. Statistical Models: Simple, multiple, nonlinear, logistic, Poisson regression. Generalized Linear Models. One and two-way ANOVA. Hypothesis Tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests, such as the Anderson-Darling test for normality, one and two-sample Kolmogorov-Smirnov test, and Levene's test for homogeneity of variances. 
Multivariate Statistics: K-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions. Statistical Distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions. Random numbers: Random variates from any distribution, 4 high-quality random number generators, low discrepancy sequences, shufflers. New in version 4.0 (November, 2010): Support for .NET Framework Version 4.0 and Visual Studio 2010; TPL parallelized – multicore ready; sparse linear program solver - can solve problems with more than 1 million variables; mixed integer linear programming using a branch and bound algorithm; special functions: hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral; full set of window functions for FFTs. Pricing (license / update subscription): Single Developer License $999 / $399; Team License (3 developers) $1999 / $799; Department License (8 developers) $3999 / $1599; Site License (unlimited developers in one physical location) $7999 / $3199.   NMath http://www.centerspace.net .NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, cluster analysis; built on the Intel Math Kernel Library (MKL), which contains highly-optimized, extensively-threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage). Pricing (license / update subscription): Single Developer License $1295 / $388; Team License (5 developers) $5180 / $1554.
decompositions) non-uniform probability distributions, multivariate distributions, sample generation alternative uniform random number generators descriptive statistics, including order statistics various interpolation methods, including barycentric approaches and splines numerical function integration (quadrature) routines integral transforms, like fourier transform (FFT) with arbitrary lengths support, and hartley spectral-space aware sequence manipulation (signal processing) combinatorics, polynomials, quaternions, basic number theory. parallelized where appropriate, to leverage multi-core and multi-processor systems fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW) provides a native facade for F# developers
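To give a feel for how these libraries are used in practice, here is a minimal sketch that solves a small linear system with Math.NET Numerics. The factory method names (DenseMatrix.OfArray, DenseVector.OfArray) are from recent releases and have changed between versions, so treat this as illustrative rather than a definitive API reference.

    // Minimal sketch, assuming a recent Math.NET Numerics release where the
    // double-precision dense types live in MathNet.Numerics.LinearAlgebra.Double.
    using System;
    using MathNet.Numerics.LinearAlgebra.Double;

    class LinearSolveDemo
    {
        static void Main()
        {
            // Solve A * x = b for x.
            var a = DenseMatrix.OfArray(new[,] { { 4.0, 3.0 },
                                                 { 6.0, 3.0 } });
            var b = DenseVector.OfArray(new[] { 10.0, 12.0 });

            // LU factorization followed by the triangular solves.
            var x = a.LU().Solve(b);

            Console.WriteLine(x); // roughly [1, 2]
        }
    }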

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-ordinator for a leading South African retailer at store level. Don’t get me wrong, there is absolutely nothing wrong with an honest day’s labor and in fact I highly recommend it; however, I’m obliged to refer to the designation cautiously, because in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong – I only had to phone it in… Luckily that wasn’t all I did. My duties extended to another interesting annual occurrence – stock take. Although a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete. I also remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn’t get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry tens of thousands of different items… let’s just say I believe that’s when I first became a coffee addict. In those days the act of counting stock was a very humble process, nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort and then count. Thereafter they had to record their findings on a complementary piece of paper. Every night I would manage several teams. Each team was divided into two groups - counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels and making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental and golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn’t have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious - late at night and with so much to do we were prone to make mistakes, and on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now, years later, this rule or guideline still holds true as we develop software (as far removed as software development may be from counting stock). For some reason it is a fundamental guideline we’re simply ignorant of. We write our code, we write our tests and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today – is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound. 
The practice, however, has its obvious drawbacks. Firstly and mainly, it is resource intensive, and from what I’ve seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find a solution revealed a project by Microsoft Research called Pex. Pex is a framework which automatically creates several test scenarios for each method or class you write. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. Pex helps to audit your work. It lends a fresh set of eyes to any project you’re working on and, best of all, it is cost effective and fast. Check it out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we’ll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
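To make the idea concrete, here is a rough sketch of the kind of parameterized test Pex explores. The StockCounter class is invented for illustration, and the attribute names are as recalled from the Pex documentation, so treat the details as assumptions rather than code from the post.

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Illustrative class under test - not part of the original post.
    public static class StockCounter
    {
        public static int CountShelf(int[] shelfQuantities)
        {
            int total = 0;
            foreach (int quantity in shelfQuantities)
                total += quantity;
            return total;
        }
    }

    [TestClass]
    [PexClass(typeof(StockCounter))]
    public partial class StockCounterTest
    {
        // Instead of the developer hand-picking inputs (and baking in the same
        // assumptions as the production code), Pex generates inputs that exercise
        // every branch it can reach - the "fresh set of eyes".
        [PexMethod]
        public void CountShelfNeverNegative(int[] shelfQuantities)
        {
            PexAssume.IsNotNull(shelfQuantities);
            int total = StockCounter.CountShelf(shelfQuantities);
            // Pex will actually find a counterexample here (negative quantities),
            // which is exactly the point: it challenges the author's assumptions.
            Assert.IsTrue(total >= 0);
        }
    }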

    Read the article

  • Chart Control in ASP.Net 4 – Second Part

    - by sreejukg
      A couple of weeks ago, I wrote an introduction to the chart control available in the .Net framework. In that article, I explained the basic usage of the chart control with a simple example. You can read that article at http://weblogs.asp.net/sreejukg/archive/2010/12/31/getting-started-with-chart-control-in-asp-net-4-0.aspx. In this article I am going to demonstrate how various types of charts can be generated easily using the ASP.Net chart control. Let us recollect the data sample we were working with in the previous article:

id  SaleAmount  SalesPerson  SaleType     SaleDate    CompletionStatus (%)
1   1000        Jack         Development  2010-01-01  100
2   300         Mills        Consultancy  2010-04-14  90
3   4000        Mills        Development  2010-05-15  80
4   2500        Mike         eMarketting  2010-06-15  40
5   1080        Jack         Development  2010-07-15  30
6   6500        Mills        Consultancy  2010-08-24  65

In this article I am going to demonstrate various graphical reports generated from this data with the help of the chart control. The following are the reports I am going to generate: 1. the share of sales by each salesperson; 2. the share of sales according to sale type; 3. the progress of sales over a time period. I am going to demonstrate how to bind the chart control programmatically. In order to facilitate this, I added an aspx page named “SalesAnalysis.aspx” to my project. In the page I added the following controls: 1. a DropDownList control – with id ddlAnalysisType; the user will use this to choose the type of chart they want to see. 2. a Button control – with id btnSubmit; by clicking this button, the chart based on the dropdownlist selection will be shown to the user. 3. a Label control – with id lblMessage, to display a message to the user; initially this asks the user to select an option and click the button. 4. a Chart control – with id chrtAnalysis; by default I set Visible = false so that during the page load the chart is hidden from the users. The following is the initial output of the page. Generating the chart for salesperson share: from Visual Studio, I double clicked the button, which created the event handler btnSubmit_Click. In the button submit event handler, I use a switch case to execute the corresponding SQL statement and bind it to the chart control. Below is the code for generating the salesperson share chart using a pie chart. The above code produces the following output. The steps for creating the above chart can be summarized as follows: you specify a chart area, then a series, and bind the chart to some x and y values. That is it. If you want to control the chart size and position, you can set the properties of the ChartArea.Position element. For example, in the previous code, after instantiating the chart area, setting the code below will give you a bigger pie chart. c.Position.Width = 100; c.Position.Height = 100; The width and height values are percentages; in this case the chart will be generated by utilizing the full width and height of the chart object. See the output updated with the width and height set to 100% each. Generating the chart for sales type share: for generating the chart according to the sale type, you just need to change the SQL query and the x and y values of the chart. The SQL query used is “SELECT SUM(saleAmount) amount, SaleType from SalesData group by SaleType”; the X-value is SaleType and the Y-value is amount. 
s.XValueMember = "SaleType"; s.YValueMembers = "amount"; After modifying the earlier code with these, the following output is generated. Generating the chart for sales progress over a time period: for charting the progress of sales (sales amount against time), a line chart is the ideal tool. To use a line chart, set the chart type to System.Web.UI.DataVisualization.Charting.SeriesChartType.Line. We also need to retrieve the amount and sale date from the data source; I used the following query for this: “SELECT SaleAmount, SaleDate FROM SalesData”. The output for the line chart is as follows. Now you have seen how easily you can build various types of charts. The chart control is an excellent one that helps you bring business intelligence to your applications. What I demonstrated is only a small part of what you can do with the chart control. Refer to http://msdn.microsoft.com/en-us/library/dd456632.aspx for further reading. If you want to get the project files in zip format, post your email below. Hope you enjoyed reading this article.
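The handlers themselves appear only as screenshots in the original post, so here is a hedged sketch of what the btnSubmit_Click code for the salesperson pie chart plausibly looks like. The connection string name, the SqlDataAdapter usage and the SalesDb name are assumptions; only the chart-specific calls (ChartArea, Series, XValueMember/YValueMembers, Position) come from the article.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Web.UI.DataVisualization.Charting;

    public partial class SalesAnalysis : System.Web.UI.Page
    {
        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // Only the salesperson-share case is sketched; the article uses a
            // switch on ddlAnalysisType.SelectedValue to pick the query.
            string sql = "SELECT SUM(SaleAmount) amount, SalesPerson FROM SalesData GROUP BY SalesPerson";

            var table = new DataTable();
            string connectionString = System.Configuration.ConfigurationManager
                .ConnectionStrings["SalesDb"].ConnectionString; // assumed name
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(sql, connection))
            {
                adapter.Fill(table);
            }

            // Chart area; Position at 100% uses the full chart surface.
            ChartArea c = new ChartArea("MainArea");
            c.Position.Width = 100;
            c.Position.Height = 100;
            chrtAnalysis.ChartAreas.Add(c);

            // Pie series bound to the grouped data.
            Series s = new Series("SalesPersonShare");
            s.ChartType = SeriesChartType.Pie;
            s.ChartArea = "MainArea";
            s.XValueMember = "SalesPerson";
            s.YValueMembers = "amount";
            chrtAnalysis.Series.Add(s);

            chrtAnalysis.DataSource = table;
            chrtAnalysis.DataBind();

            chrtAnalysis.Visible = true;
            lblMessage.Visible = false;
        }
    }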

    Read the article

  • ORA-4030 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 399497.1 FAQ ORA-4030; Note 1088087.1: ORA-4030 Diagnostic Tools [Video]. Have you observed an ORA-4030 error reported in your alert log? ORA-4030 errors are raised when memory or resources are requested from the Operating System and the Operating System is unable to provide them. The arguments included with the ORA-4030 are often important for narrowing down the problem. For more specifics on the ORA-4030 error and scenarios that lead to this problem, see Note 399497.1 FAQ ORA-4030. Looking for the best way to diagnose? There are several available diagnostic tools (error tracing, 11g Diagnosability, OCM, process memory guides, RDA, OSW, diagnostic scripts) that collectively can prove powerful for identifying the cause of the ORA-4030. Error Tracing: The ORA-4030 error usually occurs on the client workstation, and for this reason a trace file and alert log entry may not have been generated on the server side. It may be necessary to add additional tracing events to get initial diagnostics on the problem. To set up tracing to trap the ORA-4030, on the server use the following in SQL*Plus: alter system set events '4030 trace name heapdump level 536870917;name errorstack level 3'; Once the error reoccurs with the event set, you can turn off tracing using the following command in SQL*Plus: alter system set events '4030 trace name context off; name context off'; NOTE: See more diagnostics information to collect in Note 399497.1. 11g Diagnosability: Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4030, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file may contain vital information about what led to the error condition. Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]. Oracle Configuration Manager (OCM): Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. Oracle Configuration Manager Quick Start Guide; Note 548815.1: My Oracle Support Configuration Management FAQ; Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager. General Process Memory Guides: An ORA-4030 indicates a limit has been reached with respect to the Oracle process private memory allocation. Each Operating System handles memory allocations with Oracle slightly differently. Solaris: Note 163763.1; Linux: Note 341782.1; IBM AIX: Notes 166491.1 and 123754.1; HP: Note 166490.1; Windows: Notes 225349.1, 373602.1, 231159.1, 269495.1, 762031.1; Generic: Note 169706.1. RDA: The RDA report will show more detailed information about the database and server configuration. Note 414966.1 RDA Documentation Index; to download RDA, refer to Note 314422.1 Remote Diagnostic Agent (RDA) 4 - Getting Started. OS Watcher (OSW): This tool is designed to gather Operating System side statistics to compare with the findings from the database. This is a key tool in cases where memory usage is higher than expected on the server while ORA-4030 errors are not currently occurring. 
Reference more details on setup and usage in Note 301137.1 OS Watcher User Guide. Diagnostic Scripts: Refer to Note 1088087.1: ORA-4030 Diagnostic Tools [Video]. Common Causes/Solutions: The ORA-4030 can occur for a variety of reasons. Some common causes are: * OS memory limit reached, such as physical memory and/or swap/virtual paging. For instance, IBM AIX can experience ORA-4030 issues related to swap scenarios; see Note 740603.1 10.2.0.4 not using large pages on AIX for more on that problem. Also reference Note 188149.1 for pointers on 10g and stack size issues. * OS limits reached (kernel or user shell limits) that limit overall, user level or process level memory. * OS limit on PGA memory size due to the SGA attach address. Reference: Note 1028623.6 SOLARIS How to Relocate the SGA. * Oracle internal limit on functionality like PL/SQL varrays or bulk collections; these ORA-4030 errors will include arguments like "pl/sql vc2", "pmucalm coll", "pmuccst: adt/re". See Coding Pointers for pointers on application design to get around these issues. * Application design causing limits to be reached. * Bug - space leaks, heap leaks. ***For reference to the content in this blog, refer to Note 1088267.1 Master Note for Diagnosing ORA-4030
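When working an ORA-4030, it also helps to see which sessions hold the most process (PGA) memory before the limit is hit. The query below is a generic sketch against the standard v$process and v$session views; it is not taken from the notes above, and column availability can vary slightly by database version.

    -- Top PGA consumers, largest allocation first (run as a DBA user).
    SELECT s.sid,
           s.username,
           s.program,
           ROUND(p.pga_used_mem  / 1024 / 1024, 1) AS pga_used_mb,
           ROUND(p.pga_alloc_mem / 1024 / 1024, 1) AS pga_alloc_mb,
           ROUND(p.pga_max_mem   / 1024 / 1024, 1) AS pga_max_mb
    FROM   v$process p
           JOIN v$session s ON s.paddr = p.addr
    ORDER  BY p.pga_alloc_mem DESC;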

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company: Trivadis. Some earlier feedback was summarized in this post.

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
    Verizon Wireless, the #1 mobile carrier in the United States, operates the nation’s largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 people with area headquarters across the United States. The Business Challenge: Seeing the stupendous rise of social media, video streaming and live broadcasting, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform where its employees could network socially, view and host microsites, stream live videos, blog and read the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms to support the huge number of applications in the company. However, open-source products weren’t yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless’ rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and MySQL for a few other internal websites as well. The MySQL Solution: Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and its internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for its 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to keep the service running. A number of key performance indicators (KPIs) for the infrastructure were identified and the operational framework redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement). 
The MySQL DBA team made a series of upgrades in MySQL: Step 1: moved from the MyISAM to the InnoDB storage engine in 2010. Step 2: upgraded to the latest MySQL 5.1.54 release in 2010. Step 3: upgraded from MySQL 5.1 to the latest GA release MySQL 5.5 in 2011, leveraging the MySQL Thread Pool, part of MySQL Enterprise Edition, to scale better. After making those changes, the team saw much better response times during high-concurrency use cases, and achieved an amazing performance improvement of 1400%! In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to the company's 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene attacked the East Coast of the United States and caused major loss of life and financial damage. During the hurricane, the team directed more traffic to its West Coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved. “MySQL is the key component of Verizon Wireless’ mission-critical employee portal application,” said Shivinder Singh, senior DBA at Verizon Wireless. “We achieved 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built.” To learn more about MySQL Enterprise Edition, get our Product Guide.
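As a rough illustration of the first step in that sequence, the statements below show how a MyISAM-to-InnoDB conversion is typically found and applied. The vzweb schema and node table names are made up for the example; the Thread Pool itself is enabled through the server configuration (the thread_pool plugin and thread_pool_size variable) rather than through SQL.

    -- Find tables in a schema (here a hypothetical 'vzweb' schema) still on MyISAM.
    SELECT table_name, engine
    FROM   information_schema.tables
    WHERE  table_schema = 'vzweb'
      AND  engine = 'MyISAM';

    -- Convert one table at a time; the ALTER rebuilds the table in InnoDB.
    ALTER TABLE vzweb.node ENGINE = InnoDB;

    -- Confirm the conversion took effect.
    SHOW TABLE STATUS IN vzweb LIKE 'node';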

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rains, wind and snow - have surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers under the English Channel in a tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the Spring Equinox, insurers may have thought the worst had passed. Then came along Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to service their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are further stretched and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront in setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which often can reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud. 
Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adapted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the workforce. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Meet our 2009 Oracle Graduates in South Africa

    - by anca.rosu
    Focusing on the broader Oracle community, Oracle South Africa initiated its first skills development programme in May 1988. Since its inception the programme has developed and improved and every year more graduates are taken on board. The Oracle Graduate Programme is made up of specific learning paths designed around customer, partner and Oracle specifications and is structured to meet the urgent skills requirements in the Oracle “economy”. The training programmes have a specific duration and incorporate both theoretical and practical application of Oracle product sets. It is aimed at creating: Meaningful employment for graduates; Learning opportunities for individuals within the organization so that career growth opportunities are exploited to the fullest; Capacity building for small enterprises which is aligned to Oracle’s Enterprise Development Programme Meet our five graduates who joined us in December 2008 and have spent over a year with us! Let’s get their initial feedback on the graduate programme and on their assignment to Jordan. Lector   On the Oracle Graduate Programme: “The Oracle Graduate Programme is an experience of a life time. I would not trade it for anything. It’s challenging and rewarding. I am proud and happy to be in an organization like Oracle” On the assignment in Jordan: "Friendly, welcoming people, world class instructors always willing to go the extra mile. What more can you ask for?"   Lungile On the Oracle Graduate Programme: “I joined Oracle as part of the graduate intake for pre-sales in order to develop my skills and knowledge. Working at Oracle has been an overwhelmingly positive experience as it has encouraged me to progress with my personal development. I am hugely grateful. It has been a great challenge and an awesome opportunity.” On the assignment to Jordan: “Going to Jordan was a great opportunity and the experience of a lifetime. The people were very welcoming and friendly. The culture was totally different from ours - the food, the clothes and the weather. It was an amazingly different experience to work from Sunday to Thursday with Friday and Saturday as the weekend.” Thabo On the Oracle Graduate Programme: “Life is an infinite learning path. I truly value growth. I believe for one to grow, one needs to be challenged to your full potential. The Oracle Graduate Programme offers real growth – and so much more.” On the assignment to Jordan: “I was amazed by the cultural differences. I now understood that to be part of the global community, I must embrace our similarities and understand our differences.”   Albeauty On the Oracle Graduate Programme: “Responsibility, dedication, focus and taking initiative … these are the key points I learned from Oracle. It is such an honour to finally be part of the Oracle family. The graduate programme itself was a great experience as I managed to learn how Oracle operates – it has been the highlight of my year. I believe that my hard work will assist in the growth of the company.” On the Jordan assignment: “A memory worth embracing. Going to Jordan was a great opportunity as I learned a lot with respect to integration between different cultures and getting to adapt to all things different. I, along with almost every other graduate, discovered that Oracle is far more than a database company. 
Now I know there is far more to the ‘Big Red’ name.” Emmanuel On the Oracle Graduate Programme: “The programme gave me invaluable exposure to the ICT sector and also provided an opportunity to travel, network and exchange ideas with others. The formal training helped me to improve my presentation skills and gave me a better understanding of business etiquette and communication.” On the assignment to Jordan: “It was my first trip abroad. It was a great chance to get to know each other. I had the opportunity to share ideas, share personal stuff as a team. We met experts who gave us superb training in Oracle Technologies. It was great.”   If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com.   Technorati Tags: Oracle community,South Africa,Graduate Programme,Jordan,Technologies

    Read the article

  • Event Logging in LINQ C# .NET

    The first thing you'll want to do before using this code is to create a table in your database called TableHistory:

    CREATE TABLE [dbo].[TableHistory] (
        [TableHistoryID] [int] IDENTITY NOT NULL ,
        [TableName] [varchar] (50) NOT NULL ,
        [Key1] [varchar] (50) NOT NULL ,
        [Key2] [varchar] (50) NULL ,
        [Key3] [varchar] (50) NULL ,
        [Key4] [varchar] (50) NULL ,
        [Key5] [varchar] (50) NULL ,
        [Key6] [varchar] (50) NULL ,
        [ActionType] [varchar] (50) NULL ,
        [Property] [varchar] (50) NULL ,
        [OldValue] [varchar] (8000) NULL ,
        [NewValue] [varchar] (8000) NULL ,
        [ActionUserName] [varchar] (50) NOT NULL ,
        [ActionDateTime] [datetime] NOT NULL
    )

    Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date:

    public partial class DboDataContext
    {
        [Function(Name = "GetDate", IsComposable = true)]
        public DateTime GetSystemDate()
        {
            MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo;
            return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue;
        }
    }

    private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();

    public static T CloneObjectWithIL<T>(T myObject)
    {
        Delegate myExec = null;
        if (!_cachedIL.TryGetValue(typeof(T), out myExec))
        {
            // Create ILGenerator
            DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true);
            ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { });
            ILGenerator generator = dymMethod.GetILGenerator();
            LocalBuilder lbf = generator.DeclareLocal(typeof(T));
            //lbf.SetLocalSymInfo("_temp");
            generator.Emit(OpCodes.Newobj, cInfo);
            generator.Emit(OpCodes.Stloc_0);
            foreach (FieldInfo field in myObject.GetType().GetFields(
                System.Reflection.BindingFlags.Instance |
                System.Reflection.BindingFlags.Public |
                System.Reflection.BindingFlags.NonPublic))
            {
                // Load the new object on the eval stack... (currently 1 item on eval stack)
                generator.Emit(OpCodes.Ldloc_0);
                // Load initial object (parameter) (currently 2 items on eval stack)
                generator.Emit(OpCodes.Ldarg_0);
                // Replace value by field value (still currently 2 items on eval stack)
                generator.Emit(OpCodes.Ldfld, field);
                // Store the value of the top on the eval stack into
                // the object underneath that value on the value stack.
                // (0 items on eval stack)
                generator.Emit(OpCodes.Stfld, field);
            }
            // Load new constructed obj on eval stack -> 1 item on stack
            generator.Emit(OpCodes.Ldloc_0);
            // Return constructed object.
            // --> 0 items on stack
            generator.Emit(OpCodes.Ret);
            myExec = dymMethod.CreateDelegate(typeof(Func<T, T>));
            _cachedIL.Add(typeof(T), myExec);
        }
        return ((Func<T, T>)myExec)(myObject);
    }

I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them.

Explanation of the History Class

The History class records changes by creating a TableHistory record, inserting the values of the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), the old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change.

Let's examine what happens when a call is made to the RecordLinqInsert method:

    public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj)
    {
        TableHistory hist = NewHistoryRecord(obj);
        hist.ActionType = "INSERT";
        hist.ActionUserName = user.Name;
        hist.ActionDateTime = dbo.GetSystemDate();
        dbo.TableHistories.InsertOnSubmit(hist);
    }

    private static TableHistory NewHistoryRecord(object obj)
    {
        TableHistory hist = new TableHistory();
        Type type = obj.GetType();
        PropertyInfo[] keys;
        if (historyRecordExceptions.ContainsKey(type))
        {
            keys = historyRecordExceptions[type].ToArray();
        }
        else
        {
            keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray();
        }
        if (keys.Length > KeyMax)
            throw new HistoryException("object has more than " + KeyMax.ToString() + " keys.");
        for (int i = 1; i <= keys.Length; i++)
        {
            typeof(TableHistory)
                .GetProperty("Key" + i.ToString())
                .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null);
        }
        hist.TableName = type.Name;
        return hist;
    }

    protected static bool AttrIsPrimaryKey(PropertyInfo pi)
    {
        var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true)
                    where ((ColumnAttribute)attr).IsPrimaryKey
                    select attr;
        if (attrs != null && attrs.Count() > 0)
            return true;
        else
            return false;
    }

RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object. The code would be called in an application like the example below.
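Here is a minimal sketch of what such a call might look like; the Customer entity and its properties are invented for the example, and the assumption that RecordLinqInsert lives on a static History class (from History.cs) is mine rather than something spelled out above.

    using System.Security.Principal;

    // Assumed: Customer is a LINQ to SQL entity mapped in DboDataContext, and
    // RecordLinqInsert is exposed by the History class added from History.cs.
    using (DboDataContext dbo = new DboDataContext())
    {
        Customer customer = new Customer
        {
            CustomerID = "ALFKI",
            CompanyName = "Alfreds Futterkiste"
        };
        dbo.Customers.InsertOnSubmit(customer);

        // Log the insert; the TableHistory row is written in the same
        // SubmitChanges call as the new Customer row.
        History.RecordLinqInsert(dbo, WindowsIdentity.GetCurrent(), customer);

        dbo.SubmitChanges();
    }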

    Read the article

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!     Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work. The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :) I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time, I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc. I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, Jazz Guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we're tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed in a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking was the brainchild of my buddy Michael Washington. 
What I blog/shoutout: After some time doing posts, I decided there were things that probably have no need to be searchable, but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place. Notes to bloggers: Remember I said spinning through the Big List-o-Blogs™ is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand-recognition. Far be it from me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get some recognition, you are going to want your name to be available somewhere. I can think right off the top of my head of a couple of good blogs where I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blog-o-sphere, being able to be name-recognized is as important as getting your brand out there. Kick my tires: Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting. Thanks! Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    Fusion HCM SaaS – Integration. A typical implementation pattern we’re seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below: Here, you can see that all the “downstream integrations” from the On-Premise Core HR are unaffected, because the master for employee data is still your On-Premise Core HR system – all updates and new hires are made here (although they may be fed in from Taleo to start with). As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract) and load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data, as shown below. If you are not coming from EBS HR and there are license implications, you may want to consider: creating an On-Premise warehouse for extracting data from Fusion Apps; or leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps. Integration Tools. File Based Loader: This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee and related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); this will enable a somewhat technical resource to create a data model SQL query. Links to Documentation & Training: reference documentation for File Based Loader on docs.oracle.com; FBL 1.1 MOS Doc Id 1533860.1; sample demo data files for File Based Loader; HCM SaaS Integrations ppt and recording. EBS APIs - loading information into EBS Full or Shared HCM: this could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications and EBS Financials). Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] is a guide to the EBS R12 Integration Repository accessible from an EBS instance; see also EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]. Fusion HCM Extract: Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. 
Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms. Additional documentation (you'll need an oracle.com account to access): HCM Extracts User Guides (Rel 4 & 5); HCM Extract Entity/Attributes (Rel 5); HCM Extract User Guide (Rel 5). If you don’t have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (click on File --> Download on the next screen). View Training Recordings on Fusion HCM Extract. Benefits Extract: To set up the benefits extract, refer to the following guide. Page 2-15 of the User Documentation describes how to use the benefits extract. Benefit enrollments can also be uploaded into Fusion Benefits; instructions are here along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract. You can start with the data model query underlying the benefits extract. Payroll Interface: Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: Payroll interface guide; sample file; DBIs used for the payroll interface. Usage patterns always accessible @ http://www.finapps.com

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control have been present in Hadoop since 0.20, a lot of people haven't discovered or utilized them in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: organizations are still determining what their dataflow and analysis workloads will comprise; small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options; and the default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: the FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis; the CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis; or writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler, which is an option but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>
    <property>
      <name>mapred.fairscheduler.allocation.file</name>
      <value>/path/to/allocations.xml</value>
    </property>
    <property>
      <name>mapred.fairscheduler.poolnameproperty</name>
      <value>pool.name</value>
    </property>
    <property>
      <name>pool.name</name>
      <value>${user.name}</value>
    </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

    <?xml version="1.0"?>
    <allocations>
      <pool name="dan">
        <minMaps>5</minMaps>
        <minReduces>5</minReduces>
        <maxMaps>25</maxMaps>
        <maxReduces>25</maxReduces>
        <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
      </pool>
      <user name="dan">
        <maxRunningJobs>6</maxRunningJobs>
      </user>
      <userMaxJobsDefault>3</userMaxJobsDefault>
      <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
    </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run hive or pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. 
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
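If you want to steer a particular job into a specific pool rather than relying on the ${user.name} default, you can set the pool.name property at submission time. Below is a small sketch using the old-style mapred API from this era of Hadoop; the PooledWordCount class and the omitted mapper/reducer are placeholders for your own job, not code from this post.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class PooledWordCount {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(PooledWordCount.class);
            conf.setJobName("pooled-wordcount");

            // Matches mapred.fairscheduler.poolnameproperty in mapred-site.xml,
            // so this job lands in the "dan" pool defined in allocations.xml.
            conf.set("pool.name", "dan");

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            // Mapper/reducer classes omitted for brevity; set your own here.

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);
        }
    }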

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that. For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.

    Application Input: The stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache holding all credit card transactions. The cache is configured in the EPN as shown below:

    <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
    <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                 key-properties="cardID, transactionTime"
                 caching-system="CohCacheSystem">
    </wlevs:cache>

    Application Output: The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.

    <wlevs:event-type type-name="FraudWarningEvent">
      <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
      </wlevs:properties>
    </wlevs:event-type>

    Cache Data Aggregation using a Java Cartridge: In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

    <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
      <property name="cache" ref="CCTransactionsCache"/>
    </bean>

    This is used by the Java class to set a static property:

    public void setCache(Map cache)
    {
        s_cache = (NamedCache) cache;
    }

    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer to the Coherence Developer’s Guide. 
public static CardHistoryData execute(String cardID)
{
    …
    Filter filter = QueryHelper.createFilter(
        "cardID = :cardID and transactionTime >= :transactionTime", map);
    CardHistoryData history = new CardHistoryData();
    Double sum = (Double) s_cache.aggregate(filter,
        new DoubleSum("getTransactionAmount"));
    history.setTotalAmount(sum);
    …
    return history;
}
The Java cartridge method is used from CQL as seen below:
select cardID, transactionTime, transactionAmount,
       CCTransactionsAggregator.execute(cardID) as cardHistory
from inputChannel
where transactionAmount > 1000
This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
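Since the snippet above elides the remaining calculations, here is a hedged sketch of what the rest of the cartridge class could look like using Coherence's built-in aggregators. It assumes the sample's CardHistoryData bean; the setMaxAmount and setTransactionCount setters and the 30-day window arithmetic are illustrative assumptions, not code taken from the sample:
import java.util.HashMap;
import java.util.Map;

import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.QueryHelper;
import com.tangosol.util.aggregator.Count;
import com.tangosol.util.aggregator.DoubleMax;
import com.tangosol.util.aggregator.DoubleSum;

public class CCTransactionsAggregator
{
    private static NamedCache s_cache;

    // Injected from the spring context file, as configured above.
    public void setCache(Map cache)
    {
        s_cache = (NamedCache) cache;
    }

    public static CardHistoryData execute(String cardID)
    {
        // Assumes transactionTime is stored as epoch milliseconds.
        long thirtyDaysAgo = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;

        Map bindings = new HashMap();
        bindings.put("cardID", cardID);
        bindings.put("transactionTime", Long.valueOf(thirtyDaysAgo));

        // Restrict the aggregation to this card's entries from the last 30 days.
        Filter filter = QueryHelper.createFilter(
            "cardID = :cardID and transactionTime >= :transactionTime", bindings);

        CardHistoryData history = new CardHistoryData();
        history.setTotalAmount((Double) s_cache.aggregate(
            filter, new DoubleSum("getTransactionAmount")));
        // Hypothetical setters; the real CardHistoryData bean may name these differently.
        history.setMaxAmount((Double) s_cache.aggregate(
            filter, new DoubleMax("getTransactionAmount")));
        history.setTransactionCount((Integer) s_cache.aggregate(
            filter, new Count()));
        return history;
    }
}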

    Read the article

  • Windows 7 doesn't boot after Ubuntu install

    - by Omu
    I had Windows 7 installed on my PC, then I installed Ubuntu 10.10. During the installation process I chose to manually set my partitions: a 10GB partition for the Ubuntu root, a 1GB partition for swap, and for the boot partition I chose the one used by Windows 7. Now I can boot Ubuntu and I have the Windows 7 option in the boot list, but when I choose Windows 7 it shows a black screen for a second and returns to the boot menu. Boot Info Script 0.55 dated February 15th, 2010 ============================= Boot Info Summary: ============================== => Windows is installed in the MBR of /dev/sda sda1: _________________________________________________________________________ File system: ntfs Boot sector type: Grub 2 Boot sector info: Grub 2 is installed in the boot sector of sda1 and looks at sector 304908237 of the same hard drive for core.img, but core.img can not be found at this location. No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files/dirs: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda2: _________________________________________________________________________ File system: ntfs Boot sector type: Windows XP Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files/dirs: sda3: _________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: _________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 10.10 Boot files/dirs: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda4: _________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: =========================== Drive/Partition Info: ============================= Drive: sda ___________________ _____________________________________________________ Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start End Size Id System /dev/sda1 * 63 62,894,474 62,894,412 7 HPFS/NTFS /dev/sda2 62,894,478 291,579,749 228,685,272 7 HPFS/NTFS /dev/sda3 291,579,811 309,157,937 17,578,127 5 Extended /dev/sda5 291,579,813 309,157,937 17,578,125 83 Linux /dev/sda4 309,159,936 312,580,095 3,420,160 82 Linux swap / Solaris blkid -c /dev/null: ____________________________________________________________ Device UUID TYPE LABEL /dev/sda1 1266BB2766BB0A8D ntfs /dev/sda2 BEDBF1147C76F703 ntfs DATA /dev/sda3: PTTYPE="dos" /dev/sda4 dd38226d-c7c9-4ae5-a726-6d18d34a22e4 swap /dev/sda5 e1dafd1c-f855-406b-8f9a-f9d527c70255 ext4 /dev/sda: PTTYPE="dos" ============================ "mount | grep ^/dev" output: =========================== Device Mount_Point Type Options /dev/sda5 / ext4 (rw,errors=remount-ro,commit=0) =========================== sda5/boot/grub/grub.cfg: =========================== # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then 
saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga } insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=640x480 load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 set locale_dir=($root)/boot/grub/locale set lang=en insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.35-22-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 linux /boot/vmlinuz-2.6.35-22-generic root=UUID=e1dafd1c-f855-406b-8f9a-f9d527c70255 ro quiet splash initrd /boot/initrd.img-2.6.35-22-generic } menuentry 'Ubuntu, with Linux 2.6.35-22-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 echo 'Loading Linux 2.6.35-22-generic ...' linux /boot/vmlinuz-2.6.35-22-generic root=UUID=e1dafd1c-f855-406b-8f9a-f9d527c70255 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.35-22-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set e1dafd1c-f855-406b-8f9a-f9d527c70255 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda1)" { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set 1266bb2766bb0a8d chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sda5/etc/fstab: =============================== # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # proc /proc proc nodev,noexec,nosuid 0 0 /dev/sda5 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda4 during installation UUID=dd38226d-c7c9-4ae5-a726-6d18d34a22e4 none swap sw 0 0 =================== sda5: Location of files loaded by Grub: =================== 156.1GB: boot/grub/core.img 156.3GB: boot/grub/grub.cfg 149.9GB: boot/initrd.img-2.6.35-22-generic 156.3GB: boot/vmlinuz-2.6.35-22-generic 149.9GB: initrd.img 156.3GB: vmlinuz

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
    This post shows some of the implementation techniques for adding token and claims based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post.
Disclaimer
The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements, the code has gone through several iterations to a point where we think it is ready to handle most situations.
Goals and requirements
The framework should be able to handle typical scenarios like username/password based authentication as well as token based authentication; it should allow adding new supported token types; it should work with the WCF web programming model, either self-hosted or IIS hosted; and service code should be able to rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity.
Implementation overview
In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request and can set Thread.CurrentPrincipal. The job of the SAM is simple: check the Authorization header of the incoming HTTP request; check if a "registered" token (more on that later) is present; if yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal; if no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access – so the request ends here with a 401 (more on that later). To wire up the custom authorization manager you need a custom service host – which in turn needs a custom service host factory. The full object model looks like this:
Token handling
A nice piece of existing WIF infrastructure is security token handlers. Their job is to deserialize a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security based services is that WIF passes the name/namespace of the incoming token to WIF's security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instance. For HTTP based services we can do something very similar. The scheme on the Authorization header gives the service a hint how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme, and we are (almost) done. WIF already includes token handlers for a variety of tokens like username/password or SAML 1.1/2.0. The accompanying sample has an implementation for a Simple Web Token (SWT) token handler, and as soon as JSON Web Tokens are ready, simply adding a corresponding token handler will add support for that token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would, for example,
look like this (in global.asax):
var manager = new WebSecurityTokenHandlerCollectionManager();
manager.AddBasicAuthenticationHandler((username, password) =>
    Membership.ValidateUser(username, password));
Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in an issuer, audience and signature verification key:
manager.AddSimpleWebTokenHandler(
    "Bearer",
    "http://identityserver.thinktecture.com/trust/initial",
    "https://roadie/webservicesecurity/rest/",
    "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A=");
In some situations, SAML tokens may be used as well. The following configures SAML support for a token coming from ADFS2:
var registry = new ConfigurationBasedIssuerNameRegistry();
registry.AddTrustedIssuer(
    "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db",
    "ADFS");

var adfsConfig = new SecurityTokenHandlerConfiguration();
adfsConfig.AudienceRestriction.AllowedAudienceUris.Add(
    new Uri("https://roadie/webservicesecurity/rest/"));
adfsConfig.IssuerNameRegistry = registry;
adfsConfig.CertificateValidator = X509CertificateValidator.None;

// token decryption (read from config)
adfsConfig.ServiceTokenResolver =
    IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver();

manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig);
Transformation
The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well. And even better, it can also be shared with, e.g., a "surrounding" web application.
Error handling
A WCF error handler takes care of turning "access denied" faults into 401 status codes, and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization as well as the source code download.
(Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
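To make the authorization manager's job from the overview above concrete, here is a minimal, hypothetical sketch of the control flow only. It is not the sample's actual class: the handler lookup is reduced to a placeholder method, and plain GenericPrincipal/GenericIdentity stand in for the WIF claims principal the real implementation would build.
using System.Net;
using System.Security.Principal;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Threading;

public class TokenAuthorizationManager : ServiceAuthorizationManager
{
    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // 1. Check the Authorization header of the incoming HTTP request.
        string header = WebOperationContext.Current
            .IncomingRequest.Headers[HttpRequestHeader.Authorization];

        // 2. No registered token present: anonymous principal, denied by default (401).
        if (string.IsNullOrEmpty(header) || !header.Contains(" "))
        {
            Thread.CurrentPrincipal =
                new GenericPrincipal(new GenericIdentity(string.Empty), new string[0]);
            return false;
        }

        // 3. The scheme tells us which registered token handler should validate the token.
        string scheme = header.Substring(0, header.IndexOf(' '));
        string token = header.Substring(header.IndexOf(' ') + 1);

        Thread.CurrentPrincipal = ValidateAndCreatePrincipal(scheme, token);
        return Thread.CurrentPrincipal.Identity.IsAuthenticated;
    }

    // Placeholder for the WebSecurityTokenHandlerCollectionManager lookup, token
    // validation and claims transformation described in the post.
    private IPrincipal ValidateAndCreatePrincipal(string scheme, string token)
    {
        return new GenericPrincipal(new GenericIdentity("client", scheme), new string[0]);
    }
}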

    Read the article
