Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Technical workshop with the gurus: Architecting Oracle Database-As-A-Service (DBaaS)

    - by Javier Puerta
    Dear partner, We are pleased to invite you to a 2-day workshop dedicated to EMEA partners on "Architecting Oracle Private Database Cloud & Delivering Database-As-A-Service (DBaaS)". This exclusive workshop will be delivered by Product Management and Product Development from Oracle HQ and focuses on the main theme CIOs have been tackling over the last decade: consolidation to private cloud. For many customers, the journey to consolidation has led to DBaaS cloud deployments that significantly reduce costs and offer agile IT services. With the recent launch of Oracle Database 12c, the game really has changed in terms of what Oracle offers and how database clouds can be deployed.

    Who should attend: enterprise architects, infrastructure architects, and DB architects from system integrators and large independent software vendors. Take this opportunity to learn from the gurus how you can help your customers maximize their cloud consolidation strategies. The workshop's main focus is service delivery, which includes standardization and consolidation, and how you would help your customers transform their current IT infrastructure to a service-delivery model. It will discuss best practices and review examples of customers that have successfully implemented a database cloud.

    The agenda is split across the two days:

    Day 1: Overview & Planning; Database Cloud - Demos; Customer Case Studies; Database 12c
    Day 2: Database Cloud - Design; Database Cloud - Implementation; EM Cloud Control; DBaaS on Engineered Systems; Questions and Answers

    Attendance is free of charge for qualified Oracle partners. Register now for one of the sessions below:

    5 & 6 November 2013 - United Kingdom, Manchester
    7 & 8 November 2013 - Germany, Munich
    11 & 12 November 2013 - Netherlands, Amsterdam
    14 & 15 November 2013 - Turkey, Istanbul
    18 & 19 November 2013 - Austria, Vienna

    Looking forward to seeing you!

    Javier Puerta, Director, Core Technology Partner Programs EMEA
    Prashant Barot, Director, Core Technology

    Read the article

  • Post MIX10 Decompression

    - by Dave Campbell
    With a big dose of reality, I walked into this place this morning and found out "yeah, I really do write .NET web apps and MS Access for a living" :( ... but it pays the bills and I've gotten *way* used to eating 3 times a day :) MIX10 was great, although the buzz didn't seem as big as MIX09, and I'm not sure why. It also seemed like a different crowd, and other folks I talked to agreed with that. Of course now I can outwardly admit that "Windows Phone 7 Series" is programmed with Silverlight ... how cool is that? I've been biting my tongue about that info for over a month! I cloistered myself in Ballroom A for the week, not counting the keynotes. That's where the phone sessions were located. I tried to collect the full set, but ended up bailing on the last one because it was ending at the time that MIX10 was ending, and I hadn't spent a whole lot of time in 'The Commons'. I met a bunch of folks I've blogged about, or exchanged email with, and that's always fun. I renewed associations with folks I only see once or twice a year ... the list is way too long, and I don't want to mention some and leave off others. I did have an opportunity to meet Charles Petzold ... wow, that was interesting ... I got into Windows development through Charles' Programming Windows 3.1 book 'back in the day' ... couldn't find anyone at Honeywell who wanted to join my journey, so it was just me and 'Chuck' :) ... read every word of that book more than once ... all marked up, tags sticking out of it. And now he's writing a WP7 book ... gotta get it: Free ebook: Programming Windows Phone 7 Series (DRAFT Preview). I went through my Big List-o-Blogs™ last night and it took over 2 hours because of all the new content since MIX10. I've got 90 posts tagged as of 9PM on 3/21. If everybody stopped right now, it would take me 9 days to push what I have now, so you'll have to be patient! I had another event on Thursday that was *extremely* tiring, so I ended up staying over another night. I drove back into the strip on Friday morning to try to find a non-cheesy souvenir for my wife, and didn't find much. Then I went to Blueberry Hill restaurant for 3 eggs, 3 strips of bacon, and 3 awesome potato pancakes. Check them out if you have time! And then hit the road. In case anyone is wondering, the 2-1/2 hour drive I took across Hoover Dam on Sunday afternoon only took 30 minutes on Friday afternoon ... that was a more normal trip! I thoroughly enjoyed the time I spent with everyone. Thanks to John Papa and his crew for the great Insider's party on Monday night ... the Blues Brothers were a fun surprise and they did a good job! And the swag was great ... thanks to all the contributors for a fun evening at their expense! All I can say is stay tuned, go to live.visitmix.com/videos and watch everything, get the phone tools, start working ... everything's different and everything's fun ... jump in, it's all Silverlight! Stay in the 'Light!

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance. Even under normal business conditions, there is great demand for best-practice sharing within the industry, as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for its business, and as soon as they get it right, bancassurers need to adapt the mix to keep up with ever-changing regulations, competition and economic conditions. To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins - especially in light of the recent economic crisis, where margins for traditional banking products are low and competition high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long-term relationships - irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure. However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure from the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and the best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working - not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact of Solvency II on capital structuring were debated openly.
    Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference. The cross-selling opportunity - which will ultimately determine the success or failure of any bancassurance model - can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the business's ability to implement new strategies to maintain competitiveness in turbulent times. My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure." Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • help: cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work.

    Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that has contained my 64-bit v10.04 installation. To perform the attempted install of v12.04, I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation).

    I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. I also tried to install on two older 250GB seagate drives in the same computer. During every attempt I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity).

    Installation takes about 30 minutes on the 250GB drives, and about 60 minutes on the 3TB drive - not sure why. When I install on the 250GB drives, the install process finishes and the computer reboots (after the install DVD is removed), but I get a grub error 15.

    It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:

        8GB   swap
        8GB   /boot ext4
        3TB   /     ext4

    But I've also tried the following, just in case it matters:

        100MB boot efi
        8GB   swap
        8GB   /boot ext4
        3TB   /     ext4

    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior.

    Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic).

    After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:

        Loading Operating System ...
        Boot from CD/DVD :
        Boot from CD/DVD : error: unknown filesystem
        grub rescue

    Note: I have two DVD burners in the system, hence the duplicate line above. The same install and reboot on the 250GB drives generates "grub error 15". Sigh. Any ideas?

        motherboard == gigabyte 990FXA-UD7
        CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
        RAM == 8GB of DDR3 in 2 sticks (matched pair)
        HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04)
        HDD == seagate 1TB SATA3 @ 7200 rpm (current install 64-bit v10.04)
        GPU == nvidia GTX-285
        == no overclocking or other funky business
        USB == external seagate 2TB HDD for making backups
        DVD == one bluray burner (SATA)
        DVD == one DVD burner (SATA)

    The current ubuntu 64-bit v10.04 system boots and runs fine on the seagate 1TB.

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server database by storing perfectly normal data values in a table, but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data, the values were unsupported, and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible!

    This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate's SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, Microsoft just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data to SQL Server 2005, Microsoft added a DATA_PURITY clause to the latter's DBCC CHECKDB (and CHECKTABLE) commands. When enabled, this will seek out the corrupt data, but won't fix it. You have to do that yourself, in what can often be a slow, painful manual process.

    Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. It didn't happen; SQL Server 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types".

    However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
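    As an aside, the NULL-substitution workaround is simple to apply at the data-access layer. Below is a minimal C# sketch of that idea, assuming ADO.NET's SqlClient; the table and column names are hypothetical, and the first line of Main also demonstrates the IEEE 754 rule that NaN compares unequal even to itself:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class FloatSanitizer
        {
            // SQL Server's FLOAT rejects NaN and +/- infinity, so map the
            // IEEE 754 special values to DBNull before they reach the database.
            static object ToDbValue(double d)
            {
                return (double.IsNaN(d) || double.IsInfinity(d))
                    ? (object)DBNull.Value
                    : d;
            }

            static void Main()
            {
                // IEEE 754: NaN is unequal to everything, including itself.
                Console.WriteLine(double.NaN == double.NaN);   // prints False

                // Hypothetical insert; "Readings" and "Ratio" are illustrative names.
                var cmd = new SqlCommand("INSERT INTO Readings (Ratio) VALUES (@r)");
                cmd.Parameters.Add("@r", SqlDbType.Float).Value = ToDbValue(0.0 / 0.0);
            }
        }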

    Read the article

  • What I Expect From Myself This Year

    - by Lee Brandt
    I am making it a point not to call them resolutions, because the word has become an institution and is beginning to have no meaning. That's why I end up not keeping my resolutions, I think. So in the spirit of holding myself to my own commitments, I will make a plan and some realistic goals.

    1.) Lose weight. Everyone has this on their list, but I am going to be conservative and specific. I currently weigh 393lbs. (yeah, I know). So I plan to lose 10lbs per month; that's 1lb. every three days, which shouldn't be difficult if I stick to my diet and exercise plan. How do I do this?
        - Diet: vegetarian. Since I already know I have high blood pressure and borderline high cholesterol, a meat-free diet is in order. I was vegan for a little over 2 years in 2006-2008; I think I can handle vegetarian.
        - Exercise: at least 3 times a week (preferably every day) for 30 minutes. It has to be something that gets my heart rate up or burns in my muscles. I can walk for cardio to start, plus mild calisthenics (girly push-ups, crunches, etc.).
        - Move: I spend all my time behind the computer. I have recently started to use a slight variation of the Pomodoro Technique (my Pomodoros are 50 minutes instead of 25). During my 10 minutes every hour to answer emails, chats, etc., I will take a few minutes to stretch.

    2.) Get my wife pregnant. We've been talking about it for years. Now that she is done with graduate school and I have a great job, now's the time. We'll be the oldest parents in the PTA most likely, but I don't care.

    3.) Blog more. Another favorite among bloggers, but I do have about six drafts for blog posts started. The topics are there; all I need to do is flesh out the posts. This can be the first hour of any computer time I have after work. As soon as I am done exercising, shower and post.

    4.) Speak less. Most people want to speak more. I want to concentrate on the places that I enjoy and that can really use the speakers (like local code camps), rather than trying to be some national speaker. I love speaking at conferences, but I need to spend some more time at home if we're going to get pregnant.

    5.) Read more. I got a Kindle for Christmas and I am loving it so far. I have almost finished Treasure Island and am getting ready to pick my next book. I will probably read a lot of classics, for 2 reasons: (1) they teach deep lessons and (2) most are free for the Kindle.

    6.) Find my religion. I was raised Southern Baptist, but I want to find my own way. I've been wanting to go to the local Unitarian church, so I will make a point to go before the end of March. I also want to add a few religious books to my reading list. My boss bought me a copy of Lee Strobel's The Case for Christ: A Journalist's Personal Investigation of the Evidence for Jesus, and I have a copy of Bruce Feiler's Abraham: A Journey to the Heart of Three Faiths. I will start there.

    Seems like a lot now that I spell it out like this. But these are only starters. I am forty years old. I cannot keep living like I am twenty anymore. So here we go, 2011.

    Read the article

  • A Closer Look at the HiddenInput Attribute in MVC 2

    - by Steve Michelotti
    MVC 2 includes an attribute for model metadata called the HiddenInput attribute. The typical usage of the attribute looks like this (line #3 below):

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = false)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    So if you displayed your PersonViewModel with Html.EditorForModel() or Html.EditorFor(m => m.Id), the framework would detect the [HiddenInput] attribute metadata and produce HTML like this:

        1: <input id="Id" name="Id" type="hidden" value="21" />

    This is pretty straightforward and gives you an elegant way to keep the technical key for your model (e.g., a primary key from the database) in the HTML, so that everything is wired up correctly when the form is posted to the server while, of course, not displaying the value visually to the end user. However, when I was giving a recent presentation, a member of the audience asked me (quite reasonably), "When would you ever set DisplayValue equal to true when using a HiddenInput?" To which I responded, "Well, it's an edge case. There are sometimes when…er…um…people might want to…um…display this value to the user." It was quickly apparent to me (and I'm sure everyone else in the room) what a terrible answer this was. I realized I needed to have a much better answer here.

    First off, let's look at what is produced if we change our view model to use "true" (which is equivalent to specifying [HiddenInput], since "true" is the default) on line #3:

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = true)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    This will produce the following HTML if rendered from Html.EditorForModel() in your view:

        1: <div class="editor-label">
        2:     <label for="Id">Id</label>
        3: </div>
        4: <div class="editor-field">
        5:     21<input id="Id" name="Id" type="hidden" value="21" />
        6:     <span class="field-validation-valid" id="Id_validationMessage"></span>
        7: </div>

    The key is line #5. We get the text of "21" (which happened to be my DB Id in this instance) and also a hidden input element (again with "21"). So the question is, why would one want to use this? The best answer I've found is contained in this MVC 2 whitepaper:

        When a view lets users edit the ID of an object and it is necessary to display the value as well as to provide a hidden input element that contains the old ID so that it can be passed back to the controller.

    Well, that actually makes sense. Yes, it seems like something that would happen *rarely* but, for those instances, it would enable them easily. It's effectively equivalent to doing this in your view:

        1: <%: Html.LabelFor(m => m.Id) %>
        2: <%: Model.Id %>
        3: <%: Html.HiddenFor(m => m.Id) %>

    But it allows you to specify it in metadata on your view model (and thereby take advantage of templated helpers like Html.EditorForModel() and Html.EditorFor()) rather than having to explicitly specify everything in your view.
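    For completeness, here is a minimal, hypothetical controller sketch (the PersonController and its Edit actions are my own illustration, not from the post) showing why that hidden input matters: on post-back, the default model binder repopulates Id from the hidden field, so the key survives the round trip even though the user never typed it:

        using System.Web.Mvc;

        public class PersonController : Controller
        {
            // GET: render the editor; [HiddenInput] on Id controls what the
            // templated helpers emit for it.
            public ActionResult Edit()
            {
                return View(new PersonViewModel { Id = 21 });
            }

            // POST: the model binder reads Id back out of the hidden input,
            // so the original key arrives alongside the user-edited fields.
            [HttpPost]
            public ActionResult Edit(PersonViewModel model)
            {
                // model.Id == 21 here, even though it was never displayed as a field.
                return RedirectToAction("Index");
            }
        }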

    Read the article

  • Release Management as Orchestra

    - by ericajanine
    I read an excellent, concise article (http://www.buildmeister.com/articles/software_release_management_best_practices) on the basics of release management practices. In the article, it states "Release Management is often likened to the conductor of an orchestra, with the individual changes to be implemented the various instruments within it." I played in music ensembles for years, so this example is especially close to my heart. I learned most of my discipline from hours and hours of practice at the hand of a very skilled conductor and leader. I also learned that the true magic in symphonic performance is one where everyone involved is focused on one sound, one goal. In turn, that solid focus creates a sound and experience bigger than mechanics alone could accomplish. In symphony, a conductor's true purpose is to make you, a performer, better so the overall sound and end product is better. The big picture (the performance of the composition) is the end-game, and all musicians in the orchestra know without question that their part makes up an important but incomplete piece of that performance. A good conductor works with each section (e.g. group) to ensure their individual pieces are solid. Let's restate: the conductor leads and is responsible for ensuring those pieces are solid. While the performers themselves are doing the work, the conductor is the final authority on when the pieces are ready or not. If not, the conductor initiates the efforts to get them ready or makes the decision to scrap their parts altogether for the sake of the overall performance. Let it sink in, because it's clear: it is not the performer's call whether they play their part as agreed; it's the conductor's call to allow it. In comparison, if a software release manager is a conductor, the only way for that manager to be effective is to drive the overarching process and execution of the individual pieces of the software development lifecycle. It does not mean the release manager performs each and every piece; it means the release manager has oversight and influence, because the end-game is a successful software enhancement in a usable environment. It means the release manager, not the developer or development manager, has the final call on whether something goes into a software release. Of course, this is not a process of autocracy or dictation of absolute rule; it's a cooperative effort. But the release manager must have the final authority to decide if something is ready to be added to the bigger piece, the overall symphony of software changes being considered for package and release. It also goes without saying that a release manager, like a conductor, must have full autonomy and isolation from other software groups. A conductor is the one on the podium waving a little stick at each section and cueing them for their parts, not yelling from the back of the room while also playing a tuba and taking direction from the horn section. I have personally seen release managers relegated to being considered little more than coordinators, red-tapers there to "satisfy" the demands of an audit group, without anyone bothering to actually respect all that a release manager gives a group willing to employ them fully. In this dysfunctional scenario, development managers, project managers, business users, and other stakeholders have been given nearly full clearance to demand and push their agendas forward, causing a tail-wagging-the-dog scenario where an inherent conflict will ensue.
    Depending on their strength, determination to keep the peace, and willingness to overlook a built-in expectation that is wrong, the release manager here must face the crafted conflict head-on and defuse it as quickly as possible. Then, the release manager must make a clear case for why a change cannot be released without negative impact to all parties involved. If a political agenda is solely driving a software release, there IS no symphony, there is no "software lifecycle". It's just out-of-tune noise. More importantly, there is no real conductor. Sometimes, just wanting to make a beautiful sound is not enough. If you are a release manager, are you freed up enough to move, to conduct the sections of software creation to ensure a solid release performance is possible? If not, it's time to take stock of what your role actually is and see if that is what you truly want to achieve in your position. If you are, then you can successfully build your career, and that of the people in your groups, to create truly beautiful software (music) together.

    Read the article

  • Odd MVC 4 Beta Razor Designer Issue

    - by Rick Strahl
    This post is a small cry for help, along with an explanation of a problem that is hard to describe on Twitter or even in a Connect bug, written in hopes somebody has seen this before and has ideas on what might cause it. Lots of helpful people had comments on Twitter for me, but they all assumed that the code doesn't run, which is not the case - it's a designer issue.

    A few days ago I started getting some odd problems in my MVC 4 designer for an app I've been working on for the past 2 weeks. Basically, the MVC 4 Razor designer keeps popping up errors about the call signatures to various Html Helper methods being incorrect. It also complains about the ViewBag object not supporting dynamic, requesting that assemblies be loaded into the project. You can see the red error underlines under the ViewBag and under an Html Helper I plopped in at the top to demonstrate the behavior. Basically, any Html Helper I'm accessing is showing the same errors. Note that the code *runs just fine* - it's just the designer that is complaining with errors. What's odd about this is that *I think* this started only a few days ago, and nothing consequential that I can think of has happened to the project or my overall installations. These errors aren't critical since the code runs, but they're pretty annoying, especially if you're building and have .cshtml files open in Visual Studio, mixing these fake errors with real compiler errors.

    What I've checked

    Looking at the errors, it indeed looks like certain references are missing. I can't make sense of the Html Helpers error, but the ViewBag dynamic error certainly looks like the System.Core or Microsoft.CSharp assemblies are missing. Rest assured they are there and the code DOES run just fine at runtime. This is a designer issue only. I went ahead and checked the namespaces that MVC has access to in Razor, which live in the Views folder's web.config file (/Views/web.config):

        <system.web.webPages.razor>
          <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc,
                             Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <pages pageBaseType="System.Web.Mvc.WebViewPage">
            <namespaces>
              <add namespace="System.Web.Mvc" />
              <add namespace="System.Web.Mvc.Ajax" />
              <add namespace="System.Web.Mvc.Html" />
              <add namespace="System.Web.Routing" />
              <add namespace="System.Linq" />
              <add namespace="System.Linq.Expressions" />
              <add namespace="ClassifiedsBusiness" />
              <add namespace="ClassifiedsWeb" />
              <add namespace="Westwind.Utilities" />
              <add namespace="Westwind.Web" />
              <add namespace="Westwind.Web.Mvc" />
            </namespaces>
          </pages>
        </system.web.webPages.razor>

    For good measure I added System.Linq and System.Linq.Expressions, on which some of the Html.xxxxFor() methods rely, but no luck. So, has anybody seen this before? Any ideas on what might be causing these issues only at design time, when the final compiled code runs just fine?

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Razor, MVC.

    Read the article

  • Java Resources for Windows Azure

    - by BuckWoody
    Windows Azure is a Platform as a Service – a PaaS – that runs code you write. That code doesn't just mean the languages on the .NET platform – you can run code from multiple languages, including Java. In fact, you can develop for Windows and SQL Azure using not only Visual Studio but the Eclipse Integrated Development Environment (IDE) as well. Although not an exhaustive list, here are several links that deal with Java and Windows Azure:

    Windows Azure Java Development Center - http://www.windowsazure.com/en-us/develop/java/
    Java Development Guidance - http://msdn.microsoft.com/en-us/library/hh690943(VS.103).aspx
    Running a Java Environment on Windows Azure - http://blogs.technet.com/b/port25/archive/2010/10/28/running-a-java-environment-on-windows-azure.aspx
    Run Java with Jetty in Windows Azure - http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx
    Using the plugin for Eclipse - http://blogs.msdn.com/b/craig/archive/2011/03/22/new-plugin-for-eclipse-to-get-java-developers-off-the-ground-with-windows-azure.aspx
    Run Java with GlassFish in Windows Azure - http://blogs.msdn.com/b/dachou/archive/2011/01/17/run-java-with-glassfish-in-windows-azure.aspx
    Improving experience for Java developers with Windows Azure - http://blogs.msdn.com/b/interoperability/archive/2011/02/23/improving-experience-for-java-developers-with-windows-azure.aspx
    Java Access to SQL Azure via the JDBC Driver for SQL Server - http://blogs.msdn.com/b/brian_swan/archive/2011/03/29/java-access-to-sql-azure-via-the-jdbc-driver-for-sql-server.aspx
    How to Get Started with Java, Tomcat on Windows Azure - http://blogs.msdn.com/b/usisvde/archive/2011/03/04/how-to-get-started-with-java-tomcat-on-windows-azure.aspx
    Deploying Java Applications in Azure - http://blogs.msdn.com/b/mariok/archive/2011/01/05/deploying-java-applications-in-azure.aspx
    Using the Windows Azure Storage Explorer in Eclipse - http://blogs.msdn.com/b/brian_swan/archive/2011/01/11/using-the-windows-azure-storage-explorer-in-eclipse.aspx
    Windows Azure Tomcat Solution Accelerator - http://archive.msdn.microsoft.com/winazuretomcat
    Deploying a Java application to Windows Azure with Command-line Ant - http://java.interoperabilitybridges.com/articles/deploying-a-java-application-to-windows-azure-with-command-line-ant
    Video: Open in the Cloud: Windows Azure and Java - http://channel9.msdn.com/Events/PDC/PDC10/CS10
    AzureRunMe - http://azurerunme.codeplex.com/
    Windows Azure SDK for Java - http://www.interoperabilitybridges.com/projects/windows-azure-sdk-for-java
    AppFabric SDK for Java - http://www.interoperabilitybridges.com/projects/azure-java-sdk-for-net-services
    Information Cards for Java - http://www.interoperabilitybridges.com/projects/information-card-for-java
    Apache Stonehenge - http://www.interoperabilitybridges.com/projects/apache-stonehenge
    Channel 9 Case Study on Java and Windows Azure - http://www.microsoft.com/casestudies/Windows-Azure/Gigaspaces/Solution-Provider-Streamlines-Java-Application-Deployment-in-the-Cloud/400000000081

    Read the article

  • Welcome to the Oracle EMEA Partner Community for Exadata!

    - by javier.puerta(at)oracle.com
    The EMEA Partner Community for Exadata is the place where partners in Europe, the Middle East and Africa can share experiences and best practices about selling and implementing Exadata projects. You will also receive first-hand information from Oracle on products, training and tools that can help you better market, sell and implement your Exadata-based projects and services.

    Who should join the Community?
    Community membership is for individuals. If you are working for a company that is an Oracle partner and your job is selling, implementing or supporting Exadata projects in EMEA, then this community is for you.

    How is this different from the Oracle Exadata Knowledge Zone?
    The Oracle Exadata Knowledge Zone is the fundamental source of information from Oracle for partners interested in specializing in Exadata. It is highly recommended that you get access to the Knowledge Zones related to the product areas of your interest. To get access to any of the Knowledge Zones, an application must be completed by the Partner Program Administrator for your company. The Exadata Partner Community complements the Knowledge Zone by providing partners with information which is specific to the EMEA market (market, references, training, events, ...), and it is also a mechanism to share experiences and best practices among partners in marketing, selling, implementing and supporting Exadata projects.

    How to join?
    For you to be able to register as an individual, your company must be a member of the Oracle PartnerNetwork (OPN) and should be working towards becoming OPN Specialized in Exadata. If this is the case, then join the EMEA Exadata Partner Community now! If your company is not an OPN member yet, then join Oracle PartnerNetwork first.

    How do you get access to the information for community members?
    We use two mechanisms to provide and share information:
    - The EMEA Exadata Partner Community blog. This is a public blog, and we use it to provide quick and easy communication to the community members. For detailed or restricted material we will point you to a restricted area.
    - The EMEA Exadata Partner Community Collaborative Workspace. This is an area with restricted access that only community members can reach. It contains materials from community events, sales kits, implementation experiences, ... reserved for community members. It also allows partners to share content and collaborate with other community members. You will get access to this restricted area when you register as a member of the EMEA Exadata Partner Community.

    Need help?
    I hope that you will find the resources and the experience exchange provided by the community useful. If you need help or any further clarification, don't hesitate to contact me!

    Javier Puerta ([email protected])
    Director Core Technology Partner Programs, Alliances & Channels EMEA
    Phone: +34916312141 Mobile: +34609062373

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x, with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should, and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #    cache performance    redundant operations    loop structures    performance improvement
        1           x                                            x                  15.5
        2           x                                                                2.8
        3                                 x                      x                   2.5
        4           x                                                                2.1
        5           x                     x                                          2.0
        6                                 x                                          5.0
        7           x                                                                5.8
        8                                 x                                          6.3
        9                                                                            2.2
        10          x                     x                                          3.3

    Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

    When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
    Six out of the ten codes studied here benefited from such high-level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i        L2: for i        L3: for i
              for l            for l            for j
                for k            for j            for k
                  for j            for k            for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache.

    Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit-stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

        for (k = 0; k < NK[u]; k++) {
            sum = 0.0;
            for (y = 0; y < NY; y++) {
                sum += W[y][u][k] * delta[y];
            }
            backprop[i++] = sum;
        }

    and after:

        for (k = 0; k < KK - 8; k += 8) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (y = 0; y < NY; y++) {
                sum0 += W[y][0][k+0] * delta[y];
                sum1 += W[y][0][k+1] * delta[y];
                sum2 += W[y][0][k+2] * delta[y];
                sum3 += W[y][0][k+3] * delta[y];
                sum4 += W[y][0][k+4] * delta[y];
                sum5 += W[y][0][k+5] * delta[y];
                sum6 += W[y][0][k+6] * delta[y];
                sum7 += W[y][0][k+7] * delta[y];
            }
            backprop[k+0] = sum0;
            backprop[k+1] = sum1;
            backprop[k+2] = sum2;
            backprop[k+3] = sum3;
            backprop[k+4] = sum4;
            backprop[k+5] = sum5;
            backprop[k+6] = sum6;
            backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times.

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

    The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
            i = 0;
            for (u = 0; u < NU; u++) {
                for (k = 0; k < NK[u]; k++) {
                    dW[y][u][k] += delta[y] * I1[i++];
                }
            }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

    for (y = 0; y < NY; y++) {
        i = 0;
        Dy = delta[y];
        for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
                dW[y][u][k] += Dy * I1[i++];
            }
        }
    }

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it identifies no subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with

    a0[i]

A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well—there are no loop-invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent of time. Trading space for time, I precomputed the index values prior to entering the time loop and stored them in two arrays. I then replaced the index calculations with reads of the arrays.
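The pattern generalizes: any index arithmetic that is invariant across the time loop can be evaluated once and stored. The following C sketch only illustrates the idea (the original is Fortran; here the decomposition is 0-based, and the array names NX and NZ are mine, not the paper's):

    /* Precompute, once, the (nx, nz) decomposition used inside the
       doubly nested loop above; two int arrays of length nh buy back
       the integer divides and multiplies otherwise repeated on every
       time step. */
    void precompute_indices(int nh, int nz, int *NX, int *NZ)
    {
        for (int n = 0; n < nh; n++) {
            NX[n] = n / nz;          /* 0-based analogue of nx1, nx2 */
            NZ[n] = n - nz * NX[n];  /* 0-based analogue of nz1, nz2 */
        }
    }

Inside the time loop, the kernel then reads NX[n1], NZ[n1], NX[n2], and NZ[n2] instead of recomputing them on every iteration.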
Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the N×M possible inner products, it is more efficient to compute all the inner products in one triply nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i += 8) {
        for (j = 0; j < M; j++) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (k = 0; k < K; k++) {
                sum0 += A[i+0][k] * B[j][k];
                sum1 += A[i+1][k] * B[j][k];
                sum2 += A[i+2][k] * B[j][k];
                sum3 += A[i+3][k] * B[j][k];
                sum4 += A[i+4][k] * B[j][k];
                sum5 += A[i+5][k] * B[j][k];
                sum6 += A[i+6][k] * B[j][k];
                sum7 += A[i+7][k] * B[j][k];
            }
            C[i+0][j] = sum0; C[i+1][j] = sum1;
            C[i+2][j] = sum2; C[i+3][j] = sum3;
            C[i+4][j] = sum4; C[i+5][j] = sum5;
            C[i+6][j] = sum6; C[i+7][j] = sum7;
        }
    }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.
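The paper does not show code 5's loop, so the following C sketch only illustrates the shape of the transformation; expensive(), update(), and the bounds M and T are hypothetical stand-ins:

    /* Hedged sketch of the code 5 transformation. */
    #define M 1000     /* assumed computation-loop bound */
    #define T 10000    /* assumed time-loop bound        */

    double expensive(int i);              /* costly function of the index */
    void   update(int t, int i, double v);

    void run(void)
    {
        static double V[M];               /* precomputed values           */

        for (int i = 0; i < M; i++)       /* pay for expensive() once     */
            V[i] = expensive(i);

        for (int t = 0; t < T; t++)       /* time loop reads an array     */
            for (int i = 0; i < M; i++)
                update(t, i, V[i]);
    }

The trade is one array of M doubles for T×M evaluations of expensive().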
The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index.

    for (j = 0; j < N; j++) {
        for (i = 0; i < M; i++) {
            r = i * hrmax;
            R = A[j];
            temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
            high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
            low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
            kap = (R > PRM[6]) ?
                  high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
                  low * pow(R / PRM[6], PRM[5]);
            < rest of loop omitted >
        }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation of temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
        r = i * hrmax;
        TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
        R = rig[j] / 1000.;
        tmp1 = kcoeff * par[2] * beta[j] * par[4];
        tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
        tmp3 = 1.0 + (par[4] * par[4] * R * R);
        tmp4 = par[6] * par[6] / tmp2;
        tmp5 = R * R / tmp3;
        tmp6 = pow(R / par[6], par[5]);
        if ((par[3] == 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
        } else if ((par[3] == 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
        } else if ((par[3] != 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
        } else if ((par[3] != 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
        }
        for (i = 0; i < M; i++) {
            kap = KAP[i];
            r = i * hrmax;
            < rest of loop omitted >
        }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study, and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.
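To make the cost concrete, here is a small hedged C illustration of the two calling styles; the structure and functions are invented for the example, not taken from any of the ten codes:

    #include <stdio.h>

    typedef struct { double coords[4096]; } State;  /* 32 KB structure */

    /* Pass by value: all 32 KB of the argument are copied at every call. */
    double energy_by_value(State s) { return s.coords[0]; }

    /* Pass by pointer: only an 8-byte address is copied. */
    double energy_by_ptr(const State *s) { return s->coords[0]; }

    int main(void)
    {
        static State s = { { 1.0 } };
        /* In an inner loop, the first call copies the structure on each
           iteration; the second does not. */
        printf("%f %f\n", energy_by_value(s), energy_by_ptr(&s));
        return 0;
    }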
Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction-level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction-level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising, since the general tendency of programmers is to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10, where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to significantly improve the single-processor performance of all the codes. Improvements ranged from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level; I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and do not run in parallel. The developers are unaware of simple optimization techniques that would make their programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and they are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is, at most, an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

    - by JuergenKress
OOW 2013 Summary for Fusion Middleware Architects & Administrators by Simon Haslam

This September during Oracle OpenWorld 2013 the weather in San Francisco, as you can see from the photo, was exceptionally sunny. The dramatic final few days of the America's Cup sailing competition, being held every day in the bay, coincided with the conference and meant that there was almost a holiday feel to the whole event. Here's my annual round-up of what I think was most interesting at OpenWorld 2013 for Fusion Middleware architects and administrators; I hope you find it useful, and if you think I've missed something, please add a comment!

WebLogic and Cloud Application Foundation (CAF)

The big WebLogic release of the year has already happened a few months ago with 12.1.2, so I won't duplicate that here. Will Lyons discussed the WebLogic and Coherence roadmap, which essentially is that 12.1.3 will probably be released to coincide with SOA 12c next year and that 12.1.4, the next feature-rich WebLogic release, is more likely to be in 2015. This latter release will probably include full Java EE 7 support, have enhancements for multi-tenancy and further auto-scaling features to support increased density (i.e. more WebLogic usage for the same amount of hardware). There's a new Oracle Virtual Assembly Builder (OVAB) out already and an Oracle Traffic Director (OTD) 12c release round the corner too. Also of relevance to administrators is that Oracle has increased the support lifetime for Fusion Middleware 11g (e.g. WebLogic 10.3.6) so that Premier Support will now run to the end of 2018 and Extended Support until 2021 - this should remove any Oracle-driven pressure to upgrade, at least.

Java Mission Control

Java Mission Control (JMC) is the HotSpot Java 7 version of JRockit 6 Mission Control, a very nice performance monitoring tool from Oracle's BEA acquisition. Flight Recorder is a feature built into the JVM which records diagnostic events into, typically, a circular buffer which can then be used for historical analysis, particularly in the case of a JVM crash or hang. It's been available separately for WebLogic only for perhaps a year now but, more significantly, it now includes JVM events and was bundled in with JDK7 Update 40 a few weeks ago. I attended a couple of interesting JavaOne sessions on JMC/Flight Recorder and have to say it's looking really good - it has all the previous JRMC features except for the memory leak detector, plus some enhancements around operative sets and ECID filtering, I think. Marcus also showed how you could add your own events into Flight Recorder by building your own event class - they are then available for graphing alongside all the other events in JMC. This uses a currently unsupported/undocumented API, but it's also the same one that WebLogic uses for WLDF events, so I imagine it is stable. I'm not sure quite whether this would be useful to custom applications, as opposed to infrastructure services or ISV packaged applications, but it was a very nice demonstration. I've been testing JMC/FR enabling on several environments recently and my confidence is growing - it feels robust and I think it could very soon be part of my standard builds. Read the full article here.

WebLogic Partner Community

For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: OOW,Simon Haslam,Oracle OpenWorld,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • SQL SERVER – Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5

    - by pinaldave
In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely; instead of running a month-long contest, we are doing something more interesting. We are giving away a gift worth USD 198 every day this week: the Joes 2 Pros 5-volume (book) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, for a total of 2 giveaways (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win:
Read the question.
Read the hints.
Answer the quiz in the contact form in the following format: Question Answer, Name of the country. (The contest is open for USA and India residents only.)
2 winners will be randomly selected and announced on August 20th.

Question of the Day: Which of the following statements is incorrect?
a) Identity value can be negative.
b) Identity value can have a negative interval.
c) Identity value can be of datatype VARCHAR.
d) Identity value can have an increment interval larger than 1.

Query Hints: BIG HINT POST
A simple way to determine if a table contains an identity field is to use the SSMS Object Explorer Design Interface. Navigate to the table, then right-click it and choose Design from the pop-up window. When your design tab opens, select the first field in the table to view its list of properties in the lower pane of the tab (in this case the field is ProductID). Look to see if the Identity Specification property in the lower pane is set to either yes or no. SQL Server will allow you to utilize IDENTITY_INSERT with just one table at a time. After you've completed the needed work, it's very important to reset IDENTITY_INSERT back to OFF.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 2.
SQL Joes 2 Pros Development Series – Output Clause in Simple Examples
SQL Joes 2 Pros Development Series – Ranking Functions – Advanced NTILE in Detail
SQL Joes 2 Pros Development Series – Ranking Functions – RANK( ), DENSE_RANK( ), and ROW_NUMBER( )
SQL Joes 2 Pros Development Series – Advanced Aggregates with the Over Clause
SQL Joes 2 Pros Development Series – Aggregates with the Over Clause
SQL Joes 2 Pros Development Series – Overriding Identity Fields – Tricks and Tips of Identity Fields
SQL Joes 2 Pros Development Series – Many to Many Relationships

Next Step: Answer the quiz in the contact form in the format above: Question Answer, Name of the country. (The contest is open for USA and India.)

Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked the particular blog post, and I will announce the winner of the same along with the main contest.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Use Any Folder For Your Ubuntu Desktop (Even a Dropbox Folder)

    - by Trevor Bekolay
By default, Ubuntu creates a folder called Desktop in your home directory that gets displayed on your desktop. What if you want to use something else, like your Dropbox folder? Here we look at how to use any folder for your desktop. Not only can you change your desktop folder, you can change the location of any other folder Ubuntu creates for you in your home folder, like Documents or Music, and this works in any Linux distribution using the Gnome desktop manager. In this example, we're going to change the desktop to show our Dropbox folder.

Open your home folder in a File Browser by clicking on Places > Home Folder. In the Home Folder, open the .config folder. By default, .config is hidden, so you may have to show hidden folders (temporarily) by clicking on View > Show Hidden Files. Then open the .config folder by double-clicking on it. Now open the user-dirs.dirs file. If double-clicking on it does not open it in a text editor, right-click on it and choose Open with Other Application… and find a text editor like Gedit.

Change the entry associated with XDG_DESKTOP_DIR to the folder you want to be shown as your desktop. In our case, this is $HOME/Dropbox, so the line ends up reading XDG_DESKTOP_DIR="$HOME/Dropbox". Note: The "~" shortcut for the home directory won't work in this file (use $HOME for that), but an absolute path (i.e. a path starting with "/") will work. Feel free to change the locations of the other folders as well. Save and close user-dirs.dirs.

At this point you can either log off and then log back on to get your desktop back, or open a terminal window (Applications > Accessories > Terminal) and enter:

killall nautilus

Nautilus (the file manager in Gnome) will restart itself and display your newly chosen folder as the desktop! This is a cool trick to use any folder for your Ubuntu desktop. What did you use as your desktop folder? Let us know in the comments!

    Read the article

  • JDBC Triggers

    - by Tim Dexter
Received a question from a customer last week; they were using the new rollup patch on top of 10.1.3.4.1. What are these boxes for? Don't you know? Surely? Well, they are for ... that new functionality, you know, it's in the user docs, that thingmabobby doodah. OK, I don't know either. I could have a guess, but let me check first. Several IM sessions, emails and a dig through the readme for the new patch, and I had my answer. It's not in the official documentation yet. Leslie is on the case.

The two fields were designed to allow an admin to set a user's context attributes before a connection is made to a database, and to un-set the attributes after the connection is broken by the extraction engine. We got a sample from the Enterprise Manager team on how they will be using it with their VPD connections.

    FUNCTION bip_to_em_user (user_name_in IN VARCHAR2)
    RETURN BOOLEAN IS
    BEGIN
      SETEMUSERCONTEXT(user_name_in, MGMT_USER.OP_SET_IDENTIFIER);
      return TRUE;
    END bip_to_em_user;

And used in the jdbc data source definition like this (pre-process function):

    sysman.mgmt_bip.bip_to_em_user(:xdo_user_name)

You, of course, can call any function that is going to return a boolean value. Another example might be:

    FUNCTION set_per_process_username (username_in IN VARCHAR2)
    RETURN BOOLEAN IS
    BEGIN
      SETUSERCONTEXT(username_in);
      return TRUE;
    END set_per_process_username;

Just use your own function/package to set some user context. Very grateful for the mail from Leslie on the EM team's usage, but I had to try it out. Rather than set up a VPD, I opted for a simpler test: can I log the comings and goings of users and their queries using the same pre-process text box? Reaching back into the depths of my developer brain to remember some pl/sql (it was not that deep), I came up with:

    CREATE OR REPLACE FUNCTION BIPTEST (user_name_in IN VARCHAR2, smode IN VARCHAR2)
    RETURN BOOLEAN AS
    BEGIN
      INSERT INTO LOGTAB VALUES(user_name_in, sysdate, smode);
      RETURN true;
    END BIPTEST;

To call it in the pre-fetch trigger:

    BIPTEST(:xdo_user_name)

Not going to set the pl/sql world alight, I know, but you get the idea. As a new connection is made to the database, it's logged in the LOGTAB table. The SMODE value just records whether it's an entry or an exit. I used the pre- and post- boxes.

    NAME           UPDATE_DATE                        S_FLAG
    oracle         14-MAY-10 09.51.34.000000000 AM    Start
    oracle         14-MAY-10 10.23.57.000000000 AM    Finish
    administrator  14-MAY-10 09.51.38.000000000 AM    Start
    administrator  14-MAY-10 09.51.38.000000000 AM    Finish
    oracle         14-MAY-10 09.51.42.000000000 AM    Start
    oracle         14-MAY-10 09.51.42.000000000 AM    Finish

It works very well. I had some fun trying to find a nasty query for the extraction engine so that the timestamps from in to out actually had a difference. That engine is fast! The only derived value you can pass from BIP is :xdo_user_name; none of the other server values are available. Connection pools are not currently supported but are planned for a future release. Now you know what those fields are for, and look for some official documentation, rather than my ramblings, coming soon!

    Read the article

  • Q&A: Oracle's Paul Needham on How to Defend Against Insider Attacks

    - by Troy Kitch
Source: Database Insider Newsletter: The threat from insider attacks continues to grow. In fact, just since January 1, 2014, insider breaches have been reported by a major consumer bank, a major healthcare organization, and a range of state and local agencies, according to the Privacy Rights Clearinghouse. We asked Paul Needham, Oracle senior director, product management, to shed light on the nature of these pernicious risks—and how organizations can best defend themselves against the threat from insider risks.

Q. First, can you please define the term "insider" in this context?

A. According to the CERT Insider Threat Center, a malicious insider is a current or former employee, contractor, or business partner who "has or had authorized access to an organization's network, system, or data and intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization's information or information systems."

Q. What has changed with regard to insider risks?

A. We are actually seeing the risk of privileged insiders growing. In the latest Independent Oracle Users Group Data Security Survey, the number of organizations that had not taken steps to prevent privileged user access to sensitive information had grown from 37 percent to 42 percent. Additionally, 63 percent of respondents say that insider attacks represent a medium-to-high risk—higher than any other category except human error (by an insider, I might add).

Q. What are the dangers of this type of risk?

A. Insiders tend to have special insight and access into the kinds of data that are especially sensitive. Breaches can result in long-term legal issues and financial penalties. They can also damage an organization's brand in a way that directly impacts its bottom line. Finally, there is the potential loss of intellectual property, which can have serious long-term consequences because of the loss of market advantage.

Q. How can organizations protect themselves against abuse of privileged access?

A. Every organization has privileged users and that will always be the case. The questions are how much access should those users have to application data stored in the database, and how can that default access be controlled? Oracle Database Vault was designed specifically for this purpose and helps protect application data against unauthorized access.

Oracle Database Vault can be used to block default privileged user access from inside the database, as well as increase security controls on the application itself. Attacks can and do come from inside the organization, and they are just as likely to come from outside as attempts to exploit a privileged account. Using Oracle Database Vault protection, boundaries can be placed around database schemas, objects, and roles, preventing privileged account access from being exploited by hackers and insiders. A new Oracle Database Vault capability called privilege analysis identifies privileges and roles used at runtime, which can then be audited or revoked by the security administrators to reduce the attack surface and increase the security of applications overall.

For a more comprehensive look at controlling data access and restricting privileged data in Oracle Database, download Needham's new e-book, Securing Oracle Database 12c: A Technical Primer.

    Read the article

  • SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28

    - by pinaldave
Glenn Berry works as a Database Architect at NewsGator Technologies in Denver, CO. He is a SQL Server MVP, and has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS. He is also an Adjunct Faculty member at University College – University of Denver, where he has been teaching since 2000. He is a wonderful blogger and often blogs here. I am a big fan of the Dynamic Management Views (DMV) scripts of Glenn. His scripts are extremely popular, and the reality is that he inspired me to start this series with his famous DMV, which I mentioned in the very first wait stats blog post (I had forgotten to request his permission to re-use the script, but when asked later he wholeheartedly approved it). Here is his excellent blog post on this subject of wait stats:

Analyzing cumulative wait stats in SQL Server 2005 and above has become a popular and effective technique for diagnosing performance issues and further focusing your troubleshooting and diagnostic efforts. Rather than just guessing about what resource(s) SQL Server is waiting on, you can actually find out by running a relatively simple DMV query. Once you know what resources SQL Server is spending the most time waiting on, you can run more specific queries that focus on that resource to get a better idea what is causing the problem.

I do want to throw out a few caveats about using wait stats as a diagnostic tool. First, they are most useful when your SQL Server instance is experiencing performance problems. If your instance is running well, with no indication of any resource pressure from other sources, then you should not worry that much about what the top wait types are. SQL Server will always be waiting on some resource, but many wait types are quite benign, and can be safely ignored. In spite of this, I quite often see experienced DBAs obsessing over the top wait type, even when their SQL Server instance is running extremely well.

Second, I often see DBAs jump to the wrong conclusion based on seeing a particular well-known wait type. A good example is CXPACKET waits. People typically jump to the conclusion that high CXPACKET waits mean that they should immediately change their instance-level MAXDOP setting to 1. This is not always the best solution. You need to consider your workload type, and look carefully for any important "missing" indexes that might be causing the query optimizer to use a parallel plan to compensate for the missing index. In this case, correcting the index problem is usually a better solution than changing MAXDOP, since you are curing the disease rather than just treating the symptom.

Finally, you should get in the habit of clearing out your cumulative wait stats with the DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); command. This is especially important if you have made any configuration or index changes, or if your workload has changed recently. Otherwise, your cumulative wait stats will be polluted with the old stats from weeks or months ago (since the last time SQL Server was started or the stats were cleared). If you make a change to your SQL Server instance, or add an index, you should clear out your wait stats, and then wait a while to see what your new top wait stats are.

At any rate, enjoy Pinal Dave's series on Wait Stats. This blog post has been written by Glenn Berry (Twitter | Blog) Read all the posts in the Wait Types and Queue series.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • LINQ and ordering of the result set

    - by vik20000in
After filtering and retrieving the records, most of the time (if not always) we have to sort them in a certain order. The sort order is very important for displaying records or for major calculations. In LINQ, the orderby keyword is used for sorting data. With the help of the orderby keyword we can decide the ordering of the result set that is retrieved after the query. Below is a simple example of the orderby keyword in LINQ.

    string[] words = { "cherry", "apple", "blueberry" };
    var sortedWords =
        from word in words
        orderby word
        select word;

Here we are ordering the data retrieved based on string ordering. If required, the ordering can also be based on any property of the individual element, like the length of the string.

    var sortedWords =
        from word in words
        orderby word.Length
        select word;

You can also make the order descending or ascending by adding the keyword after the parameter.

    var sortedWords =
        from word in words
        orderby word descending
        select word;

But the best part of the orderby clause is that instead of just passing a field, you can also pass it an instance of any class that implements the IComparer<T> interface. The IComparer<T> interface holds a method, Compare, that has to be implemented. In that method we can write any logic whatsoever for the comparison. In the below example we are making a string comparison that ignores case.

    string[] words = { "aPPLE", "AbAcUs", "bRaNcH", "BlUeBeRrY", "cHeRry" };
    var sortedWords = words.OrderBy(a => a, new CaseInsensitiveComparer());

    public class CaseInsensitiveComparer : IComparer<string>
    {
        public int Compare(string x, string y)
        {
            return string.Compare(x, y, StringComparison.OrdinalIgnoreCase);
        }
    }

But while sorting the data, many times we want to provide more than one sort key, so that data is sorted based on more than one condition. This can be achieved by providing the next key after a comma.

    var sortedWords =
        from word in words
        orderby word, word.Length
        select word;

We can also use the Reverse() method to reverse the full order of the result set.

    var sortedWords =
        (from word in words
         select word).Reverse();

Vikram

    Read the article

  • Building Extensions Using E-Business Suite SDK for Java

    - by Sara Woodhull
We’ve just released Version 2.0.1 of Oracle E-Business Suite SDK for Java. This new version has several great enhancements added after I wrote about the first version of the SDK in 2010. In addition to the AppsDataSource and Java Authentication and Authorization Service (JAAS) features that are in the first version, the Oracle E-Business Suite SDK for Java now provides:

Session management APIs, so you can share session information with Oracle E-Business Suite
Setup script for UNIX/Linux for AppsDataSource and JAAS on Oracle WebLogic Server
APIs for Message Dictionary, User Profiles, and NLS
Javadoc for the APIs (included with the patch)
Enhanced documentation included with Note 974949.1

These features can be used with either Release 11i or Release 12.

References

AppsDataSource, Java Authentication and Authorization Service, and Utilities for Oracle E-Business Suite (Note 974949.1)
FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1)

What's new in those references? Note 974949.1 is the place to look for the latest information as we come out with new versions of the SDK. The patch number changes for each release. Version 2.0.1 is contained in Patch 13882058, which is for both Release 11i and Release 12. Note 974949.1 includes the following topics:

Applying the latest patch
Using Oracle E-Business Suite Data Sources
Oracle E-Business Suite Implementation of Java Authentication and Authorization Service (JAAS)
Utilities
Error logging
Session management
Message Dictionary
User profiles
Navigation to External Applications
Java EE Session Management Tutorial

For those of you using the SDK with Oracle ADF, besides some Oracle ADF-specific documentation in Note 974949.1, we also updated the ADF Integration FAQ as well.

EBS SDK for Java Use Cases

The uses of the Oracle E-Business Suite SDK for Java fall into two general scenarios for integrating external applications with Oracle E-Business Suite:

Application sharing a session with Oracle E-Business Suite
Independent application (not shared session)

With an independent application, the external application accesses Oracle E-Business Suite data and server-side APIs, but it has a completely separate user interface. The external application may also launch pages from the Oracle E-Business Suite home page, but after the initial launch there is no further communication with the Oracle E-Business Suite user interface. Shared session integration means that the external application uses an Oracle E-Business Suite session (ICX session), shares session context information with Oracle E-Business Suite, and accesses Oracle E-Business Suite data. The external application may also launch pages from the Oracle E-Business Suite home page, or regions or pages from the external application may be embedded as regions within Oracle Application Framework pages. Both shared session applications and independent applications use the AppsDataSource feature of the Oracle E-Business Suite SDK for Java. Independent applications may also use the Java Authentication and Authorization (JAAS) and logging features of the SDK. Applications that are sharing the Oracle E-Business Suite session use the session management feature (instead of the JAAS feature), and they may also use the logging, profiles, and Message Dictionary features of the SDK.
The session management APIs allow you to create, retrieve, validate and cancel an Oracle E-Business Suite session (ICX session) from your external application. Session information and context can travel back and forth between Oracle E-Business Suite and your application, allowing you to share session context information across applications. Note: Generally you would use the Java Authentication and Authorization (JAAS) feature of the SDK or the session management feature, but not both together.

Send us your feedback

Since the Oracle E-Business Suite SDK for Java is still pretty new, we'd like to know who is using it and what you are trying to do with it. We'd like to get this type of information:

customer name and brief use case
configuration and technologies (Oracle WebLogic Server or OC4J, plain Java, ADF, SOA Suite, and so on)
project status (proof of concept, development, production)
any other feedback you have about the SDK

You can send me your feedback directly at Sara dot Woodhull at Oracle dot com, or you can leave it in the comments below. Please keep in mind that we cannot answer support questions, so if you are having specific issues, please log a service request with Oracle Support. Happy coding!

Related Articles

New Whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express
To Customize or Not to Customize?
New Whitepaper: Upgrading your Customizations to Oracle E-Business Suite Release 12
ATG Live Webcast: Upgrading your EBS 11i Customizations to Release 12

    Read the article

  • HP and Microsoft: That&rsquo;s What Friends Are For ?

    - by andrewbrust
Today, HP pre-announced the second coming out for its recently acquired Palm webOS mobile operating system. I happen to think webOS is quite good, and when the Palm Pre first came out, I thought it a worthwhile phone. I was worried, though, that the platform would never attract the developer mindshare it needed to be competitive, and that turned out to be the case. But then HP acquired Palm and announced it would be revamping the webOS offering, not only on phones, but also on tablets. It later announced that it would also use webOS as an embedded solution on HP printers.

The timing of this came shortly after HP had announced it would be producing a "Slate" product running Windows 7. After the Palm deal, HP became vague about whether the Windows-powered slate would actually come out. They did, in fact, bring the Slate 500 to market, but by some accounts, they only built 5000 units.

Another recent awkward moment between HP and Microsoft: HP withdrew itself from the Windows Home Server ecosystem. That one hurt, as they were the dominant OEM there. But Microsoft's decision to kill Drive Extender had driven away many parties, not just HP.

On Wednesday, HP came out with their TouchPad, and new phone models. Not a nice thing for Windows Phone 7, but other OEMs are taking a wait-and-see attitude there too, I suppose. There was one more zinger though, and it was bigger: HP announced they'd be porting webOS to PCs.

No Windows Phone 7? OK. No Windows Home Server? Whatcha gonna do? But no Windows 7 either? From HP? What comes after that, no ink and toner?

Some people think Microsoft's been around too long to be relevant. But HP started out making oscilloscopes! The notion that HP is too cool for Windows school is a bit far-fetched. This is the company that bought EDS. This is the company that bought Compaq. And Compaq was the company that bought Digital Equipment Corporation. Somehow, I don't think the VT 220 outclasses Windows PCs.

What could possibly be going on? My sense is that HP wants to put webOS on PCs that also have Windows, and that people will buy because they have Windows. And for every one of those sold, HP gets to count, technically speaking, another webOS unit in the install base. webOS is really nice, as I said. But being good isn't good enough when you are trying to get market share. Number of units shipped matters. The question is whether counting PCs with webOS installed, but dormant, is helpful to HP's cause. Seems like a funny way to account for market share, and a strange way to treat a big partner in Redmond.

    Read the article

  • How to future-proof my touch-enabled web application?

    - by Rice Flour Cookies
I recently went out and purchased a touch-screen monitor with the intention of learning how to program touch-enabled web applications. I had reviewed the MDN documentation about touch events, as well as the W3C specification. To get started, I wrote a very short test page with two event handlers: one for the mousedown event and one for the touchstart event. I fired up the web page in IE and touched the document and found that only the mousedown event fired. I saw the same behavior with Firefox, only to find out later that Firefox can be set to enable the touchstart event using about:config. When touch events are enabled, the touchstart event fires, but not mousedown. Chrome was even stranger: it fired both events when I touched the document: touchstart and mousedown, in that order. Only on my Android phone does it appear to be the case that only the touchstart event fires when I touch the document.

I did a Google search and ended up on two interesting pages. First, I found the Can I Use page on touch events: http://caniuse.com/#feat=touch

Can I Use clearly indicates that IE does not support touch events as of this writing, and Firefox only supports touch events if they are manually enabled. Furthermore, all four browsers I mentioned treat the touch in a completely different way. It boils down to this:

IE: simulated mouse click
Firefox with touch disabled: simulated mouse click
Firefox with touch enabled: touch event
Chrome: touch event and simulated mouse click
Android: touch event

What is more frustrating is that Google also found a Microsoft page called RethinkIE. RethinkIE brags about touch support in IE; as a matter of fact, one of their slogans is "Touch the Web". It links to a number of touch-based applications. I followed some of these links, and as best I can tell, it's just like Can I Use described: no proper touch support, just simulated mouse clicks.

The MDN (https://developer.mozilla.org/en-US/docs/Web/API/Touch) and W3C (http://www.w3.org/TR/touch-events/) documentation describe a far richer interface; an interface that doesn't just simulate mouse clicks, but keeps track of multiple touches at once, the contact area, rotation, and force of each touch, and unique identifiers for each touch so that they can be tracked individually. I don't see how simulated mouse clicks can ever touch the above-described functionality, which, once again, is part of the W3C specification, although it is listed as "non-normative", meaning that a browser can claim to be standards-compliant without implementing it. (Why bother making it part of the standard, then?)

What motivated my research is that I've written an HTML5 application that doesn't work on Android because Android doesn't fire mouse events. I'm now afraid to try to implement touch for my application because the browsers all behave so differently. I imagine that at some time in the future, the browsers might start handling touch similarly, but how can I tell how they might be handled in the future short of writing code to handle the behavior of each individual browser? Is it possible to write code today that will work with touch-enabled browsers for years to come? If so, how?

    Read the article

  • Today at Oracle OpenWorld 2012

    - by Scott McNeil
We have another full day of great Oracle OpenWorld keynotes, sessions, demos and customer presentations in the Scene and Be Heard theater. Here's a quick rundown of what's happening today with Oracle Enterprise Manager 12c. Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF).

Oracle Enterprise Manager Cloud Control 12c (and Private Cloud)

General Session, Tues 2 Oct, 2012 (Time / Title / Location)
11:45 AM - 12:45 PM General Session: Using Oracle Enterprise Manager to Manage Your Own Private Cloud Moscone South - 103*
1:15 PM - 2:15 PM General Session: Breakthrough Efficiency in Private Cloud Infrastructure Moscone West - 3014

Conference Session, Tues 2 Oct, 2012 (Time / Title / Location)
10:15 AM - 11:15 AM Oracle Exadata/Oracle Enterprise Manager 12c: Journey into Oracle Database Cloud Moscone West - 3018
10:15 AM - 11:15 AM Bulletproof Your Application Upgrades with Secure Data Masking and Subsetting Moscone West - 3020
10:15 AM - 11:15 AM Oracle Enterprise Manager 12c: Architecture Deep Dive, Tips, and Techniques Moscone South - 303
11:45 AM - 12:45 PM RDBMS Forensics: Troubleshooting with Active Session History Moscone West - 3018
11:45 AM - 12:45 PM Building and Operationalizing Your Data Center Environment with Oracle Exalogic Moscone South - 309
11:45 AM - 12:45 PM Securely Building a National Electronic Health Record: Singapore Case Study Westin San Francisco - Concordia
1:15 PM - 2:15 PM Managing Heterogeneous Environments with Oracle Enterprise Manager Moscone West - 3018
1:15 PM - 2:15 PM Complete Oracle WebLogic Server Management with Oracle Enterprise Manager 12c Moscone South - 309
1:15 PM - 2:15 PM Database Lifecycle Management with Oracle Enterprise Manager 12c Moscone West - 3020
1:15 PM - 2:15 PM Best Practices, Key Features, Tips, Techniques for Oracle Enterprise Manager 12c Upgrade Moscone South - 307
1:15 PM - 2:15 PM Enterprise Cloud with CSC’s Foundation Services for Oracle and Oracle Enterprise Manager 12c Moscone South - 236
5:00 PM - 6:00 PM Deep Dive 3-D on Oracle Exadata Management: From Discovery to Deployment to Diagnostics Moscone West - 3018
5:00 PM - 6:00 PM Everything You Need to Know About Monitoring and Troubleshooting Oracle GoldenGate Moscone West - 3005
5:00 PM - 6:00 PM Oracle Enterprise Manager 12c: The Nerve Center of Oracle Cloud Moscone West - 3020
5:00 PM - 6:00 PM Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager Moscone West - 2016
5:00 PM - 6:00 PM Oracle Enterprise Manager 12c Cloud Control Performance Pages: Falling in Love Again Moscone West - 3014

Hands-on Labs, Tues 2 Oct, 2012 (Time / Title / Location)
10:15 AM - 12:45 PM Managing the Cloud with Oracle Enterprise Manager 12c Marriott Marquis - Salon 5/6
1:15 PM - 2:15 PM Database Performance Tuning Hands-on Lab Marriott Marquis - Salon 5/6

Scene and Be Heard Theater Session, Tues 2 Oct, 2012 (Time / Title / Location)
10:30 AM - 10:50 AM Start Small, Grow Big: Hands-On Oracle Private Cloud—A Step-by-Step Guide Moscone South Exhibition Hall - Booth 2407
12:30 PM - 12:50 PM Blue Medora’s Oracle Enterprise Manager Plug-in for VMware vSphere Monitoring Moscone South Exhibition Hall - Booth 2407

Demos (Demo / Location)
Application and Infrastructure Testing Moscone West - W-092
Automatic Application and SQL Tuning Moscone South, Left - S-042
Automatic Fault Diagnostics Moscone South, Left - S-036
Automatic Performance Diagnostics Moscone South, Left - S-033
Complete Care for Oracle Using My Oracle Support Moscone South, Left - S-031
Complete Cloud Lifecycle Management Moscone North, Upper Lobby - N-019
Complete Database Lifecycle Management Moscone South, Left - S-030
Comprehensive Infrastructure as a Service via Oracle Enterprise Manager Moscone South, Left - S-045
Data Masking and Data Subsetting Moscone South, Left - S-034
Database Testing with Oracle Real Application Testing Moscone South, Left - S-041
Identity Management Monitoring with Oracle Enterprise Manager Moscone South, Right - S-212
Mission-Critical, SPARC-Powered Infrastructure as a Service Moscone South, Center - S-157
Oracle E-Business Suite, Siebel, JD Edwards, and PeopleSoft Management Moscone West - W-084
Oracle Enterprise Manager Cloud Control 12c Overview Moscone South, Left - S-039
Oracle Enterprise Manager: Complete Data Center Management Moscone South, Left - S-040
Oracle Exadata Management Moscone South, Center -
Oracle Exalogic Management Moscone South, Center -
Oracle Fusion Applications Management Moscone West - W-018
Oracle Real User Experience Insight Moscone South, Right - S-226
Oracle WebLogic Server Management and Java Diagnostics Moscone South, Right - S-206
Platform as a Service Using Oracle Enterprise Manager Moscone North, Upper Lobby - N-020
SOA Management Moscone South, Right - S-225
Self-Service Application Testing on Private and Public Clouds Moscone West - W-110

Oracle OpenWorld Music Festival

New this year is Oracle's first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco—free to registered attendees. See the lineup.

Not Heading to OpenWorld—Watch it Live!

Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Adventures in MVVM &ndash; ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
More Adventures in MVVM

In this post, I am going to explore how I prefer to attach ViewModels to my Views. I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works, along with some examples.

Some History

My approach to View-First ViewModel creation has evolved over time. I have constructed ViewModels in code-behind. I have instantiated ViewModels in the resources section of the view. I have used Prism to resolve ViewModels via Dependency Injection. I have created attached properties that use Dependency Injection containers underneath. Of all these approaches, I continue to find issues either in composability, blendability or maintainability. Laurent Bugnion came up with a pretty good approach in MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues. John paired up with Glenn Block to make the ViewModelLocator more generic by using MEF to compose ViewModels. It is a great approach, but I don't like baking specific resolution technologies into the ViewModelSupport project.

I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels. I have come up with my own version of the ViewModelLocator that is both generic and container agnostic. The solution is blendable, configurable and simple to use. Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.Create, lookup tables, new, whatever.

How to use the locator

1. Create a class to contain your resolution configuration:

    public class YourViewModelResolver : IViewModelResolver
    {
        private YourFavoriteContainer container = new YourFavoriteContainer();

        public YourViewModelResolver()
        {
            // Configure your container
        }

        public object Resolve(string viewModelName)
        {
            return container.Resolve(viewModelName);
        }
    }

Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance.

2. Create your ViewModelLocator with your custom resolver in App.xaml:

    <VMS:ViewModelLocator x:Key="ViewModelLocator">
        <VMS:ViewModelLocator.Resolver>
            <local:YourViewModelResolver />
        </VMS:ViewModelLocator.Resolver>
    </VMS:ViewModelLocator>

3. Hook up your data context whenever you want a ViewModel (WPF):

    <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}">

This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it.

4. What about Silverlight? Good question. You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing:

    <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}">

But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer 6 times when it binds. While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times. If this gets in your way, the solution (as pointed out by John) is to use an IndexConverter (instantiated in App.xaml and also included in the project):

    <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}">

It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.
    Conclusion

    This approach works really well (I suppose I am a bit biased). It allows for composability from any mechanism you choose. It is blendable (consider serving up different objects in Design Mode if you wish... or different constructors… whatever makes sense to you). It works in Cider. It is configurable. It is flexible. It is the best way I have found to manage View-First ViewModel hookups. Thanks to the guys mentioned in this article for getting me to something I love using. Enjoy.

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2D RPG (in a similar vein to Final Fantasy) and I was wondering if anyone has advice on how to design damage formulas, or links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my question is too large/broad.

    My characters' stats are composed of the following:

        enum Stat
        {
            HP = 0,
            MP = 1,
            SP = 2,
            Strength = 3,
            Vitality = 4,
            Magic = 5,
            Spirit = 6,
            Skill = 7,
            Speed = 8,    // Speed/Agility are the same thing
            Agility = 8,
            Evasion = 9,
            MgEvasion = 10,
            Accuracy = 11,
            Luck = 12,
        };

    Vitality is basically defense to physical attacks and Spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for MP/SP, 999 for the rest), and typical values at level 100 (before/after abilities, equipment, etc.) will be 8000/20,000 for HP, 800/2000 for SP/MP and 180/350 for the other stats. Late-game enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million).

    I was wondering how people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference is here: http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381, but as a quick example: Str = 127, the 'Attack' command is used, enemy Def = 34.

        1. Physical Damage Calculation:
        Step 1 ------------------------------------- [{(Stat^3 ÷ 32) + 32} x DmCon ÷16]
        Step 2 ---------------------------------------- [{(127^3 ÷ 32) + 32} x 16 ÷ 16]
        Step 3 -------------------------------------- [{(2048383 ÷ 32) + 32} x 16 ÷ 16]
        Step 4 --------------------------------------------------- [{(64011) + 32} x 1]
        Step 5 -------------------------------------------------------- [{(64043 x 1)}]
        Step 6 ---------------------------------------------------- Base Damage = 64043
        Step 7 ----------------------------------------- [{(Def - 280.4)^2} ÷ 110] + 16
        Step 8 ------------------------------------------ [{(34 - 280.4)^2} ÷ 110] + 16
        Step 9 ------------------------------------------------ [{(-246)^2} ÷ 110] + 16
        Step 10 ---------------------------------------------------- [60516 ÷ 110] + 16
        Step 11 ------------------------------------------------------------ [550] + 16
        Step 12 ---------------------------------------------------------- DefNum = 566
        Step 13 ---------------------------------------------- [BaseDmg * DefNum ÷ 730]
        Step 14 --------------------------------------------------- [64043 * 566 ÷ 730]
        Step 15 ------------------------------------------------------ [36248338 ÷ 730]
        Step 16 ------------------------------------------------- Base Damage 2 = 49655
        Step 17 ------------ Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730
        Step 18 ---------------------- 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730
        Step 19 ------------------------- 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730
        Step 20 ------------------------------- 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730
        Step 21 ------------------------------------- 49655 * {730 - (1629) ÷ 10} ÷ 730
        Step 22 --------------------------------------------- 49655 * {730 - 162} ÷ 730
        Step 23 ----------------------------------------------------- 49655 * 568 ÷ 730
        Step 24 -------------------------------------------------- Final Damage = 38635

    I simply modified the dividers to include the attack rating of weapons and the armor rating of armor.
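    In code, the physical steps above boil down to something like the following sketch (names are mine; integer division truncates at each step as in the FAQ, though the FAQ truncates a couple of intermediates slightly differently, so results can drift by a point or two):

        static class FfxFormulas
        {
            // Sketch of the FFX-style physical damage steps quoted above.
            // str = Strength, def = enemy Defense, dmCon = the command's
            // damage constant (16 for a plain Attack).
            public static long PhysicalDamage(long str, long def, long dmCon)
            {
                long baseDamage = ((str * str * str / 32) + 32) * dmCon / 16;
                long defNum = (long)((def - 280.4) * (def - 280.4) / 110) + 16;
                long damage = baseDamage * defNum / 730;
                return damage * (730 - (def * 51 - def * def / 11) / 10) / 730;
            }
        }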
    Magic damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1.

        Step 1 ----------------------------------- [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4]
        Step 2 ------------------------------------------ [70 * ([255^2 ÷ 6] + 70) ÷ 4]
        Step 3 ------------------------------------------ [70 * ([65025 ÷ 6] + 70) ÷ 4]
        Step 4 ------------------------------------------------ [70 * (10837 + 70) ÷ 4]
        Step 5 ----------------------------------------------------- [70 * (10907) ÷ 4]
        Step 6 ------------------------------------ Base Damage = 190872 [cut to 99999]
        Step 7 ---------------------------------------- [{(MDef - 280.4)^2} ÷ 110] + 16
        Step 8 ------------------------------------------- [{(1 - 280.4)^2} ÷ 110] + 16
        Step 9 ---------------------------------------------- [{(-279.4)^2} ÷ 110] + 16
        Step 10 -------------------------------------------------- [(78064) ÷ 110] + 16
        Step 11 ------------------------------------------------------------ [709] + 16
        Step 12 --------------------------------------------------------- MDefNum = 725
        Step 13 --------------------------------------------- [BaseDmg * MDefNum ÷ 730]
        Step 14 --------------------------------------------------- [99999 * 725 ÷ 730]
        Step 15 ------------------------------------------------- Base Damage 2 = 99314
        Step 16 ---------- Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730
        Step 17 ------------------------ 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730
        Step 18 ------------------------------ 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730
        Step 19 --------------------------------------- 99314 * {730 - (49) ÷ 10} ÷ 730
        Step 20 ----------------------------------------------------- 99314 * 725 ÷ 730
        Step 21 -------------------------------------------------- Final Damage = 98633

    The problem is that the formulas completely fall apart once stats start going above 255. In particular, Defense values over 300 or so start generating really strange behavior; high Strength + Defense stats lead to massive negative values, for instance. While I might be able to modify the formulas to work correctly for my use case, it'd probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening Excel and trying to build the formula that way (mapping attack stats vs. defense stats, for instance), but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
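    To make the strange behavior concrete: both defense factors in the formula are quadratics in Def with a vertex near Def = 280, so past that point raising Defense actually increases the damage multipliers again. A quick check (helper names mine):

        using System;

        class DefenseCurveCheck
        {
            static void Main()
            {
                for (long def = 0; def <= 500; def += 100)
                {
                    long defNum = (long)((def - 280.4) * (def - 280.4) / 110) + 16;
                    long scale  = 730 - (def * 51 - def * def / 11) / 10;
                    Console.WriteLine("Def={0}: defNum={1}, scale={2}", def, defNum, scale);
                }
                // Both columns shrink until Def ~ 280 and then grow again
                // (e.g. Def=300 -> defNum=19, Def=400 -> defNum=146).
            }
        }

    The massive negative values are probably 32-bit overflow: baseDamage * defNum exceeds int.MaxValue once base damage climbs into the millions, so doing the math in 64-bit integers at least keeps the sign correct, even if the curve shape is still wrong.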

    Read the article
