Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • We're Back: I'm Here

    - by [email protected]
    After a busy fall and winter following Oracle OpenWorld 2009, Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly six years at Oracle working on the Oracle Fusion Middleware business, I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle...and at BEA Systems before that...I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector, Financial Services and more. Some of the acronyms are the same; there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than any of us can remember, but what I've seen over the past five months makes me start to believe that it's finally starting to happen.

    Read the article

  • The August '14 Oracle Linux Newsletter is Now Available

    - by Chris Kawalek
    The August 2014 edition of the Oracle Linux Newsletter is now available! Chock full of fantastic information, it's your one-stop shop for catching up on all things Oracle Linux. In this edition:
      - Oracle Linux 7 Now Available
      - Oracle Linux and Oracle Virtualization at Oracle OpenWorld 2014
      - Technology Preview of OpenStack Icehouse with Oracle Linux and Oracle VM Now Available
      - Using Ksplice as a Diagnostic Tool with Oracle Support
      - Hands-on Lab: How to Migrate from VMware and Red Hat to Oracle Linux and Oracle VM
      - Emirates Nuclear Energy Corporation Boosts Performance—Is Set to Cut Technology Ownership Costs by US$500,000 in Five Years
      - And much more!
    You can read the latest edition online right now or sign up to get it automatically delivered to your inbox.

    Read the article

  • What are the advantages and challenges of using HaXe over ActionScript 3?

    - by Arthur Wulf White
    I read this: http://webr3.org/blog/haxe/bitmapdata-vectors-bytearrays-and-optimization/ and these: http://stackoverflow.com/questions/5220045/what-are-the-pro-and-cons-of-using-haxe-over-actionscript-3 http://www.haxenme.org/developers/documentation/actionscript-developers/ I read that there are only two books, is that right? http://haxe.org/doc/book I also see that using FlashDevelop you can make an Adobe AIR project with HaXe and compile it to an exe. And I began to wonder, from a game-making perspective: I saw the example in the first link, but are there any additional performance advantages to using HaXe instead of AS3 for games development (for the web)?

    Read the article

  • Save the Date for the Oracle Storage Community Forum at Oracle OpenWorld

    - by Ritu Chhibber-Oracle
    Dear Partners, Come and meet Oracle's Top Storage Executives, Architects and Fellow Customers & Partners at the Oracle Storage Community Forum at Oracle OpenWorld on October 1, 2014. This special event will feature interactive sessions on Oracle's Application Engineered Storage strategy, product directions, and real-world customer implementations. Discover the possibilities, as only Oracle can co-engineer hardware with Oracle Database and applications to deliver extreme performance, dynamic automation, management efficiency and cost savings.
    Storage Forum at Oracle OpenWorld
    Wednesday, October 1, 2014
    3:30 p.m. - 5:00 p.m. Forum
    5:00 p.m. - 6:00 p.m. Reception
    Venue: Metreon – City View, 135 Fourth Street, Suite 4000, San Francisco, CA 94103
    For more details and to register, please click here.

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario: a small team of three developers, mostly in maintenance mode, with traditional ASP.NET, classic ASP, and .NET integration services and utilities for the company's third-party packages, plus a bunch of Java-based ColdFusion web applications, all under Visual SourceSafe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use Subversion instead of VSS. TFS was a foreign word and smelled of "high cost" and an "overcomplicated process". Since they had no preconceptions about the old TFS versions ('05 & '08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application lifecycle management product. So, how does a small team begin using TFS?

    1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It's up to you whether you want a clean start or need quick access to all the version notes and history of the bits.
    2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration with the current tracking system later, or replace it entirely (see the sketch at the end of this article). Believe me, this thing is powerful and can handle even the largest of help desks.
    3. With new construction, begin work with requirement and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don't fear: this is very intuitive, and MSDN has a ton of lesson-based labs and videos.
    4. For the Java developers, use the new Team Explorer Everywhere 2010 plugin (previously known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver.
    5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing, and naturally everyone will want workitems to help them organize the chaos!
    6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic "bug rate" reports and current backlog listings that can provide good information.

    Some notable explanations of TFS:

    Work Item Tracking and Project Management - A workitem represents the unit of work within the system, enabling tracking of all activities produced by a user, whether a developer, business user, project manager or tester. The properties of a workitem, such as linked changesets (checked-in code), who updated the data and when, and the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defined as a "bug", "requirement", "test case", or "change request". Workitems drive the work effort of the individual assigned to them and also play a key role in defining what needs to be done; they are the things the team needs to do to accomplish a goal.

    Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as VS Test Professional 2010 which allows a tester to facilitate manual tests, including fast-forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers can no longer respond to a bug with the line "cannot reproduce": with every test run, attachments are available including the recorded session, captured environment configurations and settings, screenshots, IntelliTrace (debugging history), and, in some cases when the lab manager is being used, a snapshot of the tested environment.

    Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done to the code by any developer has become much easier to picture, and issues easier to resolve.

    Team Build - Automate the compilation process, whether you need it to run whenever a developer checks in code, periodically (such as nightly builds for testers in the morning), or as manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails.

    Project Portal & Reporting - Provide management with a dashboard with insight into the project(s): "where are we" in each step of the way, including past iterations and the current burndown rate. Enabling this feature is easy, as it seamlessly interfaces with existing SharePoint implementations.
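    As a concrete illustration of step 2 above, here is a minimal sketch of entering a bug workitem programmatically with the TFS 2010 client object model. Treat it as an assumption-laden example rather than part of the original article: the collection URL, project name and field values are hypothetical, and the client assemblies referenced here ship with Team Explorer 2010.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class CreateBug
        {
            static void Main()
            {
                // Connect to the team project collection (URL is hypothetical).
                var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = tfs.GetService<WorkItemStore>();

                // Create a new Bug workitem in a (hypothetical) team project.
                Project project = store.Projects["Maintenance"];
                WorkItemType bugType = project.WorkItemTypes["Bug"];
                var bug = new WorkItem(bugType)
                {
                    Title = "Order export fails for empty carts",
                    Description = "Copied from the help desk ticket (details elided)."
                };
                bug.Save();  // Now queryable, linkable to changesets, and reportable.
            }
        }

    The same classes can later back an automated bridge from the existing help desk system, which is why starting with manual entry loses nothing.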

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Con Scholarship Winners!

    - by Most Valuable Yak (Rob Volk)
    A few weeks ago, AtlantaMDF offered scholarships for each of our upcoming pre-conference sessions at SQL Saturday #220. We would like to congratulate the winners!
      - David Thomas - SQL Server Security - http://sqlsecurity.eventbrite.com/
      - Vince Bible - Surfing the Multicore Wave: Processors, Parallelism, and Performance - http://surfmulticore.eventbrite.com/
      - Mostafa Maged - Languages of BI - http://languagesofbi.eventbrite.com/
      - Daphne Adams - Practical Self-Service BI with PowerPivot for Excel - http://selfservicebi.eventbrite.com/
      - Tim Lawrence - The DBA Skills Upgrade Toolkit - http://dbatoolkit.eventbrite.com/
    Thanks to everyone who applied! And once again we must thank Idera for their generous sponsorship, and Bobby Dimmick (w|t) and Brian Kelley (w|t) of Midlands PASS for the time and effort they put into judging all the applicants. Don't forget, there's still time to attend the pre-cons on May 17, 2013! Click on the Eventbrite links for more details and to register!

    Read the article

  • Enterprise Library 5.0 Released…

    - by Shawn Cicoria
    The announcement is up here: http://blogs.msdn.com/agile/archive/2010/04/20/microsoft-enterprise-library-5-0-released.aspx Some of the things on the list of what's new and improved:
      1. Redesign of the configuration tool – heck, that thing had looked the same since the bits were acquired from Avanade quite a while back – good to see the changes.
      2. Logging performance – this has been one of the areas we all need (a usage sketch follows below).
      3. Configuration improvements: XSD enabled, IntelliSense (yeah!)
      4. Oh, and .NET 4.0 support :)
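    For anyone new to the Logging Application Block mentioned in point 2, here is a minimal usage sketch. It assumes the Enterprise Library 5.0 logging assemblies are referenced and that a category named "General" exists in the application configuration; both the category name and the message are my own illustrative assumptions, not taken from the announcement.

        using Microsoft.Practices.EnterpriseLibrary.Logging;

        class LoggingDemo
        {
            static void Main()
            {
                // Build a log entry and hand it to the configuration-driven writer.
                var entry = new LogEntry
                {
                    Message = "Order pipeline started",
                    Severity = System.Diagnostics.TraceEventType.Information
                };
                entry.Categories.Add("General"); // hypothetical category from config
                Logger.Write(entry);             // routed to whatever listeners config defines
            }
        }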

    Read the article

  • Adding my face to my web-site in Google's search result

    - by Roman Matveev
    I'm trying to add a rich snippet to the template of my future web-site. The data format is Review, and I used microdata formatting to add all the necessary information to the web page. The Structured Data Testing Tool delivered the rating, author information and review date. However, my face image does not appear, and the sections related to authorship are empty, even though I did everything recommended to link my Google+ profile to the web-site. Did I do something wrong? Or will I never be able to see my face in the testing tool, with it showing up only in the real SERPs?

    Read the article

  • Best Practices to Accelerate Oracle VM Server Deployments

    - by Honglin Su
    The IOUG (Independent Oracle User Group) Virtualization SIG is hosting a webcast on the best practices of Oracle VM server virtualization. The upcoming event is scheduled for July 11, with a focus on Oracle VM Server for SPARC. Register here. Areas addressed will include recommended practices for installation, maintenance, performance, and reliability. Topics will include sizing, resource allocation, multiple I/O domain configurations for availability, secure live migration, selection of I/O backends, and I/O virtualization. To learn the best practices for Oracle VM Server for x86, watch the session replay here.

    Read the article

  • Windows Phone 7 Development – Useful Links

    - by David Turner
    Here are some excellent links for anyone developing for Windows Phone 7:
      - J.D. Meier's Windows Phone Developer Guidance Map – this is immense. Also check out the Silverlight version
      - Justin Angel's site – some really great articles on unlocked ROMs, automation and Continuous Integration
      - Windows Phone 7 Development Best Practices Wiki
      - Jeff Blankenburg's 31 Days of Windows Phone 7
      - This post of links to sample code for Windows Phone
      - Tim Heuer's blog, particularly this post of Tips and Tricks
      - Kevin Marshall's blog, particularly the epic WP7 Development Tips Part 1 post
      - Code Samples for Windows Phone on MSDN
    If you have unlocked your phone for development, you can use the WPConnect tool to connect to the device rather than using the Zune client. I found it useful to pin a shortcut to WPConnect in my Start Menu. The performance counters displayed when you debug your app on a device are useful for seeing things like frame rate and memory usage; this page on MSDN explains what the numbers mean, and Jeff Blankenburg covers this in more detail on his blog (a snippet for turning the counters on follows below). I also came across this set of links to tutorials recently, which looks very useful. Creating Windows Phone 7 Application and Marketplace Icons: http://expression.microsoft.com/en-us/gg317447.aspx
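    On those performance counters: in a Silverlight-based Windows Phone 7 project they can be switched on from code. Here is a minimal sketch along the lines of what the standard project template generates; treat it as illustrative rather than canonical.

        using System.Windows;

        public partial class App : Application
        {
            public App()
            {
                InitializeComponent();

                if (System.Diagnostics.Debugger.IsAttached)
                {
                    // Overlay frame rate and memory counters while debugging.
                    Application.Current.Host.Settings.EnableFrameRateCounter = true;

                    // Optionally highlight the regions redrawn in each frame.
                    Application.Current.Host.Settings.EnableRedrawRegions = true;
                }
            }
        }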

    Read the article

  • Oracle University New Courses (Week 10)

    - by swalker
    Last week, Oracle University published the following new courses (or new versions thereof):
    Database
      - RAC & Grid Infrastructure for Oracle Solaris System Administration (1 day)
      - Oracle Database 11g: Performance Tuning (Training On Demand)
    Development Tools
      - Oracle Database: Program with PL/SQL (Training On Demand)
    MySQL
      - MySQL for Database Administrators (Training On Demand)
    Fusion Middleware
      - Oracle WebCenter Portal 11g: Build Portals With Spaces (3 days)
      - Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
      - Oracle BPM 11g Modeling (3 days)
    Business Intelligence & Datawarehousing
      - Oracle BI Applications 7.9.6: Implementation for Oracle EBS (4 days)
      - Oracle BI Applications 7.9.6: Implementation for Siebel CRM (4 days)
      - Oracle BI 11g R1: Build Repositories (Training On Demand)
    Fusion Applications
      - Fusion Applications: Extend Applications with ADF (5 days)
    E-Business Suite
      - R12.x Extend Oracle Applications: Building OA Framework Applications (Training On Demand)
    PeopleSoft
      - PeopleSoft Integration Tools Rel 8.50 (Training On Demand)
    If you would like further details or information about course dates, simply contact your local Oracle University team.

    Read the article

  • Virtualising Windows 8 on OS X with VMware Fusion above the 8GB limit

    - by mbrit
    tl;dr - don't. VMware Fusion has an 8GB limit, so on my 16GB MacBook Pro (which is an old one with a 16GB memory upgrade) I wanted to use more than 8GB. You can fudge it by editing the .vmx file and changing the memsize value to whatever you want. However, it turns out that if you do this, the performance of the whole machine turns into a mess. It beachballs all over the place, both in VMware and outside it, with constant hanging of VMware and the fan running hot the whole time. This is just an anecdotal view, but having spent a week and a half with the machine unusable, thinking I was going to have to go back to Win7, turning it back down to exactly 8GB seems to be working much more stably.

    Read the article

  • ADF Sessions at RMOUG this week

    - by shay.shmeltzer
    If you are attending the RMOUG conference this week, you might be interested in checking out some of the sessions we are doing about Oracle ADF.
    Lynn is delivering:
      - The Fusion Development Platform - Wed at 9:00 (404)
      - Put Your Good Taste Into Action: How to Skin ADF Faces Rich Client Applications - Wed at 5:00 (4 c/d)
    Shay is delivering:
      - From SQL to Rich Web Data Visualization - The Fast Route - Thu at 9:00 (404)
      - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way - Thu at 10:15 (404)
    There are also lots of ADF-related sessions delivered by customers and partners, including:
      - Drinking the Kool-Aid - My Journey to Becoming an ADF Believer
      - Case Study: Performance Tuning New ADF Applications Using Oracle Application Testing Suite (ATS)
      - Oracle ADF & JDeveloper: Coming of Age
      - Hello Worldwide Web: Your First JSF in JDeveloper
    For more details, see the schedule here. If you are using ADF already, please drop by and let us know what you think. We are always looking for user feedback.

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules. I've often heard similar phrases around Scrum that clue me in to someone who doesn't understand Scrum. The phrases go something like this:

    "We don't do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach." (i.e. Work is Prison, and I'm the Warden!)

    "I love Agile. Agile lets us do whatever we want!" (Cowboy Agile?)

    "We're Agile, but we use a process that I've created." (Cowboy Agile?)

    All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.

    Scrum and Waterfall Compared

    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.

    Waterfall: There are usually only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn't count.

    Waterfall: Nobody knows the task you're working on.
    Scrum: Everyone knows what you're working on, whether or not you're making progress, and how much longer you think it's going to take, in hours.

    Waterfall: There is little requirement to show working code. Prototypes are ok.
    Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren't held accountable.
    Scrum: Testing is part of the team. If the testers don't accept the story as complete, the team can't count it. Complete means that the story's functionality works as designed, and the team can't have any open defects on the story.

    Waterfall: Velocity is rarely truly measured and is difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind "I did it just like the spec doc told me to and made the screen exactly like the picture."
    Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and business analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper management is often surprised: "You told me things were going well two months ago!"
    Scrum: Management receives updates at the end of every iteration showing exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don't occur. Email is a primary form of communication.
    Scrum: Teams coordinate every single day and use other high-bandwidth communication channels to make sure they're making progress. Email is used only as a last resort; instead, team members stand up, walk to each other, and talk, face to face. If that's not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it's at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That's probably enough for now. As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity: team members can take vacations and the rest of the team simply picks up the slack, whereas with Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more of every team member, and as a result, Scrum teams outperform non-Scrum teams working 60-hour weeks.

    Recommended Reading

    Everyone considering Scrum should read Mike Cohn's excellent book, User Stories Applied.

    Read the article

  • ArchBeat Link-o-Rama for 2012-04-04

    - by Bob Rhubart
    Is This How the Execs React to Your Recommendations? | blogs.oracle.com
    "Well then, do your homework next time!" advises Rick Ramsey, and offers a list of Oracle Solaris 11 resources that just might make your next encounter a little less humiliating.
    WebLogic Server Performance and Tuning: Part I - Tuning JVM | Gokhan Gungor | blogs.oracle.com
    A detailed how-to post from Gokhan Gungor.
    How to deal with transport level security policy with OSB | Jian Liang | blogs.oracle.com
    Jian Liang shares "a use case for Oracle Service Bus (OSB) 11gPS4 to consume a Web Service which is secured by HTTP transport level security policy."
    Thought for the Day
    "Simple things should be simple and complex things should be possible." — Alan Kay

    Read the article

  • Apache on Mac Mavericks issue [migrated]

    - by Michael
    Trying to run Apache so that I can create a testing server on my Mac. When I start Apache it starts, but it doesn't run (no connection to localhost). I'll paste the shell output below; you'll see that after starting there are no httpd processes. I also did a check to show you what was running on my port 80, though I don't entirely know what it means.
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
    Michaels-MacBook-Pro-3:~ michaelramos$ ps aux | grep httpd
    michaelramos 348 0.0 0.0 2442000 624 s000 S+ 8:51AM 0:00.00 grep httpd
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
    org.apache.httpd: Already loaded
    Michaels-MacBook-Pro-3:~ michaelramos$ sudo lsof -i ':80'
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    ocspd 96 root 18u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
    ocspd 96 root 20u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
    ocspd 96 root 21u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
    ocspd 96 root 23u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)

    Read the article

  • Hear about Oracle Supply Chain at Pella, Apr 27-29 '10

    - by [email protected]
    Oracle Customer Showcase, April 27-29, 2010, featuring Pella Corp.: Delivering Greater Customer Value - "Discovering the Lean Value Chain". Pella is once again hosting Oracle customers at a mega-reference event in Pella, Iowa, on April 27-29. The agenda features a cross-stack set of topics and issues, including strategies for delivering customer value, improving the customer experience, Value Chain Planning / Manufacturing / Enterprise Performance Management, and Lean practices. Several executives will keynote, including Pella CIO Steve Printz. The event includes a demo grounds, round-table discussion groups, plant tours, and networking opportunities.

    Read the article

  • How to install a Logitech c310 webcam?

    - by Teja
    I bought a new Logitech C310 webcam. I don't know how to install a webcam on Ubuntu, so I am trying to install its software through the terminal window:
    wine /media/LWS_2_0/Setup.exe
    but it shows:
    fixme:ole:TLB_ReadTypeLib Header type magic 0x00905a4d not supported.
    err:ole:TLB_ReadTypeLib Loading of typelib L"Z:\\media\\LWS_2_1\\MSetup.exe" failed with error 0
    and it opens a window with the message "We have detected you have connected your webcam to a USB 1.1 port; for the best performance and full feature set, we suggest using a USB 2.0 port". I tried all the USB ports available on my system, and it shows the same error every time. Please tell me how to install this webcam on Ubuntu 11.10. Thanks for reading.

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain: I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin once the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks and activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form of a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The Roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different values, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the arabic: each "digit" can be taken at face value, no multiplication with a base required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that Roman "digits" are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits" themselves. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some "digits" get subtracted. Here's the list of Roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the Roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three phase process:
      1. Find all the factors
      2. Translate the factors found
      3. Compile the Roman representation

    Translation is just a look-up. Finding, though, needs some calculation:
      1. Find the highest remaining factor fitting in the value
      2. Remember it and subtract it from the value
      3. Repeat with the remaining value and remaining factors

    Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:
      - the translation
      - the compilation
      - finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check whether an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check whether an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check whether a number consisting of the same factor twice (e.g. 2000=[M,M]) is processed correctly. Finally check whether an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation yet, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests most quickly, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level; I just have to implement each of them correctly. That's what I do now – one by one. I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle, and it serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation satisfying the test is as simple as possible – right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once from the tests for doing it multiple times.) In this partition I just need a single test case, I guess; stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency: some "see" the final code right away before their mental eye, others need to work their way towards it. Having two tests is what I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation, so a single test is enough, I guess. Normally I would not even have bothered to write that one, because the implementation is so simple – I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the faked function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there's always a matching factor – which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor. Interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care; I did the most obvious thing. But I also find this loop very simple – even simpler than the recursion of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, though, because I wrote the code in top-down fashion: I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet, but this way I saved myself from refactoring tediousness. At the end some cleanup is required after all – but maybe in a different way than you would expect. That's why I rather call it "cleanup" than refactoring. First I remove duplication: there are two places where factors are defined, in Translate() and in Find_factors(), so I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value and they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". Test coverage as reported by NCrunch is 100%. (A compact reconstruction of the final code appears at the end of this article.)

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: First, encapsulation of the implementation details was not compromised; private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them. Second, I was able to write focused tests for small aspects of the solution; there was no need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
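    Since the code in the original post exists only as screenshots, here is a compact C# reconstruction of the final solution as the article describes it. The class and method names (ArabicRomanConversions, ToRoman, Find_factors, Translate) follow the post; the exact bodies are my own sketch, not the author's verbatim code.

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The Roman "digits" (factors) from the post, highest first.
            private static readonly int[] Factors =
                { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };

            private static readonly Dictionary<int, string> Digits =
                new Dictionary<int, string>
                {
                    { 1000, "M" }, { 900, "CM" }, { 500, "D" }, { 400, "CD" },
                    { 100, "C" },  { 90, "XC" },  { 50, "L" },  { 40, "XL" },
                    { 10, "X" },   { 9, "IX" },   { 5, "V" },   { 4, "IV" },
                    { 1, "I" }
                };

            public static string ToRoman(int arabic)
            {
                var factors = Find_factors(arabic);  // e.g. 1954 -> 1000, 900, 50, 4
                var digits = Translate(factors);     // -> "M", "CM", "L", "IV"
                return Compile(digits);              // -> "MCMLIV"
            }

            private static IEnumerable<int> Find_factors(int value)
            {
                var found = new List<int>();
                foreach (var factor in Factors)
                    while (value >= factor)  // highest remaining factor that fits
                    {
                        found.Add(factor);   // remember it...
                        value -= factor;     // ...and subtract it from the value
                    }
                return found;
            }

            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(factor => Digits[factor]); // just a look-up
            }

            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

    ArabicRomanConversions.ToRoman(1954) yields "MCMLIV", matching the factor example worked through in the post.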

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following:
      - Windows XP/Vista/7
      - 1GB RAM
      - 200 MB Storage
    How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk, then that is a hard requirement). Aside from those situations, many times for things like RAM or processor it turns out that more/faster is better, with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process, testing various configurations until they find the lowest settings with acceptable performance? My instinct says it should be the latter, but it is often the former in practice.

    Read the article

  • Oracle Applications Day 2012 - Experience the Global Innovation of Management Applications

    - by antonella.buonagurio
    On October 10 in Milan and October 17 in Rome, the Oracle Applications Days took place, dedicated to the community of Oracle customers and partners. The two days saw the participation of more than 400 people, who shared their experiences and knowledge in the applications space. During the plenary session, all the news about Oracle Applications, and in particular Oracle Fusion Applications, was presented, while over the two days more than 20 customers spoke about how they use Oracle solutions strategically and successfully. Thanks to the "Partner Instant Workshop" initiative, 15 business partners met directly with customers to discuss the hottest topics of the moment. If you could not attend the event, or if you want to relive those moments, below you will find the plenary presentation, while clicking on each parallel session title brings you to the respective presentation:
      - Innovation for Human Resources
      - Performance Management Excellence
      - Empower Applications with Technology (held only in Milan)
      - Applications for Public Sector (held only in Rome)
      - Next Generation Global Operations
      - Customer Experience Revolution

    Read the article

  • Interfaces Reference Model available

    - by ACShorten
    When implementing Oracle Utilities Application Framework based products, you can implement other Oracle technologies to augment your solution. A whitepaper is now available that outlines all the technology integrations possible with the various versions of the Oracle Utilities Application Framework, and the implementations of other Oracle technologies that address customer requirements in association with Oracle Utilities Application Framework based products. The whitepaper covers a vast range of products, including:
      - Oracle Fusion Middleware
      - Oracle SOA Suite
      - Oracle Identity Management Suite
      - Oracle Exadata and Oracle Exalogic
      - Oracle VM
      - Database options including Real Application Clusters, Real Application Testing, Data Guard/Active Data Guard, Compression, Partitioning, Database Vault, Audit Vault, etc.
    The whitepaper contains a summary of the integration solution possibilities and links to further information, including product specific interfaces. The whitepaper is available from My Oracle Support under KB Id: 1506855.1.

    Read the article

  • OpenGL-ES: clearing the alpha of the FrameBufferObject

    - by MrDatabase
    This question is a follow-up to Texture artifacts on iPad. How does one "clear the alpha of the render texture frameBufferObject"? I've searched around here, StackOverflow and various search engines, but no luck. I've tried a few things... for example calling glClear(GL_COLOR_BUFFER_BIT) at the beginning of my render loop... but it doesn't seem to make a difference. Any help is appreciated since I'm still new to OpenGL. Cheers! P.S. I read on SO and in Apple's documentation that glClear should always be called at the beginning of the render loop. Agree? Disagree? Here's where I read this: http://stackoverflow.com/questions/2538662/how-does-glclear-improve-performance
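    For what it's worth, a standard way to clear only the alpha channel of the currently bound framebuffer is to mask off the RGB channels before clearing. Here is a minimal sketch, written with OpenTK's C# bindings purely for illustration (the question's platform is iOS/GLES, where the equivalent C calls carry the same names):

        using OpenTK.Graphics.OpenGL;

        static class AlphaClear
        {
            // Clears only the alpha channel (to 0) of the currently bound
            // framebuffer, leaving RGB untouched.
            public static void ClearAlphaOnly()
            {
                GL.ColorMask(false, false, false, true); // write alpha only
                GL.ClearColor(0f, 0f, 0f, 0f);
                GL.Clear(ClearBufferMask.ColorBufferBit);
                GL.ColorMask(true, true, true, true);    // restore color writes
            }
        }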

    Read the article

  • Juju bootstrap fails with "Temporary failure in name resolution" using Amazon AWS

    - by Will
    I have followed the instructions over at https://juju.ubuntu.com/docs/config-aws.html to try and set up a juju environment. It all seemed to set up alright: SSH keys, generate config, repository, adding access ID and secret keys to the environments.yaml file. Although my key file from the AWS IAM management console was called credentials.csv rather than rootkey, I couldn't find the link described in the documentation. When I give the command juju bootstrap from the testing page, it fails, giving me the error:
    juju bootstrap
    ERROR Get https://s3-us-west-1.amazonaws.com/juju-gobblygookmynukmbersinhere/provider-state: lookup s3-us-west-1.amazonaws.com: Temporary failure in name resolution
    (Note: I just replaced my numbers for this posting; my actual terminal has my numbers in.) This is my first attempt at any EC2 work, so I have gone in and created new IAM profiles. What have I done wrong? Any help would be great. I think I'm in over my head!

    Read the article

  • Ray Tracing Shadows in deferred rendering

    - by Grieverheart
    Recently I programmed a raytracer for fun and found it beautifully simple how shadows are created, compared to a rasterizer. Now I couldn't help but wonder whether it would be possible to implement something similar for ray tracing of shadows in a deferred renderer. The way I thought this could work is: after drawing to the gbuffer, in a separate pass and for each pixel, calculate rays to the lights and draw them as lines of unique color together with the geometry (with color 0). The lines will be cut off if there is occlusion, and this fact could be used in a fragment shader to calculate which rays are occluded. I guess there must be something I'm missing; for example, I'm not sure how the fragment shader could save the occlusion results for each ray so that they are available at the pixel at the ray's origin. Has this method been tried before, is it possible to implement it as I described, and if yes, what would be the performance drawbacks of calculating shadows this way?

    Read the article
