Search Results

Search found 9952 results on 399 pages for 'more details'.

Page 95 of 399

  • JavaOne 2011: Content review process and Tips for submissions

    - by arungupta
    The Technical Sessions, Birds of a Feather sessions, Panels, and Hands-on Labs (basically all the content delivered at JavaOne) form the backbone of the conference. At this year's JavaOne you'll have access to rock star speakers, the chance to engage with luminaries in the hallways, and a beer (or two) with community peers in designated areas. Even though the conference runs Oct 2-6, 2011, and will be bigger and better than last year's, the Call for Papers submission and review/selection process started much earlier. In previous years I've participated in the review process, and this year I was honored to serve as co-lead for the "Enterprise Service Architecture and Cloud" track with Ludovic Champenois. We had a stellar review team with an equal mix of Oracle and external community reviewers. The review process can be overwhelming, with the reviewers going through multiple voting iterations on each submission to ensure that the selected content is the BEST of the submitted lot. Our ultimate goal was to ensure that the content best represented the track and, most importantly, would draw interest and excitement from attendees. As always, the number and quality of submissions were superb, making for a truly challenging (and rewarding) experience for the reviewers. As co-lead I tried to apply a fair and balanced process in the evaluation of content in my track. Here are the key steps followed by all track leads:
    - Vote on sessions - Each reviewer is required to vote on the sessions on a scale of 1-5 and also provide a justifying comment.
    - Create buckets - Divide the submissions into different buckets to ensure a fair representation of different topics within a track. This ensures that if a particular bucket gets higher votes, the track is not exclusively skewed towards it.
    - Top 7 - The review committee provides a list of the top 7 talks that can be used in promotional material by the JavaOne team. Generally these talks are easy to identify and a consensus is reached on them fairly quickly.
    - First cut - Each track is allocated a total number of sessions (including panels), BoFs, and Hands-on Labs that can be approved. The track leads then create the first cut of the approvals using the cast votes coupled with their prior experience in the subject matter. In our case, Ludo and I have been attending/speaking at JavaOne (and other popular Java-focused conferences) for double-digit years.
    - The Grind - The first cut is then refined, and refined, and refined again using multiple selection criteria such as sorting on the bucket, speaker quality, topic popularity, cumulative vote total, and individual vote scale. The sessions that don't make the cut are reviewed again as well, to decide whether they should replace one of the selected sessions or serve as a potential alternate.
    I would like to thank the entire Java community for all the submissions, and many thanks to the reviewers who spent countless hours reading each abstract, voting on them, and helping us refine the list. I think approximately 3-4 hours, cumulatively, were spent on each submission to reach an evaluation, especially for the borderline cases. We gave our recommendations to the JavaOne Program Committee Chairperson (Sharat Chander), and accept/decline notifications should show up in submitter inboxes in the next few weeks.
    Here are some points to keep in mind when submitting a session to JavaOne next time:
    - JavaOne is a technology-focused conference, so any product, marketing, or seemingly market-ish talk is put at the bottom of the list. Oracle OpenWorld and Oracle Develop are better options for submitting product-specific talks.
    - Make your title catchy. Remember that attendees are more likely to read the abstract if they like the title.
    - We try our best to recategorize a talk to a different track if needed, but please ensure that you are filing it in the right track so that all the right eyeballs are looking at it. Also, it does not hurt to mark an alternate track if your talk meets the criteria.
    - Coordinate within your team before submitting - multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely on your "Google presence" and/or the review committee's prior knowledge of the speaker.
    - The reviewers may not know you or your product at all, and you get 750 characters to pitch your idea. Make sure to use all of them, down to the 750th character.
    - Read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also check whether the abstract contains redundant information that the reviewers don't need.
    - There are additional sections that allow you to share information about the speaker and a presentation summary. Use them to blow your own horn and add any other relevant details. Please don't say "call me at xxx-xxx-xxxx to find out the details" :-)
    The review committee enjoyed reviewing the submissions, and we certainly hope you'll have a great time attending them. Happy JavaOne!

    Read the article

  • In-Store Innovations with Oracle Retail 14

    - by Marie-Christin Hansen-Oracle
    In this latest video from our demo series filmed at the 2014 NRF BIG Show in New York, Master Principal Consultant Rachel Staniland details innovations in Oracle Retail Stores Solutions. Oracle Retail Stores Solutions provide a brand platform and enable true multichannel retailing. The solution gives retailers improved visibility into store inventory, which both reduces store operating costs and improves the level of customer service offered in-store through store associates. In the video below, Rachel Staniland talks about Oracle Retail’s new tablet POS, coming out of the Oracle Retail 14 release, as well as innovations made across the Store Inventory Management and Point-of-Service solutions. Access more information on Oracle Retail Stores Solutions.

    Read the article

  • Implement 2x speed in tower of defense type game

    - by Siddharth
    I am currently developing a tower defense game and I want to implement a 2x feature. The game usually runs at 1x, the normal speed of the game. Here is what 1x and 2x mean: 1x is the normal speed of the game; at 2x the game objects move at double speed, so the user experiences faster gameplay. I want to implement such functionality for my game. The functionality I want can be seen in the game Medieval Castle that is available on the market: https://play.google.com/store/apps/details?id=com.nova.root&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5ub3ZhLnJvb3QiXQ.. The screenshot also shows the 1x and 2x buttons in that game. I think that for 2x speed I have to increase the speed of each object in the game. Can any member please help with how to approach this implementation? Even just the idea would be enough for me.
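    One common way to get that behaviour without touching every object's speed individually is to scale the elapsed time that the update loop hands to each object. Below is a minimal sketch of that idea with hypothetical class and method names, assuming a Java game loop that updates objects with a per-frame delta (adapt to whatever engine you use):

        // Hypothetical helper: all game objects are updated with a scaled delta.
        public class GameClock {
            private float speedMultiplier = 1.0f; // 1x = normal, 2x = fast-forward

            public void setSpeed(float multiplier) {   // called by the 1x/2x button
                this.speedMultiplier = multiplier;
            }

            // Call once per frame with the real elapsed seconds.
            public float scaledDelta(float realDeltaSeconds) {
                return realDeltaSeconds * speedMultiplier;
            }
        }

        // In the update loop, every enemy, tower and projectile advances by the
        // scaled value, e.g. enemy.update(clock.scaledDelta(dt)); at 2x each
        // object covers twice the distance and its timers expire twice as fast.

    If some parts of the game (UI animations, background music) should not speed up, keep feeding those the unscaled delta.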

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules. I’ve often heard similar phrases around Scrum that clue me in to someone who doesn’t understand Scrum. The phrases go something like this:
    - “We don’t do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach.” (i.e. Work is Prison, and I’m the Warden!)
    - “I love Agile. Agile lets us do whatever we want!” (Cowboy Agile?)
    - “We’re Agile, but we use a process that I’ve created.” (Cowboy Agile?)
    All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.
    Scrum and Waterfall Compared
    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.

    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on.
    Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.

    Waterfall: Usually there are only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates.
    Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.

    Waterfall: Partially completed work is normal.
    Scrum: Partially completed work doesn’t count.

    Waterfall: Nobody knows the task you’re working on.
    Scrum: Everyone knows what you’re working on, whether or not you’re making progress, and how much longer you think it’s going to take, in hours.

    Waterfall: Little requirement to show working code. Prototypes are ok.
    Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.

    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren’t held accountable.
    Scrum: Testing is part of the team. If the testers don’t accept the story as complete, the team can’t count it. Complete means that the story’s functionality works as designed. The team can’t have any open defects on the story.

    Waterfall: Velocity is rarely truly measured and difficult to evaluate.
    Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.

    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind “I did it just like the spec doc told me to and made the screen exactly like the picture.”
    Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and Business Analysts are part of the team and must do their work in iterations slightly ahead of the developers.

    Waterfall: Upper Management is often surprised. “You told me things were going well two months ago!”
    Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.

    Waterfall: Status meetings are rare or don’t occur. Email is a primary form of communication.
    Scrum: Teams coordinate every single day with each other and use other high-bandwidth communication channels to make sure they’re making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that’s not possible, they pick up the phone.

    Waterfall: If someone asks what happened, it’s at the end of a lengthy development cycle measured in months, and nobody really knows why it happened.
    Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.

    That’s probably enough for now. As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity. Team members can take vacations and the rest of the team simply picks up the slack. With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and, as a result, Scrum teams outperform non-Scrum teams working 60 hour weeks.
    Recommended Reading
    Everyone considering Scrum should read Mike Cohn’s excellent book, User Stories Applied.
    Technorati Tags: Agile,Scrum,Waterfall

    Read the article

  • Ubuntu apt-get install linux-image

    - by Karl Kloppenborg
    I'd like someone to enlighten me as to what exactly goes on with aptitude when I want a kernel. As we all know, there are pretty much the following kernel options:
    linux-image-generic
    linux-image-server
    linux-image-virtual
    This morning I did an install and it had linux-image-generic on it, so I ran the following:
    apt-get -y remove linux-image-*
    This removed all my kernels as expected. I followed up by running:
    apt-get install linux-image-virtual
    It says it installed linux-image-server!? Am I missing something here? I checked twice and it did it twice; however, if I manually select a kernel (in my instance I used linux-image-2.6.35-30-virtual) it will install linux-image-virtual. This seems rather strange to me.
    Details: Running Ubuntu 9.10
    Am I missing something? :)
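    One way to see for yourself why apt picks a particular kernel is to inspect what each meta-package depends on and to dry-run the install first. A hedged sketch (the package names are the meta-packages mentioned above; the output will vary by release):

        # Show which concrete kernel packages each meta-package pulls in
        apt-cache depends linux-image-virtual
        apt-cache depends linux-image-server

        # Simulate the install (-s) to see exactly what apt would choose, without changing anything
        apt-get -s install linux-image-virtual

    Note also that apt-get remove linux-image-* can be expanded by the shell against files in the current directory before apt ever sees it, so quoting the pattern ('linux-image-*') makes the command more predictable.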

    Read the article

  • New release of "OLAP PivotTable Extensions"

    - by Luca Zavarella
    For those who are not familiar with this add-in, OLAP PivotTable Extensions adds features of interest to Excel 2007 or 2010 PivotTables pointing to an OLAP cube in Analysis Services. One of the features I like very much is being able to see the MDX query associated with the pivot currently used in Excel. You can find all the details here: http://olappivottableextend.codeplex.com/ A new version of the add-in (0.7.4) was recently released; it does not introduce any new features, but it fixes a significant bug: release 0.7.4 now properly handles languages. International users who run a different Windows language than their Excel UI language may receive an error message when they double-click a cell and perform drillthrough, which reads: "XML for Analysis parser: The LocaleIdentifier property is not overwritable and cannot be assigned a new value". This error was caused by OLAP PivotTable Extensions in some situations, but release 0.7.4 fixes the problem. Enjoy!

    Read the article

  • Error in mounting HDD

    - by Vikramjeet
    I am getting the following error whenever I mount my external HDD. It was working before, and then I opted to safely remove the drive. Now it's giving me the following error:
    Error mounting: mount exited with exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x43425355 size: 4096 usa_ofs: 8850 usa_count: 65535: Invalid argument
    Actual VCN (0x800006009000000) of index buffer is different from expected VCN (0x0).
    Failed to mount '/dev/sdb1': Input/output error
    NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.
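    If rebooting into Windows to run chkdsk /f (as the message asks) isn't immediately possible, one commonly tried stopgap is the ntfsfix tool that ships with ntfs-3g. This is only a sketch of that idea, not a substitute for chkdsk: ntfsfix repairs a few common inconsistencies, resets the NTFS journal, and schedules a full check for the next Windows boot:

        sudo ntfsfix /dev/sdb1
        # then try a read-only mount first, to limit the risk of further damage
        sudo mount -t ntfs-3g -o ro /dev/sdb1 /mnt

    If the drive still fails to mount, back up what you can and consider that the hardware itself may be failing.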

    Read the article

  • A tale of two dev accounts

    - by TechTwaddle
    Note: I am currently in the process of relocating my blog from http://www.geekswithblogs.net/techtwaddle to my new address at http://www.techtwaddle.net I suggest you point your feed readers to the new address as I slowly transition to my new shared-hosted, ad-free wordpress blog :)
    You probably remember my rant from a while back about my windows mobile developer account having problems with the new AppHub; well, there have been a few developments and I thought I should share them with you. First up, the issue isn't fixed yet. I still cannot log in to AppHub using my windows mobile 6.x developer account and can't view details of my Minesweeper app. Who knows how many copies it's sold. I had numerous exchanges with Microsoft's support team on the AppHub forums and via email as well (support ticket), but somehow we never managed to get to the root of it. In fact, the support team itself grew so tired of the problem that they suggested I create a new dev account. I grew impatient, and it was really frustrating to have an app ready for submission but not being able to do anything with it. Eventually, the frustration had to show somewhere, and it was on this forum thread - Prabhu Kumar in reply to Nick:
    "Nick, I feel for you and totally understand the frustration. Since day one I have been getting the XBOX profile linking error, 'We encountered an issue connecting your App Hub account with your Xbox Live Profile. Please visit Xbox.com and update your contact information. After you have updated your contact information, please return to the App Hub (https://users.create.msdn.com/Register) to continue.' I have an app published on the Windows Mobile 6.x marketplace since Aug, now I can't view the details of this app. I completed work on my WP7 application 1.5 months ago and the first version is ready for submission to marketplace, only if I can login. You can imagine how frustrating all this can be, the issue has taken far too long to be fixed, this has drained all my motivation. I have exchanged numerous mails with Microsoft support team on this issue, and from the looks of it they really are trying their best, unfortunately, their best is not good enough for some of us. During the first week of December I was told that there would be an update happening to AppHub around mid of December. I was hoping that the issue would be fixed but it wasn't. After the update the only change I notice is that the xbox.com link on the error page now takes me to the correct link. Previously, this link used to take me to the 404 page you mentioned above. Out of desperation, I am now considering creating another developer account on AppHub with a new live id, even this I am not 100% sure will work. I asked the support team when the next update to AppHub was planned and got this reply, 'We do not have a release date to announce for the next App Hub update at this time. In regards to the login issue you are experiencing at this point the only solution would be to create a new account with a different live ID but make sure to go to xbox.com before hand to get all the information in order on that side.' I know it's an extra $99, and not that I can't afford it, but it doesn't feel right and I shouldn't have to be doing it in the first place. I have lost all hope of this issue being resolved."
    I went ahead and created a new dev account. The ID verification was in progress when Shaun Taulbee of Microsoft, who has been really helpful in the forums, replied saying, "If you find it necessary to pay again to create a new account due to a Microsoft problem, send in a support request asking for a refund and we'll review it (and likely approve it given the circumstances)." The thought of a refund made me happy, but I had my doubts. So once my second account was verified by GeoTrust, I applied for a refund through the developer dashboard by creating a support ticket. A couple of days later I got an email from Microsoft saying that the refund had been approved! Yay! A few days later the refund showed up on my bill. Well, thank you Microsoft, it means a lot. I am glad it's over now. The new account works flawlessly. I would still like to get my first account working again and look at my app numbers for Win Mo 6.x, and probably transfer the credits to the new account somehow, but I'll save that for another day. If you've had similar problems with AppHub and had to create a new account to submit your app, I suggest you contact the support team and get your dollars refunded!

    Read the article

  • Save the Date for the Oracle Storage Community Forum at Oracle OpenWorld

    - by Ritu Chhibber-Oracle
    Dear Partners, Come and meet Oracle's Top Storage Executives, Architects and Fellow Customers & Partners at the Oracle Storage Community Forum at Oracle OpenWorld on October 1, 2014. This special event will feature interactive sessions on Oracle's Application Engineered Storage strategy, product directions, and real-world customer implementations. Discover the possibilities, as only Oracle can co-engineer hardware with Oracle Database and applications to deliver extreme performance, dynamic automation, management efficiency and cost savings.
    Storage Forum at Oracle OpenWorld
    Wednesday, October 1, 2014
    3:30 p.m. - 5:00 p.m. Forum
    5:00 p.m. - 6:00 p.m. Reception
    Venue: Metreon – City View, 135 Fourth Street, Suite 4000, San Francisco, CA 94103
    For more details and to register, please click here.

    Read the article

  • Prevent nautilus showing partition mounted in bash script

    - by bcbc
    In my bash script I mount partitions, check them, copy files to them, and unmount. When the script mounts the partition, Nautilus pops up with a window showing the partition and stealing focus. This is something I want to avoid. Note: I know I can change this behaviour in System Settings, Details, Removable Media, "Never prompt or start programs on media insertion", but I don't want to change the behaviour for the case where e.g. a USB stick is plugged in; I just want to prevent it in my bash script. Actually this auto display doesn't seem consistent. If I run the exact same command from the terminal, Nautilus doesn't show, and I know there are other mounts in my script that don't show. So what could be causing this? Here's an example of the code:
    mkdir -p $target/home
    mount $target/home $homedev
    Thanks in advance
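    One possible workaround, sketched under the assumption that the window comes from GNOME's media-handling automount-open setting: turn that setting off for the duration of the script and restore the user's original value afterwards.

        #!/bin/bash
        # Remember the current setting, then stop file-manager windows opening on mount
        old=$(gsettings get org.gnome.desktop.media-handling automount-open)
        gsettings set org.gnome.desktop.media-handling automount-open false

        mkdir -p "$target/home"
        mount "$homedev" "$target/home"    # device first, then mount point
        # ... check the partition and copy files here ...
        umount "$target/home"

        # Put the user's original behaviour back
        gsettings set org.gnome.desktop.media-handling automount-open "$old"

    This does briefly change the global setting while the script runs, so it is only a sketch of one direction to investigate, not a definitive fix.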

    Read the article

  • Craft a Drinkable Density Column

    - by Jason Fitzpatrick
    Earlier this month we shared a clever 9-layer density column demonstration you’d most certainly not want to drink. This smaller demonstration, however, is a delicious column of fruit flavors. The secret sauce? In the previous experiment we shared the secret was using fluids with naturally varying densities (such as lamp oil and vegetable oil); in this experiment you’ll be relying on varying amounts of sugar in each layer to change the density of the water and keep them separate (and edible). You’ll need some Skittles, a few drinking glasses, water, and for best effect, a tall and narrow glass or graduated cylinder. Hit up the link below for the full details on the experiment and tips on how to carefully layer the liquids. Make a Drinkable Rainbow in a Glass [i09]

    Read the article

  • Need a Holistic view of your Concurrent Processing?

    - by cwarticki
    Need a holistic view of your Concurrent Processing? Choose the CP Analyzer. Go to Doc 1411723.1 for more details and the script download. The Concurrent Processing Analyzer is a Self-Service Health-Check script which reviews the overall Concurrent Processing footprint, analyzes the current configurations and settings for the environment, providing feedback and recommendations on Best Practices. This is a non-invasive script which provides recommended actions to be performed on the instance it was run on. For production instances, always apply any changes to a recent clone to ensure an expected outcome.
    - E-Business Applications Concurrent Processing Analyzer Overview
    - E-Business Applications Concurrent Request Analysis
    - E-Business Applications Concurrent Manager Analysis
    - Identifies Concurrent System setup and configurations
    - Identifies and recommends Concurrent Best Practices
    - Easy-to-add tool for regular Concurrent Maintenance
    - Execute the analysis anytime to compare trending from past outputs
    Feedback welcome!

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx
    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary.
    Mocks and TDD
    I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.
    ITDD for “To Roman Numerals”
    gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.
    1. Analyse
    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article.
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
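    For readers who want the end result in plain text (the original post shows the final code only as screenshots), here is a condensed sketch of the solution described above. It is a reconstruction from the article's description, not the author's exact code; only the ToRoman(int) signature, the Find_factors/Translate/Compile split, and the factor table come from the post:

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // Roman "digits" taken at face value, with IV, IX, XL, ... included as digits.
            private static readonly (int Value, string Digit)[] Factors =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"),  (90, "XC"),  (50, "L"),  (40, "XL"),
                (10, "X"),   (9, "IX"),   (5, "V"),   (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic) =>
                Compile(Translate(Find_factors(arabic)));

            // Find the highest remaining factor that fits, subtract it, repeat.
            private static IEnumerable<int> Find_factors(int arabic)
            {
                foreach (var f in Factors)
                    while (arabic >= f.Value) { yield return f.Value; arabic -= f.Value; }
            }

            // Translation is just a look-up.
            private static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(v => Factors.First(f => f.Value == v).Digit);

            private static string Compile(IEnumerable<string> digits) =>
                string.Concat(digits);
        }

    For example, ToRoman(1954) finds the factors 1000, 900, 50, 4 and compiles them to "MCMLIV", matching the walkthrough above.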

    Read the article

  • Elmah

    - by csharp-source.net
    ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment. Once ELMAH has been dropped into a running web application and configured appropriately, you get the following facilities without changing a single line of your code:
    * Logging of nearly all unhandled exceptions.
    * A web page to remotely view the entire log of recorded exceptions.
    * A web page to remotely view the full details of any one logged exception.
    * In many cases, you can review the original yellow screen of death that ASP.NET generated for a given exception, even with customErrors mode turned off.
    * An e-mail notification of each error at the time it occurs.
    * An RSS feed of the last 15 errors from the log.
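    As a rough idea of what "configured appropriately" means in a classic ASP.NET application, the wiring usually amounts to registering ELMAH's module and handler in web.config. This is a hedged sketch using ELMAH's standard type names; check the project site for the exact configuration for your IIS pipeline mode and chosen log store:

        <system.web>
          <httpModules>
            <!-- Catches unhandled exceptions and writes them to the configured log -->
            <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
          </httpModules>
          <httpHandlers>
            <!-- Serves the remote error-log viewer at /elmah.axd -->
            <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
          </httpHandlers>
        </system.web>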

    Read the article

  • Visual Studio 2010 launched!!!

    - by mcp111
    Yesterday Microsoft launched Visual Studio 2010 at the Visual Studio Expo in Las Vegas. Visual Studio 2010 has several great productivity enhancements for developers. Watch the keynote and see how Visual Studio 2010 helps developers "stay in the zone"!!! http://www.microsoft.com/visualstudio/en-us/watch-it-live Then dive into all the cool new features yourself with the free Visual Studio 2010 Training Kit. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=752cb725-969b-4732-a383-ed5740f02e93 Happy programming!!!

    Read the article

  • Web developer has become uncooperative, what should we do to rescue our site? [closed]

    - by TOM
    What can an individual or a company do if the web designer who designed their website becomes completely uncooperative? In our case: He refuses to meet with us for discussions. He refuses to give us training on the effective use and management of the website he has designed for us (and has been paid in full for), despite this training being part of the original contract. Last year he disappeared for a number of months, refused to answer emails or phone calls, and was totally unavailable to help us. He refuses to give us the details of the hosting of our website. We have totally lost faith in this arrogant and unreasonable guy and would like to break off all relations with him, but, it appears, he's got us over a barrel.

    Read the article

  • ADF Sessions at RMOUG this week

    - by shay.shmeltzer
    If you are attending the RMOUG conference this week, you might be interested in checking out some of the sessions we are doing about Oracle ADF:
    Lynn is delivering:
    - The Fusion Development Platform - Wed at 9:00 (404)
    - Put Your Good Taste Into Action: How to Skin ADF Faces Rich Client Applications - Wed 5:00 (4 c/d)
    Shay is delivering:
    - From SQL to Rich Web Data Visualization - The Fast Route - Thu at 9:00 (404)
    - Adding Mobile and Web 2.0 UIs to Existing Applications - The Fusion Way - Thu at 10:15 (404)
    There are also lots of ADF-related sessions delivered by customers and partners, including:
    - Drinking the Kool-Aid - My Journey to Becoming an ADF Believer
    - Case Study: Performance Tuning New ADF Applications Using Oracle Application Testing Suite (ATS)
    - Oracle ADF & JDeveloper: Coming of Age
    - Hello Worldwide Web: Your First JSF in JDeveloper
    For more details, see the schedule here. If you are using ADF already, please drop by and let us know what you think. We are always looking for user feedback.

    Read the article

  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS): an article defining RAS can be found here.
    Oracle VM Server for SPARC Principles for Availability
    Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC:
    - Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs.
    - Configure for availability at a level of resource and effort consistent with business needs. Effort and resource should be consistent with business requirements. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability. Even within the category of production there may be different levels of criticality, outage tolerances, recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything".
    - Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency.
    - It's (often) the same architecture whether virtual or not: For example, providing resiliency against a lost device path or failing disk media is done for the same reasons and may use the same design whether in a domain or not.
    - It's (often) the same technique whether using domains or not: Many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss.
    - Sometimes it is different using domains: There are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that itself is a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments.
    - Some things are done via logical domains commands, and some are done in the guest: For example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multi Pathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this.
    - Live migration is not "high availability" in the sense of "continuous availability": If the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used.) However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving Serviceability - you can move running domains off of a box before planned service or maintenance. The blog Best Practices - Live Migration on Oracle VM Server for SPARC discusses this.
    Topics
    Here are some of the topics that will be covered:
    - Network availability using IP Multipathing and aggregates
    - Disk path availability using virtual disks defined with multipath groups ("mpgroup")
    - Disk media resiliency: configuring for redundant disks that can tolerate media loss
    - Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains. Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve).
    Resources
    Here are some important links; we'll be drawing on their content in the next several articles:
    - Oracle VM Server for SPARC Documentation
    - Maximizing Application Reliability and Availability with SPARC T5 Servers whitepaper by Gary Combs
    - Maximizing Application Reliability and Availability with the SPARC M5-32 Server whitepaper by Gary Combs
    Summary
    Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.
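    To make the "mpgroup" topic above a little more concrete, here is a hedged sketch of how a multipath group is typically defined with the ldm command: the same backend is exported by two service domains under one mpgroup, and the guest gets a single virtual disk that keeps working if either path fails. The device path, volume, service and domain names below are made up for illustration:

        # Export the same LUN from the primary and an alternate service domain,
        # tying both exports together with mpgroup=data1
        ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t5000CCA012345678d0s2 data1@primary-vds0
        ldm add-vdsdev mpgroup=data1 /dev/dsk/c0t5000CCA012345678d0s2 data1@secondary-vds0

        # Give the guest one virtual disk; the alternate path is used automatically if needed
        ldm add-vdisk data1 data1@primary-vds0 ldg1

    The follow-up posts the author mentions cover when this is appropriate and how it interacts with multiple service domains.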

    Read the article

  • "44 Tips" in PHP Magazin and Other NetBeans IDE Screencasts

    - by Geertjan
    My recent YouTube series "44 Tips for Front End Web Devs" (part 1, part 2) has been picked up by PHP Magazin: http://phpmagazin.de/news/Frontend-Entwicklung-mit-NetBeans-IDE-168339 Great. I'm working on more screencasts like that, from different angles. For example, one will methodically explain each and every window in NetBeans IDE; another will step through the creation of an application from conception to deployment; while another will focus on the NetBeans IDE extension points and how easily they can be used to add new features to NetBeans IDE. The screencast approach has, I think, a lot of advantages. They take less time to make and they seem to be more effective, in several ways, than tutorials. Hearing someone talk through a scenario seems to also put things in a clearer perspective than when you have everything written out in a document, where small details get lost and diversions are more difficult to make. Anyway, onwards to more screencasts. Any special requests?

    Read the article

  • Upgrading Agent Controllers in Oracle Enterprise Manager Ops Center 12c

    - by S Stelting
    Oracle Enterprise Manager Ops Center 12c recently released an upgrade for Solaris Agent Controllers. In this week's blog post, we'll show you how to upgrade agent controllers. Detailed instructions about upgrading Agent Controllers are available in the product documentation here. This blog post uses an Enterprise Controller which is configured for connected mode operation. If you'd like to apply the agent update in a disconnected installation, additional instructions are available here.
    Step 1: Download Agent Controller Updates
    With a connected mode Ops Center installation, you can check for product updates at any time by selecting the Enterprise Controller from the left-hand Administration navigation tab. Select the right-hand Action link "Ops Center Downloads" to open a pop-up dialog displaying any new product updates. In this example, the Enterprise Controller has already been upgraded to the latest version (Update 1, also shown as build version 2076), so only the Agent Controller updates will appear. There are three updates available: one for Solaris 10 X86, one for Solaris 8-10 SPARC, and one for all versions of Solaris 11. Note that the last update in the screen shot is the Solaris 11 update; for details on any of the downloads, place your mouse over the information icon under the details column for a pop-up text region. Select the software to download and click the Next button to display the Ops Center license agreement. Review and click the check box to accept the license agreement, then click the Next button to begin downloading the software. The status screen shows the current download status. If desired, you can perform the downloads as a background job. Simply click the check box, then click the Next button to proceed to the summary screen. The summary screen shows the updates to be downloaded as well as the current status. Clicking the Finish button will close the dialog and return to the browser UI. The download job will continue to run in Ops Center, and progress can still be viewed from the jobs menu at the bottom of the browser window.
    Step 2: Check the Version of Existing Agent Controllers
    After the download job completes, you can check the availability of agent updates as well as the current versions of your Agent Controllers from the left-hand Assets navigation tab. Selecting "Operating Systems" from the pull-down list displays only OS assets. Next, select "Solaris" in the left-hand tab to display the Solaris assets. Finally, select the Summary tab in the center display panel to show which versions of agent controllers are installed in your data center. Notice that a few of the OS assets are not displayed in the Agent Controllers tab. Ops Center will not display OS instances which do not have an Agent Controller installation. This includes Enterprise Controllers and Proxy Controllers (unless the agent has been activated on the OS instance) and OS instances using agentless management. For Agent Controllers which support an update, the version of agent software (in this example, 2083) appears to the right of the currently installed version.
    Step 3: Upgrade Your Agent Controllers
    If desired, you can upgrade agent controllers from the previous screen by selecting the desired systems and clicking the upgrade button. Alternatively, you can click the link "Upgrade All Agent Controllers" in the right-hand Actions menu. In either case, a pop-up dialog lets you start the upgrade process.
The first screen in the dialog lets you choose the upgrade method: Ops Center provides three ways to upgrade agent controllers: Automatic Upgrade: If Agent Controllers are running on all assets, Ops Center can automatically upgrade the software to the latest version without requiring any login credentials to the system SSH using a single set of credentials: If all assets use the same login credentials, you can apply a single set to all assets for the upgrade process. The log-in credentials are the same ones used for asset discovery and management, which are stored in the Plan Management navigation tab under Credentials. SSH using individual credentials: If assets use different login credentials, you can select a different set for each asset. After selecting the upgrade method, click the Next button to proceed to the summary screen. Click the Finish button to close the pop-up dialog and start the upgrade job for the agent controllers. The upgrade job runs a series of tasks in parallel, and will upgrade all agents which have been selected. Once the job completes, the OS instances in your data center will be upgraded and running the latest version of Agent Controller software.

    Read the article

  • How do I approach large companies if I have a killer mobile game idea?

    - by Balázs Dávid
    I have an idea for a game that has potential, but I'm not a programmer. How do I present this to development companies without having my idea stolen? All I want from the company is for somebody to watch a three-minute video presentation about my idea, and if they see potential in it, then we can talk about the details. I have already sent an e-mail to several big companies that have the expertise needed to code the game, but they haven't answered me. Actually the idea is nothing fancy, no 3D, but fun and unique.

    Read the article

  • Hold The Date: GlassFish Community Event and Party @ JavaOne 2012 - Sep 30

    - by arungupta
    A yearly tradition for the past 5 years (2007, 2008, 2009, 2010, 2011) is back again this year ... The GlassFish Community Event is a gathering of GlassFish community members attending JavaOne. The GlassFish Party is for everybody who is, or would like to be, a friend of GlassFish, in and around the San Francisco Bay Area. This year, again, both events will be happening on the Sunday of JavaOne. The exact coordinates are TBD, but save the date while you are booking flights/hotels.
    GlassFish Community Event - When: Sep 30, 11am - 1pm - Where: TBD
    GlassFish Party - When: Sep 30, 8pm - 10pm - Where: The Thirsty Bear
    Note, a separate JavaOne registration is required to attend the community event. The party is open to everybody, and no JavaOne registration is required. RSVP details are still being worked out and will be shared soon. 10 reasons to attend these events should help you build your case with the management :-)

    Read the article

  • Looking for a simple to use email server that can be programmatically (preferably remotely) used

    - by sr2222
    I've been poking around the internet for much of the day, but I can't seem to find a good server to fit my needs. What I need is a simple-to-use and easy-to-deploy (preferably open source) lightweight email server on which I can create users programmatically, and which has IMAP or POP support. I'd prefer something with an existing service interface, but if I have to write a REST API on top of an easy-to-use API, that's acceptable. The purpose of this tool will be to allow a test automation framework to create new email accounts and retrieve email sent to those addresses. I need text, HTML, and possibly attachment support as well. Perhaps it's my noobishness, but I can't really suss out from the documentation on the servers available out there which ones fit my needs.

    Read the article

  • StorageTek SL8500 Release 8.3 available

    - by uwes
    Boosting Performance and Enhancing Reliability with StorageTek SL8500 Release 8.3! We're pleased to announce the availability of SL8500 8.3 firmware, which supports partitioning for library complexes, library media validation, drive tray serial number reporting, and StorageTek T10000D tape drives! StorageTek SL8500 8.3 supports the following:
    - Library Complex Partitioning: provides support for partitioning across an SL8500 library complex; supports up to 16 partitions per library complex
    - Library Media Validation: utilizing StorageTek Library Console, users can initiate media verifications with our StorageTek T10000C/D tape drives on StorageTek T10000 T1 and T2 media; supports 3 scan options: basic verify, standard verify and complete verify
    Please read the Sales Bulletin (Firmware release 8.31) on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) For more information, go to: Oracle.com Tape Page, Oracle Technology Network Tape Page.

    Read the article

  • Speaking in St. Louis on June 14th

    - by Bill Graziano
    I’m going back to speak in St. Louis next month. I didn’t make it last year and I’m looking forward to it. You can find additional details on the St. Louis SQL Server user group web site. The meeting will be held at the Microsoft office and I’ll be speaking at 1PM. I’ll be speaking on the procedure cache. As people get better and better at tuning queries, this is the next major piece to understand. We’ll talk about how and when query plans are reused. The most common issue I see around odd query plans is stored procedures that use one query plan, but whose queries run completely differently when you extract the SQL and hard-code the parameters. That’s just one of the common issues that I’ll address. There will be a second speaker after I’m done, then a short vendor presentation and a drawing for a netbook.

    Read the article
