Search Results

Search found 26774 results on 1071 pages for 'distributed development'.


  • TestRail 1.0 Test Management Software released

    Gurock Software just released its new test management solution, TestRail. TestRail is web-based test case management software that helps software development teams and QA departments efficiently manage, track, and organize their software testing efforts.

    Read the article

  • How can I avoid team burnout?

    - by Shawn Dalma
    I work for a small web company that handles a lot of projects; at any given time a few of them are development-heavy for us (400-1,500 hours or more), and I've noticed developers get extremely burnt out on a project after 150 hours or so. I've been toying with the idea of some form of rotation/rest, so that when someone reaches that threshold they at least get some time off that project. Is there an industry-standard approach?

    Read the article

  • htaccess redirect problem

    - by jimbo
    Hi all, I am currently building a site and want to hide all development work on it. I am using an .htaccess file to redirect everything to my holding page, index.php?id=7:

        Options +FollowSymlinks
        RewriteCond %{REQUEST_URI} !/index.php
        RewriteCond %{REQUEST_URI} !/assets/
        RewriteRule $ /index.php?id=7 [R=307,L]

    This works for pretty much all pages, but changing index.php?id=7 to another number (id=6, for example) still shows that page with no redirect. Any help welcome...
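    A hedged guess at the cause, plus a sketch: %{REQUEST_URI} in mod_rewrite never contains the query string, so the condition !/index.php exempts every request for index.php regardless of its id value; the rules never see the difference between id=7 and id=6. One way to pin visitors to the holding page is to test the query string too, something like:

        Options +FollowSymlinks
        RewriteEngine On
        # nothing under /assets/ gets redirected
        RewriteCond %{REQUEST_URI} !^/assets/
        # redirect everything except the exact holding page, query string included
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !^/index\.php\?id=7$
        RewriteRule ^ /index.php?id=7 [R=307,L]

    The second condition concatenates the URI and the query string so that only the exact holding page escapes the redirect; RewriteEngine On is assumed to be set already, since the original rules fire at all.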

    Read the article

  • Speaking at Mix11

    - by Dennis Vroegop
    In April Microsoft will hold the next MIX event. MIX was originally targeted at web designers and developers, but has grown over the years into a more general conference focused on the web and devices. In other words: everything the normal consumer might encounter. It's not your typical developers' conference, although you'll find many developers there as well; next to the developers you'll probably run into designers and user experience specialists too.

    This year I am proud to say that I will be one of the people presenting there. Together with all the Surface MVPs in the world (sounds impressive, but there are only 7 of us) we'll host a panel discussion on all things Surface, NUI and everything else that matches those subjects. Here's what the abstract says:

    The Natural User Interface (NUI) is a hot topic that generates a lot of excitement, but there are only a handful of companies doing real innovation with NUIs, and most of the practical experience in the NUI style of design and development is limited to a small number of experts. The Microsoft Surface MVPs are a subset of these experts that have extensive real-world experience with Microsoft Surface and other NUI devices. This session is a panel featuring the Microsoft Surface MVPs and an unfiltered discussion with each other and the audience about the state of the art in NUI design and development. We will share our experiences and ideas, discuss what we think NUI will look like in the near future, and back up our statements with cutting-edge demonstrations prepared by the panelists involving combinations of Microsoft Surface 2.0, Kinect, and Windows Phone 7.

    We Surface MVPs think we are more than just Surface-oriented; we like to think of ourselves as NUI MVPs. But since NUI isn't a Microsoft technology you can't actually become a NUI MVP, so Surface is the title that comes closest. We are currently working on the details of our session, but believe me: it will blow you away. Several people we talked to have said this could potentially be the best session of MIX. Quite a challenge, but we're up for it! Of course I won't be telling you exactly what we're going to do in Las Vegas, but rest assured that when you visit our session you'll leave with a lot of new ideas and hopefully be inspired to put into practice what you've seen, even if the technology we'll show you isn't readily available yet.

    So, if you are in Las Vegas between April 12th and 14th, please join Joshua Blake, Neil Roodyn, Rick Barraza, Bart Roozendaal, Josh Santangelo, Nicolas Calvi and myself for some NUI fun! See you in Vegas!

    Technorati tags: mix11, las vegas, surface, nui, kinect

    Read the article

  • 10 Do's and Don'ts to Avoid SEO Mistakes

    With so much misinformation out there, along with a lack of knowledge about how SEO works, you could end up getting your website banned from the search engines. Learn how to avoid common mistakes wit... [Author: Debbie Everson - Web Design and Development - April 02, 2010]

    Read the article

  • No C++ option after installing Eclipse CDT

    - by Muhammad Khan
    I used the terminal to install Eclipse with JDK 7, and now I want to incorporate C/C++ development, so I installed a compiler (GCC 4.7) and the Eclipse CDT plugin from the terminal:

        sudo apt-get install eclipse-cdt

    But when I restarted Eclipse and tried to change the perspective, there was no C++ option; I cannot even create a new C++ project. Someone suggested that I use "Install New Software" and choose the CDT from the hard drive. If this is what I should do, where does the terminal install its files to?
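    A hedged troubleshooting note: two stock Debian/Ubuntu commands answer the "where did it go" part, since dpkg tracks every file a package owns:

        # confirm the package is actually installed
        apt-cache policy eclipse-cdt
        # list the files the package placed on disk (plug-in jars included)
        dpkg -L eclipse-cdt

    If Eclipse itself was unpacked by hand rather than installed from the repositories, an apt-installed CDT lands in the system Eclipse's directories (typically under /usr/lib/eclipse, though that path is an assumption) where a hand-installed Eclipse never looks; in that case "Help > Install New Software" pointed at the CDT update site for your Eclipse release is the simpler route.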

    Read the article

  • Oracle Linux Friday Spotlight - October 18, 2013

    - by Chris Kawalek
    Happy Friday! Echoing our popular series over on the Oracle Virtualization blog, we'll now be spotlighting something interesting about Oracle Linux for you every Friday. This week, we have a really cool video done by Intel that features Oracle's Phillip Goerl discussing the Oracle Linux development model and how it relates to Intel Xeon. Click through to YouTube to play the video. See you next week! -Chris

    Read the article

  • MySQL Exotic Storage Engines

    MySQL has an interesting architecture that allows you to plug in different modules to handle storage. What that means is that it's quite flexible, offering an interesting array of different storage engines with different features, strengths, and tradeoffs. Sean Hull presents some of the newest and more exotic storage engines, and even some that are still in development.
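    The engine choice is made per table. A minimal illustration in standard MySQL syntax (the table and column names are invented for the example):

        SHOW ENGINES;  -- list every storage engine this server supports

        CREATE TABLE session_cache (
            id      INT PRIMARY KEY,
            payload VARCHAR(255)
        ) ENGINE = MEMORY;  -- volatile, entirely in-RAM table

        ALTER TABLE session_cache ENGINE = InnoDB;  -- switch the same table to a transactional engine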

    Read the article

  • QAliber

    - by csharp-source.net
    QAliber includes two projects: a Visual Studio plug-in and a Test Builder + Runner execution framework. The Visual Studio plug-in helps you write automated GUI tests, with a control browser and record/play capabilities (and not only GUI tests: since the project integrates into the development solution, API testing is easy to do). The Test Builder is a framework for creating a scenario by simply dragging and dropping pre-built building blocks. It already provides a large repository of test blocks that perform most tasks without coding.

    Read the article

  • Accenture Foundation Platform for Oracle

    - by Michelle Kimihira
    Accenture Foundation Platform for Oracle (AFPO) helps clients accelerate deployments on Oracle Fusion Middleware. Version 5, the latest release, adds new capabilities including integration with Oracle Application Development Framework (ADF) Mobile and Oracle WebCenter. For more information, visit Accenture's website. Additional information: product information on Oracle.com (Oracle Fusion Middleware); follow us on Twitter and Facebook; subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Configure Jenkins and Tomcat using Puppet on Vagrant

    - by ex3v
    I'm playing with setting up my first Spring + Jenkins + Tomcat CI dev environment. For now it's just a test/fun phase, but in the near future I'll be starting a new project with my coworkers. That's the reason I want the development environment virtualized and exactly the same on every development machine, as well as on the production server. I chose to use Vagrant and to try to write Puppet scripts that not only install everything but also configure everything, so each of us has the same Jenkins plugins and the same Jenkins and Tomcat login and password, and literally after calling vagrant up we are ready to work. What I've managed to do so far is install what's needed and set up port forwarding. My Vagrantfile looks like this (comments stripped):

        VAGRANTFILE_API_VERSION = "2"
        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          config.vm.box = "precise32"
          config.vm.box_url = "http://files.vagrantup.com/precise32.box"
          config.vm.network :forwarded_port, guest: 80, host: 8090
          config.vm.network :forwarded_port, guest: 8080, host: 8091
          config.vm.network :private_network, ip: "192.168.33.10"
          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "puppet/"
            puppet.manifest_file = "default.pp"
            puppet.options = ['--verbose']
          end
        end

    And this is my Puppet file:

        Exec { path => [ "/bin/", "/sbin/", "/usr/bin/", "/usr/sbin/" ] }

        class system-update {
          exec { 'apt-get update':
            command => 'apt-get update',
          }
          $sysPackages = [ "build-essential" ]
          package { $sysPackages:
            ensure  => "installed",
            require => Exec['apt-get update'],
          }
        }

        class tomcat {
          package { "tomcat":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "tomcat":
            ensure  => "running",
            require => Package["tomcat"],
          }
        }

        class jenkins {
          package { "jenkins":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "jenkins":
            ensure  => "running",
            require => Package["jenkins"],
          }
        }

        include system-update
        include tomcat
        include jenkins

    Now, when I run vagrant provision and go to http://localhost:8091/ I can see Jenkins running, so the above script works well. The next step is configuring Jenkins and Tomcat by extending the above Puppet scripts. I'm pretty green when it comes to CI. After wandering around the web I've found a few tutorials about Jenkins configuration (here's one of them). I really want to move the configuration presented in that tutorial into the Puppet file, so that when I share my Vagrantfile and Puppet file with my coworkers, I can be sure everyone has exactly the same setup. Unfortunately I'm also green about using Puppet, and I don't know how to do this. Any help will be appreciated.
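    As for the actual question, moving the Jenkins configuration itself into Puppet, one common pattern is to let Puppet drive the Jenkins CLI and ship configuration files into /var/lib/jenkins. A rough, untested sketch in the style of the manifest above; the plugin name, the CLI jar path (typical for the Debian package, but an assumption here), and the port are placeholders to adapt:

        class jenkins-config {
          # Install a plugin through the Jenkins CLI once the service is up.
          # The jar path is typical for the Debian package but is an assumption here.
          exec { 'install-git-plugin':
            command => 'java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -s http://localhost:8080/ install-plugin git',
            unless  => 'test -e /var/lib/jenkins/plugins/git.hpi',
            require => Service['jenkins'],
            notify  => Service['jenkins'],  # restart so Jenkins loads the plugin
          }

          # Static configuration can be shipped as files owned by the jenkins user.
          # 'puppet:///modules/jenkins-config/config.xml' is a hypothetical module file.
          file { '/var/lib/jenkins/config.xml':
            source  => 'puppet:///modules/jenkins-config/config.xml',
            owner   => 'jenkins',
            require => Package['jenkins'],
            notify  => Service['jenkins'],
          }
        }

        include jenkins-config

    The unless guard keeps the exec idempotent across repeated vagrant provision runs, and the notify restarts Jenkins so it picks up the change; the tutorial's remaining manual steps (users, credentials, tool locations) mostly map onto file resources the same way.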

    Read the article

  • ADF Code Guidelines

    - by Chris Muir
    During Oracle Open World 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square. While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW is that customers are bombarded with so many messages that it's easy to miss something important.

    So in this blog post I'd like to highlight one of the initial core offerings of the ADF Architecture Square website: a new document entitled ADF Code Guidelines. The title should make it obvious what the document contains, but what's its purpose? Why did Oracle create it?

    Personally, having worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems was that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations. These documents, designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from the team's own expert learnings and from the huge amount of ADF technical collateral that is publicly available: manuals and whitepapers, presentations and blog posts, some written by Oracle and some by independent sources.

    Now this is all well and good for the teams that have gone through the effort of gathering all the information and putting it into structured documents; kudos to them. But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority. So in recognising this hurdle, at Oracle we've devised the ADF Code Guidelines. This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater). The guidelines are summarized from a number of Oracle documents and other 3rd-party collateral, with the goal of giving developers and development teams a shortcut to producing their own best-practices collateral.

    The document is not a finished product but a living document that will be extended to cover new information as it is discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future.

    Image courtesy of paytai / FreeDigitalPhotos.net

    Read the article

  • Moving from building internal applications in WPF to ASP.NET MVC?

    - by stuartmclark
    I have worked on quite a few internal applications for my company, and I have always defaulted to WPF, but I am considering rewriting the existing ones as web apps so that anyone in the company can use them without having to download anything from the network. I am just wondering: is this the way forward for developing new internal applications? Should I stop using WPF and start using ASP.NET MVC for internal applications that a lot of people need to use?

    Read the article

  • 5 Tips to Select the Right Domain Name

    A domain name is simply the name of a web page: the name users have to type into a browser's address bar to reach that page. People are now... [Author: Praveen Sivaraman - Web Design and Development - March 31, 2010]

    Read the article

  • WinMo's Demise: Notifying Next of "Kin"

    - by andrewbrust
    This past Monday, April 12th, Visual Studio 2010 was launched. And on that same day, Microsoft also launched a new line of mobile phone handsets, called Kin. The two product launches are actually connected, but only by what they do not have in common, and what they commonly lack.

    On the former point: VS 2010 had released to manufacturing a couple of weeks prior to its launch. The Kin phones, meanwhile, are not yet available. We don't even know what they will cost. (And I think cost will be a major factor in Kin's success… I told ChannelWeb's Yara Souza so in this article.)

    What do the two products both lack? Simple: Windows Mobile 6.x. For example, Kin seems to be based on the same platform as Windows Phone 7 (albeit a subset). And VS 2010 does not support .NET Compact Framework development, which means no .NET development support for WinMo 6.x and earlier.

    So I guess April 12th marks Windows Phone "clean slate day." If you want to develop for the old phone platform, you will need to use the old version of Visual Studio (i.e. 2008). Luckily VS 2010 and 2008 can be installed side by side. But I doubt that's much consolation to developers who still target WinMo 6.5 and earlier.

    Remember, WinMo isn't just about the phone. There are all sorts of non-telephony mobile devices, including ruggedized Pocket PC-style instruments, bar code readers and shop-floor-deployed units that don't run Windows Phone 7 and couldn't, even if they wanted to. Where will developers in these markets go? I would guess some will stick with WinMo 6.x and earlier until Windows Phone 7 can handle their workloads, assuming that does indeed happen. Others will likely go to Google's Android platform.

    For OEMs and developers who need a customizable mobile software stack, Android is turning out to be out-WinMo-ing WinMo. As I wrote in this post, Google took Microsoft's model (minus the licensing fees) and combined it with a modern smartphone feature set (rather than a late-90s/early-oughts PDA paradigm), to great success. You might say Google embraced and extended. You might also say Microsoft shunned and withdrew.

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain: I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

    But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin once the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer.

    Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks and activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed, string ToRoman(int arabic), and it lives in a class ArabicRomanConversions.

    Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases are given from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning.

    Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase; like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD: they're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you had better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem; if that's limited, your solution will be limited, too.

    Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach:

    The Arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The Roman "encoding" is different. There is no base (like 10 for Arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the Arabic: each "digit" can be taken at face value, no multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not, if you accept that Roman "digits" are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4, whereas before it would have been understood as M-C+M+L-I+V, which is more difficult because here some "digits" get subtracted. Here's the list of Roman "digits" with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an Arabic number becomes trivial. I just need to find the values of the Roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two-phase process: find all the factors, then translate the factors found and compile the Roman representation. Translation is just a look-up. Finding, though, needs some calculation: find the highest remaining factor fitting into the value, remember and subtract it from the value, then repeat with the remaining value and remaining factors.

    Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are several aspects to test: the translation, the compilation, and finding the factors.

    Testing the translation primarily means checking if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check if an Arabic number equal to a factor is processed correctly (e.g. 1000=M); then check if an Arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly; then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]); finally check if an Arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly; that's what I'm trying now, one by one.

    I start with a simple "detail function", Translate(), and I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible, just how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it into two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye; others need to work their way towards it. Having two tests is what I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple; I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level is also satisfied. Great, I can move on.

    Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible, and I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple, even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion: I faked it until I made it, enduring red tests on higher levels while lower levels weren't perfected yet. This way I saved myself from refactoring tediousness. At the end, though, some refactoring is required, but maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value, and they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not; you'll find more concise solutions on the internet. But LOC are of relatively little concern as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level, then I implemented it top-down according to my design. I could also have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover further details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised (private methods could naturally stay private; I did not need to make them internal or public just to be able to test them), and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases: first find out what there is, then devise a solution, then code the solution, manifesting it in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times, but that only affects the time needed for phases 1 and 2; I won't skip them because of it. And there are no shortcuts during implementation because of it either.
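    For reference, here is a compact C# reconstruction of the final solution as the article describes it: the factor map, Find_factors() walking the factors from highest to lowest, Translate() as a pure look-up, and compilation by concatenation. The class and method names come from the article; the bodies are a sketch of the described algorithm, not the author's exact code (which appears only as screenshots in the original post):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ArabicRomanConversions
        {
            // Roman "digits" with their values; IV, IX etc. are treated as digits of their own.
            static readonly (int Value, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public static string ToRoman(int arabic)
            {
                var factors = Find_factors(arabic);
                var digits = Translate(factors);
                return Compile(digits);
            }

            // Find the highest remaining factor fitting into the value, subtract, repeat.
            static IEnumerable<int> Find_factors(int value)
            {
                foreach (var (factor, _) in Map)
                    while (value >= factor)
                    {
                        value -= factor;
                        yield return factor;
                    }
            }

            // Translation is just a look-up in the map.
            static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(f => Map.First(m => m.Value == f).Digit);

            // Compilation concatenates the translated digits.
            static string Compile(IEnumerable<string> digits) => string.Concat(digits);
        }

    ToRoman(1954) walks the factors to [1000, 900, 50, 4] and yields "MCMLIV", matching the example from the analysis phase.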

    Read the article

  • Creative, busy Devoxx week

    - by JavaCecilia
    I just got back from my first visit to the developer conference Devoxx in Antwerp. I can't describe the vibes of the conference; it was a developer amusement park: hackergartens, fact sessions, comic relief provided by the Java Posse, James Bond and endless hallway discussions.

    All in all I had a lot of fun. My main mission was to talk about Oracle's main focus for OpenJDK, which besides development and bug fixing is making sure the infrastructure works for the full community. My focus was not to hang out at the night club Noxx, but that came included in the package :)

    The London Java Community leaders Ben Evans and Martijn Verburg are leading discussions in the community to lay out the necessary requirements for the infrastructure for building and testing in the open. They called a first meeting at JavaOne gathering 25 people, including people from RedHat, IBM and Oracle; the second meeting at Devoxx included 14 participants and had representatives from Oracle and IBM. I hope we really can find a way to collaborate on this, making sure we deliver an efficient infrastructure for all engineers to contribute to OpenJDK with.

    My home in all of this was the BOF rooms and the sessions there: meeting the JUG leaders, talking about OpenJDK infrastructure and celebrating the Duchess Duke Award together with the others. The restaurants in the area were slower than I've ever seen, so I missed out on Trisha Gee's brilliant replay of the workshop "The Problem with Women in IT - an Agile Approach", where she masterfully leads the audience (a packed room, 50-50 gender distribution) to solve the problem of including more diversity in the developer community: a tough and sometimes sensitive topic where she manages to keep the discussion objective, with a focus on improving the matter from a business perspective.

    Mattias Karlsson, who organizes the Java developer conference Jfokus in Stockholm, was there talking to Andres Almiray, planning a Hackergarten with a possible inclusion of an OpenJDK bugathon. That would be really cool, especially as the Oracle Stockholm Java development office is just across the water from the Jfokus venue; some of the local JVM engineers will likely attend and assist, even though the bug-smashing theme will more likely be starter-level build warnings in Swing or langtools than fixing JVM bugs.

    I was really happy that I managed to catch a seat for the Java Posse live podcast "the Third Presidential Debate": a lot of nerd humor, a lot of beer, a lot of fun :) The new member Chet had a perfect deadpan delivery, and now I just have to listen to more of the podcasts! Can't get the most perfect joke out of my head, talking about beer: "As my father always said: better a bottle in front of me than a frontal lobotomy". Hilarious :)

    I attended the sessions delivered by my Stockholm office colleagues Marcus Lagergren (on dynamic languages on the JVM, JavaScript in particular) and Joel Borggrén-Franck (annotations) and was happy to see the packed rooms and all the questions raised at the end.

    There's loads of stuff to write about the event, but I'll have to pace myself for now. It was a fantastic event; captain Stephan Janssen and crew should be really proud to provide this forum to the developer community!

    Read the article
