Search Results

Search found 20429 results on 818 pages for 'software restriction'.

Page 440 of 818

  • Builder Pattern: When to fail?

    - by skiwi
    When implementing the Builder pattern, I often find myself confused about when to let building fail, and I even manage to take a different stand on the matter every few days. First, some explanation: by failing early I mean that building an object should fail as soon as an invalid parameter is passed in, i.e. inside the SomeObjectBuilder. By failing late I mean that building an object can only fail on the build() call, which implicitly calls a constructor of the object to be built. Then some arguments. In favor of failing late: a builder class should be no more than a class that simply holds values; moreover, it leads to less code duplication. In favor of failing early: a general approach in software development is that you want to detect issues as early as possible, and therefore the most logical places to check would be the builder class' constructor, its 'setters', and ultimately the build() method. What is the general consensus on this?
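
    For illustration, here is a minimal sketch in Java of the two styles (SomeObjectBuilder is the name from the question; the single age field and its validation rule are invented for the example): failing early validates in the builder's setter, while failing late defers validation to the constructor that build() invokes.

        // Hypothetical example: a builder for SomeObject with one validated field.
        final class SomeObject {
            private final int age;

            SomeObject(int age) {
                // Fail late: validation happens only when build() invokes this constructor.
                if (age < 0) {
                    throw new IllegalArgumentException("age must be non-negative: " + age);
                }
                this.age = age;
            }
        }

        final class SomeObjectBuilder {
            private int age;

            // Fail early: reject the invalid value the moment it is passed in.
            SomeObjectBuilder age(int age) {
                if (age < 0) {
                    throw new IllegalArgumentException("age must be non-negative: " + age);
                }
                this.age = age;
                return this;
            }

            SomeObject build() {
                return new SomeObject(age); // the fail-late check still runs here
            }
        }

    Keeping the check in the constructor as well means the invariant holds no matter how the object is created; the early check in the setter only improves how quickly the mistake is reported.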

    Read the article

  • No Internet On 12.04.1

    - by David Ramos
    I have no internet on Ubuntu 12.04.1. I'm on a desktop, so I need to install a .exe program to get my wireless card working and have internet access, but I can't seem to download Wine or its source files. When I do download something, I put it on my USB drive and boot into Ubuntu, but when I try to open it I only get the program's source files, etc. Could anyone help me, please? (I can't install Wine from the Software Center since I have no connection.) Note: I can download things from my Windows 8 boot, so I can transfer files to my USB drive to use on Ubuntu.

    Read the article

  • Adventures in Scrum: Lesson 1 – The failed Sprint

    - by Martin Hinshelwood
    I recently had a conversation with a product owner who wanted to have the Scrum team broken up into smaller units so that less time was wasted on the Scrum Ceremonies! Their complaint was around the need in Scrum to have the entire “Team” (7±2) involved in the sizing of the work during the “Sprint Planning Meeting”. The standard flippant answer of all Scrum professionals, “Well that's not Scrum”, does not get you any brownie points in these situations. The response could be “Well we are not doing Scrum then”, which in turn leads to “We are doing Scrum… But we have split the scrum team into units of 2/3 so that they can concentrate on a specific area of work”. While this may work, it is not Scrum and should not be called so… It is just a form of Agile. Don’t get me wrong at this stage, there is nothing wrong with Agile, just don’t call it Scrum. The reason the Product Owner wants to do this is that, in effect, through a number of miscommunications and failings in our implementation of Scrum, there was NO unit of potentially shippable software at the end of the first sprint. It does not matter to them that most Scrum teams will fail the first Sprint, even those that are high-performing teams. Remember, it is the product owner's money! We should NOT break up scrum teams into smaller units for the purpose of having fewer people tied up in the Scrum Ceremonies. The amount of backlog the Team selects is solely up to the Team… Only the Team can assess what it can accomplish over the upcoming Sprint. - Scrum Guide, Scrum.org The entire team must accept the work, and in order to understand what they can accept they must be free to size it as a team. This both encourages common understanding and increases visibility on why team members think a task is of a particular size. This has the benefit of increasing the knowledge of the entire team in the problem domain. A new Team often first realizes that it will either sink or swim as a Team, not individually, in this meeting. The Team realizes that it must rely on itself. As it realizes this, it starts to self-organize to take on the characteristics and behaviour of a real Team. - Scrum Guide, Scrum.org This paragraph goes to the why of having the whole team at the meeting: the goal of Scrum is to produce a unit of potentially shippable software at the end of every Sprint. In order to achieve this we need high-performing teams, and this is what Scrum as a framework has been optimised to produce. I think that our Product Owner is understandably upset over losing two weeks' work and is losing sight of the end goal of Scrum amid the failures of the moment. As the man spending the money, I completely understand his perspective, and I think that we should not have started Scrum on an internal project, but selected a customer that is open to the ideas and complications of Scrum. So, what should we have NOT done on our first Scrum project? Should not have had 3 interns as the only on-site resource – this led to bad practices, as the experienced guys were not there helping and correcting as they usually would. Should not have had the only experienced guys offsite – with both of the experienced technical guys in completely different time zones it was difficult to get time for questions, and helping the guys on site was just plain impossible. Should not have used a part-time ScrumMaster – although the ScrumMaster attended all of the Ceremonies, because they are only in for 2 full days a week it is difficult for the team to raise impediments as they go.
Should not have used a proxy product owner – this was probably the worst decision that was made, mainly because the proxy product owner did not have the same vision as the product owner. While Scrum does not explicitly reject the idea of a Proxy Product Owner, I do not think it works very well in practice. The “single wringable neck” needs to hold both the Money and the Vision, as well as attending the required meetings. I will be bringing all of these things up at the Sprint Retrospective and we will learn from our mistakes and move on. Do, Inspect, then Adapt… Technorati Tags: Scrum, Sprint Planning, Sprint Retrospective, Scrum.org, Scrum Guide, Scrum Ceremonies, ScrumMaster, Product Owner. Need help? Professional Scrum Developer Training: SSW has six Professional Scrum Developer Trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools.

    Read the article

  • How can a student programmer improve his teamwork skills?

    - by xiao
    I am a student right now. Recently I have been working on a project as the leader, with three other students. Due to our lack of experience, the project is progressing slowly and the members are frustrated; they do not feel a sense of accomplishment in the project. I am pressured and frustrated, too. As team leader, I think I need to push them, but I do not know how. Should I help them solve coding problems, or just offer encouragement? If I pay too much attention to this, it will slow down my own progress. It is not a technical question, but it is very common in software development. I hope veteran programmers can give me some suggestions. Thanks!

    Read the article

  • What defines good developer culture? [closed]

    - by Sven
    We are a team of 6 people developing applications for mobile devices (Android & iOS). In our company, which consists of many teams responsible for "classic" software development, business intelligence, virtualization, hardware, etc., we are kind of a small startup because we were the first to use agile methods like Scrum and we are open to new technologies and methods. Our team is also pretty young, with me being the oldest at 30. We would like to further raise productivity and motivation and are therefore currently collecting points which make up a good developer/hacker culture and which could be improved in our team/company. These can be points that we can either improve ourselves or have to pass on to management. I would like to know: what, in your opinion, defines a good, modern developer culture? What does developer culture consist of? For example, is it clearly defined career opportunities, a geeky office, benefits like trips to extraordinary conferences such as WWDC or Google I/O, ...

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission-critical applications where downtime or data loss has a substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems - that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise - whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt - the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating the false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single, closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are the result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line: storage remote-mirroring lacks both the smarts and the isolation necessary to provide true data protection.
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksums, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring. So don’t be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper, Active Data Guard vs Storage Remote-Mirroring, Active Data Guard Home Page on the Oracle Technology Network.

    Read the article

  • How can I cap cloud-costs?

    - by Joe Simpson
    I am looking into launching a cloud-based product for consumers where, with a prepaid account, they can start a server with a simple click, load up the software and access it remotely. The technical side of that I can manage, but I am worried about costs escalating ridiculously high for both me and my customers. Is there a way I can: limit how much each server can cost me before it gets deactivated, and see how much a server is currently costing me (so I can deduct it from the customer's account)? This needs to be extremely reliable, as I don't want to risk ending up with a giant bill.
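
    One possible way to enforce such a cap (a rough sketch only; the CloudApi interface, the flat hourly rate and the method names are hypothetical, not any real provider's API) is to poll each running server periodically, accrue its cost from its uptime, and stop it once the prepaid balance is used up:

        import java.time.Duration;
        import java.time.Instant;

        // Hypothetical provider interface; a real implementation would wrap the
        // billing/usage API of whichever cloud is actually used.
        interface CloudApi {
            Instant startedAt(String serverId);
            void stopServer(String serverId);
        }

        final class CostCapMonitor {
            private final CloudApi api;
            private final double hourlyRate;   // assumed flat price per server-hour

            CostCapMonitor(CloudApi api, double hourlyRate) {
                this.api = api;
                this.hourlyRate = hourlyRate;
            }

            /** Accrued cost so far for one server, derived from its uptime. */
            double accruedCost(String serverId) {
                Duration uptime = Duration.between(api.startedAt(serverId), Instant.now());
                return uptime.toMinutes() / 60.0 * hourlyRate;
            }

            /** Call periodically (e.g. every minute): stop the server once the prepaid balance is used up. */
            void enforceCap(String serverId, double prepaidBalance) {
                if (accruedCost(serverId) >= prepaidBalance) {
                    api.stopServer(serverId);
                }
            }
        }

    Polling like this can never be perfectly exact (a server may run a few minutes past the cap between checks), so a small safety margin on the prepaid balance is the usual compromise.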

    Read the article

  • Detail of acceptance criteria in user story

    - by Kai Kramhoeft
    I have the following example of a user story with acceptance criteria. I would like to know whether I am allowed to describe how the GUI must be changed to support the new feature. How much detail can acceptance criteria have? This is my example: User Story: As forum administrator I want to connect persons into groups, so that people can get organized. Acceptance Criteria: The creation of a person group happens below a person group pool (a person group pool is an object that is also visually available in the current software system). The creation happens via a context menu of the person group pool. Below the pool one can create new groups. A person group contains: person group ID, description, remark. Are those relevant and correct acceptance criteria? I ask because I describe how you can create a new group by opening a context menu.

    Read the article

  • Video recording in cheese is slow

    - by Gaurav Butola
    Cheese records at a painfully low frame rate; video recording is really slow, almost unusable. How can I increase the fps for Cheese? I have an HCL laptop with a built-in 1.3 MP camera, a 2.47 GHz i3 processor and 2 GB RAM, running Maverick 32-bit. I installed Camorama from the Software Centre, which has fine video (fps), but I can't seem to use it due to some error. So I suspect there is something to be tweaked in Cheese itself.

    Read the article

  • Understanding hand-written lexers

    - by Cole Johnson
    I am going to write a compiler for C (C99; I own the standards PDF), written in C (go figure), and reading up on how compilers work on Wikipedia has told me a lot. However, reading up on lexers has confused me. The Wikipedia page states that the GNU Compiler Collection (gcc) uses hand-written lexers. I have tried googling what a hand-written lexer is and have come up with nothing except "making a flowchart that describes how it should function"; however, isn't that how all software development should be done? So my question is: "What is a hand-written lexer?"
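
    Roughly, a hand-written lexer is one coded directly, character by character, rather than generated from a grammar specification by a tool such as lex/flex. Below is a minimal sketch of what such a scanner can look like (in Java rather than C, purely for illustration; the tiny token set is invented for the example):

        import java.util.ArrayList;
        import java.util.List;

        // A tiny hand-written lexer: no generator, just a loop over the characters.
        final class Lexer {
            enum Kind { NUMBER, IDENT, PLUS, STAR, LPAREN, RPAREN, EOF }

            record Token(Kind kind, String text) {}

            static List<Token> lex(String src) {
                List<Token> tokens = new ArrayList<>();
                int i = 0;
                while (i < src.length()) {
                    char c = src.charAt(i);
                    if (Character.isWhitespace(c)) {
                        i++;                                   // skip blanks
                    } else if (Character.isDigit(c)) {
                        int start = i;
                        while (i < src.length() && Character.isDigit(src.charAt(i))) i++;
                        tokens.add(new Token(Kind.NUMBER, src.substring(start, i)));
                    } else if (Character.isLetter(c) || c == '_') {
                        int start = i;
                        while (i < src.length()
                                && (Character.isLetterOrDigit(src.charAt(i)) || src.charAt(i) == '_')) i++;
                        tokens.add(new Token(Kind.IDENT, src.substring(start, i)));
                    } else {
                        switch (c) {
                            case '+' -> tokens.add(new Token(Kind.PLUS, "+"));
                            case '*' -> tokens.add(new Token(Kind.STAR, "*"));
                            case '(' -> tokens.add(new Token(Kind.LPAREN, "("));
                            case ')' -> tokens.add(new Token(Kind.RPAREN, ")"));
                            default -> throw new IllegalArgumentException("unexpected character: " + c);
                        }
                        i++;
                    }
                }
                tokens.add(new Token(Kind.EOF, ""));
                return tokens;
            }
        }

    For example, Lexer.lex("foo + 42") yields IDENT(foo), PLUS, NUMBER(42), EOF. A generated lexer does the same job, but its state machine is produced by a tool from regular-expression rules instead of being written out by hand like this.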

    Read the article

  • Algorithms and Programmer's day-to-day job

    - by Lior Kogan
    As of July 10, 2012, Stack Overflow contains 3,345,864 questions, of which only 20,840 are tagged "Algorithm" - that is about 0.6%! I find it disturbing. Many programmers have several years of academic education in computer science / software engineering. Most of them are smart... When asked, most would say that they love algorithms. Computer programming is generally about solving problems using algorithms... Yet only 1 in 160 questions is tagged as algorithm-related. What does that say about our profession?

    Read the article

  • problems updating system with apt-get

    - by Javier
    After running apt-get update and then trying apt-get upgrade, I get the following error message. This is a copy of my terminal output (originally in Spanish): root@LinuxJGP:/home/javiergp# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: apport apport-symptoms fonts-liberation gnome-icon-theme gnome-orca language-pack-en language-pack-en-base language-pack-es language-pack-es-base language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-es language-pack-gnome-es-base light-themes linux-firmware oneconf resolvconf sessioninstaller software-center ssl-cert tzdata ubuntu-docs ubuntu-keyring ubuntu-sso-client ubuntuone-control-panel ubuntuone-installer unity-lens-video unity-scope-video-remote xdiagnose 29 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. E: The package index files are corrupted. No Filename: field for package ubuntu-keyring. How can I resolve this problem?

    Read the article

  • How to install a Logitech c310 webcam?

    - by Teja
    I bought a new Logitech C310 webcam. I don't know how to install the webcam on Ubuntu. I am trying to install its software through the terminal window with: wine /media/LWS_2_0/Setup.exe but it shows fixme:ole:TLB_ReadTypeLib Header type magic 0x00905a4d not supported. err:ole:TLB_ReadTypeLib Loading of typelib L"Z:\\media\\LWS_2_1\\MSetup.exe" failed with error 0 and it opens a window saying "We have detected you have connected your webcam to a USB1.1 port, for the best performance and full feature set, we suggest using a USB2.0 port". I tried all the USB ports available on my system; it shows the same error. Please tell me how to install this webcam on Ubuntu 11.10. Thanks for reading.

    Read the article

  • Compiz stopped working

    - by Aikanáro
    I'm on a Sony Vaio laptop (vng-nw330f) with Ubuntu 11.10. One day I used my computer normally, shut it down, and the next day when I turned it on again everything was different. Compiz effects are not working at all; it doesn't matter what plugin (from CCSM) I enable or disable, nothing happens, nothing changes. I tried the following commands: unity --reset unity --reset-icons sudo rm -rf .config .gnome .gnome2 .gnome2_private .compiz .icons .fonts .nautilus .themes .qt .local (from my home directory) gconftool-2 --recursive-unset /apps/compiz-1 I also tried reinstalling Compiz (I removed Compiz from the Software Center and installed it again). I have only been using Ubuntu for a couple of weeks; I'm not sure if there is a desktop called Unity 3D, but I think it is missing. I want the default graphical interface of Ubuntu 11.10 back.

    Read the article

  • cannot format a fat filesystem, getting error "Both FATs appear to be corrupt. Giving up."

    - by Nilesh
    I am trying to format a FAT (or FAT32) file system on my Ubuntu system, but I am not able to format the device; each time I get the error "Both FATs appear to be corrupt. Giving up." I have tried options like sudo dosfsck -t -a -w /dev/sdc1 and sudo dosfsck -w -r -l -a -v -t /dev/sdc1 but each time the same message appears. Can anyone guide me on how to recover the filesystem? I don't mind losing the data on this drive, as it is an external pen drive. Also, can you please suggest a method other than booting from a CD with software like GParted or something like that.

    Read the article

  • Webcast: Introducing the New Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    The Fastest Way to Virtualize Your Datacenter. Join our webcast “Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM”, Tues., January 25, 2011, 9 a.m. PT / 12 p.m. ET. Presented by: Marc Shelley, Senior Manager, Oracle Blades Product Management, and Tom Lisjac, Senior Member, Oracle Technical Staff. Register now for our live webcast! The Oracle VM blade cluster reference configuration addresses the key challenges associated with deploying a virtualization infrastructure. It eliminates or significantly reduces, by up to 98%, the assembly and integration of the following components: servers, storage, network connections, virtualization software, and operating systems. Attend this fact-filled, how-to webcast with Oracle experts to learn the best practices for deploying the reference configuration for Oracle VM Server for x86 and Sun Blade and Sun Fire x86 rack-mount servers. Virtualization is easier than ever with this new configuration. Register now for our live webcast! For more information, see: Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration; Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration.

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article; there's more to say than fits into a comment. Mocks and TDD: I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals”: gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article.
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically. 2. Solve The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you had better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept that roman “digits” are not limited to single characters. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process: find all the factors, translate the factors found, compile the roman representation. Translation is just a look-up.
Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember and subtract it from the value, repeat with the remaining value and remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: test the translation, test the compilation, test finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is the appropriate one, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%. Reflection: Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. That's the leaves of the functional decomposition tree.
Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised; naturally private methods could stay private, and I did not need to make them internal or public just to be able to test them. And I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API. The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code is thus almost inevitable – and not left to a refactoring step at the end which is often skipped for various reasons. PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
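
    To make the described solution concrete, here is a rough sketch of the final structure in Java (the post's actual code is C# and appears only as screenshots that are not reproduced here; the method names below simply echo those mentioned in the text):

        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.stream.Collectors;

        // Sketch of the conversion described in the post: find factors, translate them, compile the result.
        final class ArabicRomanConversions {
            // Roman "digits" and their values, largest first, with IV, IX, ... treated as digits of their own.
            private static final Map<Integer, String> FACTOR_TO_DIGIT = new LinkedHashMap<>();
            static {
                int[] factors =   {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
                String[] digits = {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};
                for (int i = 0; i < factors.length; i++) FACTOR_TO_DIGIT.put(factors[i], digits[i]);
            }

            static String toRoman(int arabic) {
                return compile(translate(findFactors(arabic)));
            }

            // Find the highest remaining factor fitting in the value, subtract it, repeat.
            private static List<Integer> findFactors(int value) {
                List<Integer> found = new ArrayList<>();
                for (int factor : FACTOR_TO_DIGIT.keySet()) {
                    while (value >= factor) {
                        found.add(factor);
                        value -= factor;
                    }
                }
                return found;
            }

            // Translation is just a look-up.
            private static List<String> translate(List<Integer> factors) {
                return factors.stream().map(FACTOR_TO_DIGIT::get).collect(Collectors.toList());
            }

            // Compilation concatenates the digits in order.
            private static String compile(List<String> digits) {
                return String.join("", digits);
            }
        }

    For example, ArabicRomanConversions.toRoman(1954) returns "MCMLIV", matching the M+CM+L+IV decomposition used in the post.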

    Read the article

  • Oracle Exalogic Elastic Cloud - Planned Webcasts

    - by chuck.speaks
    I’m putting together a collection of recorded webcasts around Oracle Exalogic Elastic Cloud (Exalogic).  The plan is to do a systems overview and then multiple deep dives into hardware and software components that make up the engineered system. Those of you that are members of our partner community (Oracle Partner Network), drop me a note if you are interested in a full blown in-class delivery via PTS resources.  There is no schedule for these workshops but if there is enough interest, I would venture to guess it would roll out soon. Those of you with applications certified on Oracle WebLogic server that would like to scale to Exalogic, see me or watch this space.   Chuck Speaks chuck <dot> speaks at oracle <dot> com

    Read the article

  • LSP vs OCP / Liskov Substitution VS Open Close

    - by Kolyunya
    I am trying to understand the SOLID principles of OOP and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). The open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar, which is derived from Foo, and the program will work the very same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program LSP holds only when OCP holds, and OCP holds only if LSP holds, which would make them equal. Correct me if I'm wrong. I really want to understand these ideas. Many thanks for an answer.
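
    A small sketch may help keep the two ideas apart (Foo and Bar are the names from the question; the counter behaviour is invented for the example). OCP is about extending a module without editing it, LSP about the behavioural contract of a subtype; a subclass can respect the first and still break the second:

        // Base class with a simple contract: add(n) increases the total by exactly n.
        class Foo {
            protected int total = 0;

            void add(int n) {
                total += n;
            }

            int total() {
                return total;
            }
        }

        // Bar extends Foo without modifying it, so the OCP idea of "extend, don't edit" is respected.
        // But it silently caps the total, changing the contract - callers relying on Foo's behaviour
        // get surprising results, which is exactly what LSP forbids.
        class Bar extends Foo {
            @Override
            void add(int n) {
                total = Math.min(total + n, 100); // violates Foo's "increases by exactly n" contract
            }
        }

        class LspVsOcpDemo {
            static int sumAfterAdding(Foo counter) {
                counter.add(150);
                return counter.total(); // with Foo: 150; with Bar: 100 - substitution changes behaviour
            }

            public static void main(String[] args) {
                System.out.println(sumAfterAdding(new Foo())); // 150
                System.out.println(sumAfterAdding(new Bar())); // 100, so Bar is not a valid substitute
            }
        }

    So the two principles overlap but are not equal: OCP talks about how you add new behaviour (through extension points rather than edits), while LSP constrains what a subtype's behaviour may look like so that such extension stays safe.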

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depend on the service template selected by the Self Service user when requesting a database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times, provisioned databases require production-scale data for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even have your own method plugged in as part of the post-deployment script option. Using Data Pump to populate databases. These are the steps to follow to extend DBaaS using the Data Pump methodology: The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

    Figure-1: Full export of the database using Data Pump. Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1   NUMBER;
        begin
        -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
        -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

    Figure-2: Importing using Data Pump PL/SQL procedures. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. Figure-3: Using the Custom Script option for calling the import SQL. Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
Using Transportable Tablespaces to populate databases. A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces: The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of the production system and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required and describes the steps required to convert the datafiles and back up the tablespaces. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s). Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created later. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step. More information: Database-as-a-Service on Exadata Cloud; Podcast on Database as a Service using Oracle Enterprise Manager 12c; Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide; DBaaS Cookbook; Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal; Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal. Stay connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Which tool to use for "home banking"?

    - by Huygens
    I would like to manage my bank accounts in a secure manner on Ubuntu. I saw several applications in the Software Centre, but I don't know which one to choose. I don't need fancy features like stock options. I just have regular accounts which I want to follow, I don't want complicated stuff. As bank data are quite sensitive, I would highly prefer an application that does encryption of the data. Though, if you have a really cool app but it does not have this feature, as long as it offers to store the data in one dedicated place, I could do with encrypting that place. So what tool do you use that could fit my needs?

    Read the article

  • Google I/O 2010 - Building your own Google Wave provider

    Google I/O 2010 - Open source Google Wave: Building your own wave provider. Wave 101. Dan Peterson, Jochen Bekmann, JD Zamfirescu Pereira, David LaPalomento (Novell). Learn how to build your own wave service. Google is open sourcing the lion's share of the code that went into creating Google Wave to help bootstrap a network of federated providers. This talk will discuss the state of the reference implementation: the software architecture, how you can plug it into your own use cases -- and how you can contribute to the code and definition of the underlying specification. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Visual Studio 2010 Is Here!

    I think back to the days of the first versions of Visual Studio (when it was called Visual Studio .NET, remember?) and I think about how far Microsoft has come with this IDE. It really is the best IDE on the market. There is so much to this IDE it is amazing. It now can really handle managing your complete software application development lifecycle. For me, it is (besides Windows 7) the best and most successful product Microsoft has developed. You can obviously get this now and it is available on...

    Read the article

  • How do I get my ART USB Dual Pre preamp to work?

    - by Zach
    I am using Audacity. I have an ART USB Dual Pre preamp. Ubuntu is not recognizing it whatsoever. I am able to record in Audacity, but it is using the mic that is built into my computer (which is a Compaq Presario CQ50) instead of the one plugged into the preamp. How do I get Ubuntu to recognize the preamp that is plugged into my computer? Something tells me it has to do with the installation of the preamp software. It came with an installation CD, but when I go to "install", nothing happens. I can view what is on the CD, but there is no installing of anything. Please help!

    Read the article

  • How do you Install the Latest Release of Miro?

    - by Brenton Horne
    In the Software Centre, the latest release of Miro available is 4.0.4, whereas the latest release of Miro is 5.0.4. How do I download 5.0.4 on 12.10? I have tried following the guide at http://www.getmiro.com/download/for-ubuntu/ (and thus have already run sudo add-apt-repository ppa:pcf/miro-releases), but it failed, and when I tried to run sudo apt-get update I received the error: W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/source/Sources 404 Not Found W: Failed to fetch http://ppa.launchpad.net/pcf/miro-releases/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article
