Search Results

Search found 73253 results on 2931 pages for 'enterprise data quality solutions and news'.


  • Getting out of my head

    - by BenCole
    (I put this on SO, but it got a couple of close votes saying it belonged here instead...) I've spent the last year as a single-person team developing a rich-client application (35,000+ LoC, for what it's worth). It's currently stable and in production. However, I know that my skills were rusty at the beginning of the project, so without a doubt there are major issues in the code. At this point, most of those issues are in architecture, structure, or interactions - the easy problems, even the easy architecture/design problems, have already been weeded out. Unfortunately, I've spent so much time with this project that I'm having a hard time thinking outside of it - approaching it from a new perspective to see the flaws deeply buried or inherent in the design. How do I step outside my head and outside my code to get a fresh look and make the code better? Is this less of an issue than I think it is, or is this a problem for other people as well?

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements of microseconds were worth even considering. My father is a retired programmer; I showed him the responses and he was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps that kind of consideration is up to the people who write the actual language implementation (PHP, in the above case)? The environmental factors could be important too - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code is when replicated trillions of times on millions of websites. I'd like answers preferably based on facts about programming. Is micro-optimisation important when coding? EDIT: My personal summary of the 25 answers, with thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can stop us from making obviously bad choices when coding, such as writing if (expensiveFunction() && counter < X) when it should be if (counter < X && expensiveFunction()) (example from @zidarsk8). The function could turn out to be inexpensive, in which case changing the code would be micro-optimisation; but with a basic understanding, you would not have to change it, because you would write it correctly in the first place.
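
    To make the summary's example concrete, here is a minimal sketch of short-circuit ordering (in Python rather than the thread's PHP; the function name and its simulated cost are illustrative, not from the thread):

        # Short-circuit ordering: put the cheap test first so the expensive
        # call is skipped once the cheap condition fails.
        import time

        def expensive_check() -> bool:
            time.sleep(0.01)        # stand-in for a slow lookup
            return True

        counter, X = 100, 50

        # Wasteful: expensive_check() always runs, even though counter < X
        # is False and the whole condition is doomed to fail.
        if expensive_check() and counter < X:
            pass

        # Better: the cheap comparison fails first and short-circuits,
        # so expensive_check() is never called.
        if counter < X and expensive_check():
            pass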

    Read the article

  • How can I sell a legacy program rewrite to the business?

    - by Wil
    We have a legacy classic ASP application that's been around since 2001. It badly needs to be rewritten, but it's working fine from an end-user perspective. The reason I feel a rewrite is necessary is that when we need to update it (which is admittedly not that often), it takes forever to dig through all the spaghetti code and fix problems. Adding new features is also a pain, since it was architected and coded badly. I've run a cost analysis of the maintenance for them, but they are willing to spend more on the small maintenance jobs than on a rewrite. Any suggestions on convincing them otherwise?

    Read the article

  • Mark Hurd on the Customer Revolution: Oracle's Top 10 Insights

    - by Richard Lefebvre
    Reprint of an article from Forbes.

    Businesses that fail to focus on customer experience will hear a giant sucking sound from their vanishing profitability. Because in today's dynamic global marketplace, consumers now hold the power in the buyer-seller equation, and sellers need to revamp their strategy for this new world order. The ability to relentlessly deliver connected, personalized and rewarding customer experiences is rapidly becoming one of the primary sources of competitive advantage, and the inability or unwillingness to realize that the customer is a company's most important asset will lead, inevitably, to decline and failure. Welcome to the lifecycle of customer experience, in which consumers explore, engage, shop, buy, ask, compare, complain, socialize, exchange, and more across multiple channels, with the unconditional expectation that each of those interactions will be completed in an efficient and personalized manner however, wherever, and whenever the customer wants.

    While many niche companies are offering point solutions within that sprawling and complex spectrum of needs and requirements, businesses looking to deliver superb customer experiences are still left having to do multiple product evaluations, multiple contract negotiations, multiple test projects, multiple deployments, and, perhaps most annoying of all, multiple and never-ending integration projects to string together all those niche products from all those niche vendors. With its new suite of customer-experience solutions, Oracle believes it can help companies unravel these challenges and move at the speed of their customers, anticipating their needs and desires and creating enduring and profitable relationships. Those solutions span the full range of marketing, selling, commerce, service, listening/insights, and social and collaboration tools for employees.

    When Oracle launched its suite of Customer Experience solutions at a recent event in New York City, president Mark Hurd analyzed the customer experience revolution taking place and presented Oracle's strategy for empowering companies to capitalize on this important market shift. From Hurd's presentation and related materials, I've extracted Hurd's Top 10 Insights into the Customer Revolution:

    1. Please Don't Feed the Competitor's Pipeline! After enduring a poor experience, 89% of consumers say they would immediately take their business to your competitor. (Except where noted, the source for these findings is the 2011 Customer Experience Impact (CEI) Report, including a survey commissioned by RightNow (acquired by Oracle in March 2012) and conducted by Harris Interactive.)
    2. The Addressable Market Is Massive. Only 1% of consumers say their expectations were always met by their actual experiences.
    3. They're Willing to Pay More! In return for a great experience, 86% of consumers say they'll pay up to 25% more.
    4. The Social Media Microphone Is Always Live. After suffering through a poor experience, more than 25% of consumers said they posted a negative comment on Twitter, Facebook, or other social media sites. Conversely, of those consumers who got a response after complaining, 22% posted positive comments about the company.
    5. The New Deal Is Never Done: Embrace the Entire Customer Lifecycle. An appropriately active and engaged relationship, says Hurd, extends across every step of the entire process: need, research, select, purchase, receive, use, maintain, and recommend.
    6. The 360-Degree Commitment. Customers want to do business with companies that actively and openly demonstrate the desire to establish strong and seamless connections across employees, the company, and the customer, says research firm Temkin Group in its report "The CX Competencies."
    7. Understand the Emotional Drivers Behind Brand Love. What makes consumers fall in love with a brand? Among the top factors are friendly employees and customer reps (73%), easy access to information and support (55%), and personalized experiences, such as when companies know precisely what products or services customers have purchased in the past and what issues those customers have raised (36%).
    8. The Importance of Immediate Action. You've got one week to respond, and then the opportunity is lost. If your company needs more than a week to answer a prospect's question or request, most of those prospects will terminate the relationship.
    9. Want More Revenue, Less Churn, and More Referrals? Then improve the overall customer experience: Forrester's research says that approach put an extra $900 million in the pockets of wireless service providers, $800 million for hotels, and $400 million for airlines.
    10. The Formula for CX Success. Hurd says it includes three elegantly interlaced factors: Connected Engagement, to personalize the experience; Actionable Insight, to maximize the engagement; and Optimized Execution, to deliver on the promise of value.

    RECOMMENDED READING:
    The Top 10 Strategic CIO Issues For 2013
    Wal-Mart, Amazon, eBay: Who's the Speed King of Retail?
    Career Suicide and the CIO: 4 Deadly New Threats
    Memo to Marc Benioff: Social Is a Tool, Not an App

    Read the article

  • What is the politically correct way of refactoring others' code?

    - by dukeofgaming
    I'm currently working in a geographically distributed team in a big company. Everybody is focused on today's tasks and getting things done, which means things sometimes have to be done the quick way, and that causes problems... you know, same old, same old. I'm bumping into code with several smells, such as: big functions, pointless utility functions/methods (essentially just to save writing a word), overcomplicated algorithms, extremely big files that should be broken down into different files/classes (1,500+ lines), etc. What would be the best way of improving the code without making other developers feel bad/wrong about any proposed improvements?

    Read the article

  • On Adopting the Mindset of an Enterprise DBA

    Although many of the important tasks a DBA has to perform should be done 'by hand', keying in commands or using SSMS, the canny DBA with a heavy workload will always have an eye to automating routine tasks wherever possible, or using a tool.
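
    As a hedged illustration of the kind of routine-task automation the piece has in mind, a scheduled script can replace a query run by hand in SSMS every morning. The instance names, driver version, and msdb query below are assumptions to adapt locally, not anything prescribed by the article:

        # Hypothetical sketch: poll several SQL Server instances for failed
        # Agent jobs instead of checking each one by hand in SSMS.
        import pyodbc

        SERVERS = ["SQLPROD01", "SQLPROD02"]    # hypothetical instance names

        QUERY = """
        SELECT j.name, h.run_date
        FROM msdb.dbo.sysjobs AS j
        JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
        WHERE h.run_status = 0 AND h.step_id = 0  -- 0 = failed, step 0 = job outcome
        """

        for server in SERVERS:
            conn = pyodbc.connect(
                f"DRIVER={{ODBC Driver 17 for SQL Server}};"
                f"SERVER={server};Trusted_Connection=yes;"
            )
            for name, run_date in conn.execute(QUERY):
                print(f"{server}: job '{name}' failed (run_date {run_date})")
            conn.close()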

    Read the article

  • Verification of requirements question

    - by user970696
    Doing a lot of reading about V&V, I need to clarify the following. Many definitions (the less formal ones found in books) define verification like this: Verification: the software should conform to its specification. But then they speak about requirements verification, design verification, etc. If I treat these items as the "software" when applying that definition, what should I check them against? What specification should the requirements, which are the most basic information, conform to? And one more thing: shouldn't requirements also be validated, to make sure they meet the customer's needs? All the texts I have speak only about software validation at the end of the development process.

    Read the article

  • One-week release cycle: how do I make this feasible?

    - by Arkaaito
    At my company (a 3-year-old web industry startup), we have frequent problems with the product team saying "aaaah this is a crisis, patch it now!" (doesn't everybody?). This has an impact on the productivity (and morale) of the engineering staff, myself included. Management has spent some time thinking about how to reduce the frequency of these same-day requests and has come up with the solution that we are going to have a release every week. (Previously we'd been doing one every two weeks, which usually slipped by a couple of days or so.) There are 13 developers and 6 local / 9 offshore testers; the theory is that only 4 developers (and all testers) will work on even-numbered releases, unless a piece of work comes up that really requires some specific expertise from one of the other devs. Each cycle will contain two days of dev work and two days of QA work (plus 1 day of scoping / triage / ...). My questions are: (a) Does anyone have experience with this length of release cycle? (b) Has anyone heard of this length of release cycle even being attempted? (c) If (a) or (b), how on Earth do you make it work? (Any pitfalls to avoid, etc., are also appreciated.) (d) How can we minimize the damage if this effort fails?

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should prove that you are building the product right, while validation should prove that you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say something is implemented correctly if you do not test it? It is said that verification checks, for example, the code for its correctness. Verification: ensure that the product meets the specified requirements. Again, if a function is specified to work in a certain way, only by testing can I say that it does. Could anyone explain this to me, please? EDIT: As the wiki says: Verification: preparation of the test cases (based on analysis of the requirements). Validation: running of the test cases.

    Read the article

  • After how many lines of code should a function be broken down?

    - by Sumeet
    While working on existing code bases, I usually come across procedures that contain abusive use of if and switch statements. The procedures consist of overwhelming code which I think badly needs refactoring. The situation gets worse when I identify that some of these procedures are recursive as well. But this is always a matter of debate, as the code is working fine and no one wants to wake the dragon; everyone accepts, though, that it is very expensive code to maintain. I am wondering if there are any recommendations for determining whether a particular method is a culprit and needs a revisit/rewrite, so that it can be broken down or made polymorphic in an effective manner. Are there any metrics (like the number of lines in a procedure) that can be used to identify such segments of code? A checklist or advice to convince everyone would be great!
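
    One frequently cited metric, alongside raw line counts, is McCabe's cyclomatic complexity: the number of independent paths through a procedure, with values above roughly 10 commonly treated as a refactoring signal. Below is a minimal sketch of the idea in Python using the standard ast module; it is an approximation, and production tools such as radon or lizard count more constructs and more carefully:

        # Approximate cyclomatic complexity: 1 + the number of branch points.
        import ast

        BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                        ast.BoolOp, ast.IfExp)

        def cyclomatic_complexity(source: str) -> int:
            """Count decision points + 1 for the given function source."""
            tree = ast.parse(source)
            return 1 + sum(isinstance(node, BRANCH_NODES)
                           for node in ast.walk(tree))

        SAMPLE = '''
        def classify(x):
            if x < 0:
                return "negative"
            elif x == 0:
                return "zero"
            return "positive"
        '''

        print(cyclomatic_complexity(SAMPLE))   # 2 decision points -> 3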

    Read the article

  • Code structure for multiple applications with a common core

    - by Azrael Seraphin
    I want to create two applications that will have a lot of common functionality. Basically, one system is a more advanced version of the other. Let's call them Simple and Advanced. The Advanced system will add to, extend, alter, and sometimes replace the functionality of the Simple system. For instance, the Advanced system will add new classes, add properties and methods to existing Simple classes, change the behavior of classes, etc. Initially I was thinking that the Advanced classes would simply inherit from the Simple classes, but I can see the functionality diverging quite significantly as development progresses, even while maintaining a core base functionality. For instance, the Simple system might have a Project class with a Sponsor property, whereas the Advanced system has a list of Project.Sponsors. It seems poor practice to inherit from a class and then hide, alter, or throw away significant parts of its features. An alternative is just to run two separate code bases and copy the common code between them, but that seems inefficient, archaic, and fraught with peril. Surely we have moved beyond the days of "copy-and-paste inheritance". Another way to structure it would be to use partial classes and have three projects: Core, which has the common functionality; Simple, which extends the Core partial classes for the simple system; and Advanced, which also extends the Core partial classes for the advanced system. Plus three matching test projects, one for each system. This seems like a cleaner approach. What would be the best way to structure the solution/projects/code to create two versions of a similar system? Let's say I later want to create a third system called Extreme, largely based on the Advanced system. Do I then create an AdvancedCore project which both Advanced and Extreme extend using partial classes? Is there a better way to do this? If it matters, this is likely to be a C#/MVC system, but I'd be happy to do this in any language/framework that is suitable.
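
    Since the question is explicitly open to any language, here is a minimal sketch of the Core/Simple/Advanced layering in Python; the Project and Sponsor names come from the question, everything else is illustrative. The shape is the point: both editions extend Core, and Advanced never inherits from Simple only to hide or discard its members:

        # Core owns the behavior both editions share.
        class CoreProject:
            def __init__(self, name: str):
                self.name = name

            def describe(self) -> str:
                return f"Project {self.name}"

        # Simple edition: a single Sponsor, as in the question.
        class SimpleProject(CoreProject):
            def __init__(self, name: str, sponsor: str):
                super().__init__(name)
                self.sponsor = sponsor

        # Advanced edition diverges to a list of sponsors, but it diverges
        # from Core, so nothing inherited from Simple has to be hidden.
        class AdvancedProject(CoreProject):
            def __init__(self, name: str, sponsors: list[str]):
                super().__init__(name)
                self.sponsors = list(sponsors)

    A later Extreme edition would then extend whichever layer it genuinely shares (Core, or a new AdvancedCore split out of Advanced), keeping the rule that each edition inherits only what it keeps intact.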

    Read the article

  • How to recover data from an NTFS partition that was made into a swap partition?

    - by Raghav Mehta
    I have extremely important stuff on my Windows partition, which, during the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected to be the swap space (without even knowing what that actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows. I don't see my Windows partition as a mountable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e., my Windows partition), click Edit Partition, select HPFS/NTFS as the type, tick Bootable, and click OK, the small processing icon keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error; thus, I am still unable to use Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.
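
    For anyone in the same spot, a hedged sketch of the usual first-response steps from an Ubuntu terminal. The device name /dev/sda2 is an assumption (check yours with sudo fdisk -l first), and be warned that anything the swap area has already overwritten is most likely gone for good:

        # Stop Ubuntu from writing to the partition at all.
        sudo swapoff -a

        # Install GNU ddrescue (packaged as gddrescue) and TestDisk/PhotoRec.
        sudo apt-get install gddrescue testdisk

        # Image the damaged partition and work only on the copy; this needs
        # free space at least as large as the partition itself.
        sudo ddrescue /dev/sda2 sda2.img sda2.log

        # Carve recoverable files (documents, photos, ...) out of the image.
        photorec sda2.img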

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope, I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non-issue if I could use non-nested scopes:

        begin scope1
            filename = ...
            begin scope2
                file = open(filename)
            end scope1
            // use file here
            // can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways can we solve this scope problem?
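
    One common rule of thumb, sketched here in Python with the question's own names: confine the temporary to a small helper's scope so the enclosing scope never sees it, and inline the temporary entirely when it is used only once.

        # The throwaway name lives only inside the helper, which gives the
        # effect of the "non-nested scope" the question asks for.
        def open_config():
            filename = "path/to/file"   # placeholder path from the question
            return open(filename)

        file = open_config()            # `filename` is not in scope here

        # For the regexp case, the temporary list can simply be inlined.
        import re
        regexp = re.compile("|".join(["foo", "bar"]))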

    Read the article

  • Are flag variables an absolute evil?

    - by dukeofgaming
    I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture/code; however, it is a common practice in other projects I work on, and when code grows and flags are added, IMHO code-spaghetti grows too. Would you say there are any cases where using flags is a good practice or even necessary, or would you agree that flags in code are... red flags, and should be avoided/refactored? Me, I just get by with functions/methods that check for state in real time instead. Edit: I am not talking about compiler flags.
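
    To make the asker's alternative concrete, a small sketch in Python (the Document example is hypothetical, not from the thread). The flag version must be kept in sync by every mutation; the flag-free version derives the answer from state on demand:

        # Flag version: `dirty` only stays truthful if every mutating method
        # remembers to set it.
        class DocumentWithFlag:
            def __init__(self):
                self.content = ""
                self.saved_content = ""
                self.dirty = False

            def edit(self, text: str):
                self.content = text
                self.dirty = True       # forget this once and the flag lies

        # Flag-free version: check the state in real time when asked.
        class Document:
            def __init__(self):
                self.content = ""
                self.saved_content = ""

            @property
            def dirty(self) -> bool:
                return self.content != self.saved_content

        doc = Document()
        doc.content = "draft"
        print(doc.dirty)                # True, derived, with no flag to forget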

    Read the article

  • Does using structured data semantic LocalBusiness schema markup work for local EMD URLs?

    - by ElHaix
    Based on what I have read about Google's recent Panda and Penguin updates, I'm getting the impression that using semantic markup may help improve SEO results. On an EMD (exact-match domain) site that may have been hit, we list location-based products. We are now going to be adding an itemtype="http://schema.org/Product" to each product, with relevant details. However, that product may be available in Los Angeles and also appear in a Seattle results page. We could add a LocalBusiness itemtype on each geo page to define the geo location for that page, although the definition states: "A particular physical business or branch of an organization. Examples of LocalBusiness include a restaurant, a particular branch of a restaurant chain, a branch of a bank, a medical practice, a club, a bowling alley, etc." We could then use the location property, which would simply include the city/state details. I realize that this looks like it is meant for a physical location; however, could this be done without seeming black-hat?
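
    For concreteness, a minimal microdata sketch of the combination described, in HTML since that is the markup in question. The business and city values are placeholders, and this only illustrates the syntax, not a verdict on the black-hat question:

        <!-- Hypothetical geo page: LocalBusiness for the location context,
             Product for the listing itself. All values are placeholders. -->
        <div itemscope itemtype="http://schema.org/LocalBusiness">
          <span itemprop="name">Example Widgets - Seattle</span>
          <div itemprop="address" itemscope
               itemtype="http://schema.org/PostalAddress">
            <span itemprop="addressLocality">Seattle</span>,
            <span itemprop="addressRegion">WA</span>
          </div>
        </div>

        <div itemscope itemtype="http://schema.org/Product">
          <span itemprop="name">Example Widget</span>
          <span itemprop="description">Available at our Seattle location.</span>
        </div>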

    Read the article

  • Functional testing in verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or to validation? ISO 12207:2008 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that is more high-level, like UAT test cases written by business users. The ISO mentioned above does not mention any specific verification testing (7.2.4.3.2) except for requirements verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing, like functional, load, etc. In the IEEE standard for V&V, you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." So that is different from the ISO, where validation mentions testing as the activity. Not to mention a lot of contradicting information on the net. I would really appreciate a reference to, e.g., a standard in the answer, or an explanation of what I missed in the ISO. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Tools for Enterprise Architects: OmniGraffle for iPad?

    - by pat.shepherd
    Well, I have to admit to being a bit of an Apple fan and, of course, an early adopter of gadgets and technology in general. So, when FedEx showed up with my iPad 3G last week, I was a kid in a candy store. One of the apps that my "buy finger" was hovering over for a while (like all of 3 days) was OmniGraffle for the iPad. I imagined that it would be very cool to use this with a customer's EAs to sketch out Business, Application, Information and Technology architectures. Instead of using the blackboard, this seemed to offer promise as a white-boarding tool with obvious benefits over a traditional whiteboard. I figured I'd get a VGA adapter, plug it into the customer's projector, and off we would go with a great JAD tool. The touch-pad approach offered an additional hands-on kind of feel. So, I made the $49.99 purchase + the $29.99 VGA adapter and tried to give it a go. Well, I was both pleasantly and unpleasantly surprised. It is both powerful and easy to use. There are great stencils included for shapes, software icons, Visio shapes, and even UML notation. There is even a free-hand tool that works well. I created some diagrams pretty quickly; the one below was just a test and took all of 10 minutes to do. The only problem was that OmniGraffle does not recognize the VGA output, so I was stopped dead in my tracks, as it were. My use case was as a collaborative diagramming tool with other architects, though I can still use it offline. I called OmniGraffle and they said that VGA support is on the feature request list, so, hopefully, in a short amount of time I can use the tool as I envisioned.

    Review:

        Criteria                    Result
        Is it fun?                  Yes
        Is it useful?               Yes
        Does it show promise?       Yes
        Did the VGA output work?    No
        File/diagram formats        PDF, OmniGraffle proprietary, image

    Quick sample: (diagram image omitted)

    OmniGraffle for iPad - Products - The Omni Group

    Read the article

  • Which reference provides your definition of "elegant" or "beautiful" code?

    - by Donnied
    This question is phrased in a very specific way: it asks for references. There was a similar question posted which was closed because it was considered a duplicate of a good-code question. The Programmers FAQ points out that answers should have references; otherwise it is just an unproductive sharing of (seemingly) baseless opinions. There is a difference between the shortest code and the most elegant code. This becomes clear in several seminal texts: Dijkstra, E. W. (1972). The humble programmer. Communications of the ACM, 15(10), 859-866. Kernighan, B. W., & Plauger, P. J. (1974). Programming style: Examples and counterexamples. ACM Comput. Surv., 6(4), 303-319. Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97-111. doi:10.1093/comjnl/27.2.97. They all note the importance of clarity over brevity. Kernighan & Plauger (1974) provide descriptions of "good" code, but "good code" is certainly not synonymous with "elegant". Knuth (1984) describes the importance of exposition and "excellence of style" to elegant programs; he cites Hoare, who describes that code should be self-documenting. Dijkstra (1972) indicates that beautiful programs optimize efficiency but are not opaque. This sort of conversation is qualitatively different from a random sharing of opinions. Therefore, the question: which reference provides your definition of "elegant" or "beautiful" code? "Which *reference*" is not subjective; anything else will most likely shut the thread down, so please supply *references*, not opinions.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, it is in need of a major refresh. The existing system:

    - A Microsoft Access project performs dashboard duties
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project
    - 3rd-party libraries referenced by Access directly interface with a control system (read: motors, conveyors, and counters)
    - Individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility

    The issue: this system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction, and who prescribed the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • C: What is a good source to teach standard/basic code conventions to someone newly learning the language?

    - by shan23
    I'm tutoring someone who can be described as a rank newcomer to C. Understandably, she does not know much about generally practiced coding conventions, and hence all her programs tend to use single-letter variables, mismatched spacing/indentation, and the like, making it very difficult to read/debug her endeavors. My question is: is there a link/set of guidelines and examples which she can use for adopting basic code conventions? It should not be so arcane as to scare her off, yet inclusive enough to have the basics covered (so that no one would wince looking at the code). Any suggestions?

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by Mike.Hallett(at)Oracle-BI&EPM
    MONDAY 12TH NOVEMBER, 2012, IN LONDON (UK). For Oracle Partners across Europe, the Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special-interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops. Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include:

    - The Fusion Middleware Stack: Engineered to work together
    - A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City Moorgate offices. Places are limited; register from this link (see the Register button at the bottom right of the page). Note: registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event schedule: during this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.

    Nov. 12th: Day 1, main plenary session (full day, starting 10.30 am), with an Oracle-hosted dinner in the evening.
    Nov. 13th onwards:
    - Architecture Masterclass: IM Reference Architecture - Big Data and Little Data combined (1 day)
    - BI-Apps Bootcamp (4 days)
    - Oracle GoldenGate workshop (1 day)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected].

    Read the article

  • Enterprise Trade Compliance: Changing Trade Operations around the World

    - by John Murphy
    We live in a world of incredible bounty and speed where any product can be delivered anywhere on earth. However, our world is also filled with challenges for business – where volatility, uncertainty, risk, and chaos are our daily companions. To prosper amid the realities of this new world, organizations cannot rely on old strategies; they need new business models. Key trends within the global economy are mandating that companies fully integrate global trade management best practices within broader supply chain management strategies, rather than simply leaving it as a discrete event at the end of the order or procurement cycle. To explain, many companies face a complicated and changing compliance environment. This is directly linked to the speed and configuration of the supply chain, particularly with the explosion of new markets, shorter service cycles and ship times, accelerating rates of globalization and outsourcing, and increasing product complexity and regulation. Read More...

    Read the article

  • System testing - making sure the system conforms to specification. Validation?

    - by user970696
    After weeks of research I have nearly completed my thesis, yet I am unable to clear up the confusion running through all my previous threads here (and through many books). During system testing, we check the system's functions against the system analysis (the functional system design), but that would fit the definition of verification according to many books. Yet I follow ISO 12207, which considers all testing to be validation (making sure the work product meets the requirements for its intended use). How can I justify that unit testing or system testing is validation, even though I check the product against a specification, which fulfils the definition of verification? When testing that, e.g., a "Save" button works, is that validation? This picture shows my understanding of V&V, which differs from many other sources, including the ISTQB, etc. The essential problem I have is that a book using the same picture also states in another place that: "test activities in the area of validation are usability, alpha and beta testing. For verification, testable system requirements are defined whose correct implementation can be tested through system tests." Isn't that the opposite of what the picture says? Most books present the following picture, where validation is just making sure that customer needs are satisfied. Mind you that according to the ISO, the validation activity is testing.

    Read the article
