Search Results

Search found 31499 results on 1260 pages for 'database theory'.

Page 84/1260 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • ISO 12207 - testing being only validation activity? [closed]

    - by user970696
    Possible Duplicate: How come verification does not include actual testing? The ISO 12207 standard states that testing is only a validation activity, while all static inspections (checking that a requirement, code, etc. is complete and correct) are verification. I did find some articles saying this is not correct, but they are not "official". I would like to understand, because there are two different concepts in books and articles: 1) Verification is all testing except for UAT (because only the user can really validate the use), e.g. here. OR 2) Verification is everything but testing; all testing is validation, e.g. here. The definitions are mostly the same as Sommerville's: "The aim of verification is to check that the software meets its stated functional and non-functional requirements. Validation, however, is a more general process. The aim of validation is to ensure that the software meets the customer's expectations. It goes beyond simply checking conformance with the specification to demonstrating that the software does what the customer expects it to do." It is really bugging me, because I tend to agree that functional testing done on a product (SIT) is still verification, since I just follow the requirements. But ISO does not agree.

    Read the article

  • How often do CPUs make calculation errors?

    - by veryfoolish
    In Dijkstra's Notes on Structured Programming he talks a lot about the provability of computer programs as abstract entities. As a corollary, he remarks that testing isn't enough. For example, he points out that it would be impossible to test a multiplication function f(x,y) = x*y for large values of x and y across the entire ranges of x and y. My question concerns his miscellaneous remarks on "lousy hardware". I know the essay was written in the 1970s when computer hardware was less reliable, but computers still aren't perfect, so they must make calculation mistakes sometimes. Does anybody know how often this happens, or whether there are any statistics on this?
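
    To put a rough number on the exhaustive-testing remark (my own back-of-the-envelope estimate, not Dijkstra's): even for 32-bit operands there are 2^32 * 2^32 = 2^64 input pairs, so at an assumed rate of 10^9 checks per second an exhaustive test would take roughly

        \[ \frac{2^{64}}{10^{9}\ \text{checks/s} \times 3.15 \times 10^{7}\ \text{s/yr}} \approx 585\ \text{years}. \]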

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context.) Problem 1.1-5 in Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough." I'm interested in its first statement. As I understand it, the task is to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution?

    Consider a physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model; otherwise the model becomes overly complex and unsolvable. Virtually any particle in the universe has its influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than the ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the model is simple enough (which probably means it isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with a program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution.

    It's worth noting the difference between an exact-solution algorithm and an exact computation result. When considering real-world problems and real-world computing machines, I believe that no solution to a physical problem involving calculation can be exact, because universal physical constants are represented approximately in the computer. Any number is represented with limited precision, at least limited by the amount of memory available to the machine. I can imagine plenty of problems where a good-enough solution, good to some degree, will work: train scheduling, automated trading, satellite orbit calculation, health-care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations of computer memory, or the nature of the problems.

    I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book Introduction to Algorithms, the term "solution" means an algorithm or a program, which in this case gives the exact answer on each input). But that's probably more of theoretical interest. So I would like to narrow down the question to: What are the real-world practical problems where only the best (exact) solution algorithm or program will do (and not a good-enough solution)? There are problems, like breaking cryptographic ciphers, where only the exact solution matters in practice, and, again in practice, the process of deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there is no practical need for an instant crack, though it would be desirable. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having minimal possible network traffic, etc. And still I want this question to be theoretical if possible,
    in the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that is the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical purposes are only required to be good enough; in rare cases really very, very good, but still not the best. Isn't that so? :) If not, can you provide an example? Or can you name any such unsolved problem of practical interest?

    Read the article

  • Database Deployment: The Bits - Copying Data Out

    Occasionally, when deploying a database, you need to copy data out to file from all the tables in a database. Phil Factor shows how to do it, and illustrates its use by copying an entire database from one server to another.
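
    The article body is not reproduced here, so the following is not necessarily Phil Factor's method; it is just one common, minimal sketch: generate a bcp "out" command for every user table and run the resulting commands. The server name and export path are hypothetical placeholders.

        -- Sketch only: emit one native-format bcp export command per user table.
        -- 'MyServer' and 'C:\Export\' are hypothetical placeholders.
        SELECT 'bcp ' + QUOTENAME(DB_NAME()) + '.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
             + ' out "C:\Export\' + s.name + '_' + t.name + '.dat" -S MyServer -T -n'
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        ORDER BY s.name, t.name;

    Each generated line can then be run from a command prompt to export that table's data to its own file.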

    Read the article

  • Validation and Verification explanation (Boehm) - I cannot understand its point

    - by user970696
    Hopefully my last thread about V&V, as I found B. Boehm's text, which I just do not understand well (likely my technical English is not that good): http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf Basically he says that verification is about checking that products derived from the requirements baseline correspond to it, and that deviations lead only to changes in these derived products (design, code). But he says it begins with design and ends with acceptance tests (you can check the V model inside). The thing is, I have accepted ISO 12207's view that all testing is validation, yet that does not make any sense here: in order to be sure the product complies with the requirements (acceptance test), I need to test it. Also, he says that validation problems mean the requirements are bad and need to be changed, which is not what happens with the testing that testers do, who just check correspondence with the requirements.

    Read the article

  • Bug severity classification issues

    - by KyleMinn
    In a book I have, there is the following classification of defects:
    Critical: a defect receives a "critical" severity level if one or more critical system functionalities are impaired and there is no workaround.
    High: a defect receives a "high" severity level if some fundamental system functionalities are impaired but a workaround exists.
    Medium: a defect receives a "medium" severity level if no critical functionality is impaired and a workaround exists for the defect.
    Low: a defect receives a "low" severity level if the problem involves a cosmetic feature of the system.
    To be honest, I do not get it. For example, point 2: what if a fundamental but not critical feature is impaired and there is NO workaround? The same for point 3: what if no critical functionality is affected but there is no workaround? E.g. an optional field on the registration form does not work: there is no workaround, but it is barely an issue.
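
    To make the gap concrete, here is a minimal sketch of the book's four rules written as a T-SQL CASE expression over hypothetical flag columns; the combinations questioned above match none of the branches and fall through to the ELSE.

        -- Sketch only: the table and flag columns (dbo.Defects, critical_impaired,
        -- fundamental_impaired, has_workaround, cosmetic_only) are hypothetical.
        SELECT d.id,
               CASE
                 WHEN d.critical_impaired    = 1 AND d.has_workaround = 0 THEN 'Critical'
                 WHEN d.fundamental_impaired = 1 AND d.has_workaround = 1 THEN 'High'
                 WHEN d.critical_impaired    = 0 AND d.has_workaround = 1 THEN 'Medium'
                 WHEN d.cosmetic_only        = 1                          THEN 'Low'
                 ELSE NULL -- e.g. a fundamental feature impaired with no workaround: unclassified
               END AS severity
        FROM dbo.Defects AS d;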

    Read the article

  • Copy Table to Another Database

    - by Derek Dieter
    There are a few methods of copying a table to another database, depending on your situation. Same SQL Server Instance: if you are trying to copy a table to a database that is on the same instance of SQL Server, the easiest solution is to use a SELECT INTO with fully qualified database names.
        SELECT * INTO Database2.dbo.TargetTable FROM Database1.dbo.SourceTable
    This will [...]
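
    For a table in a database on a different SQL Server instance (the part truncated above), one common approach, sketched here under the assumption that a linked server is already configured, is to use the four-part name of the source table; 'LinkedServerName' is a hypothetical placeholder.

        -- Sketch only: copy a table from a database on another (linked) instance.
        -- 'LinkedServerName' must already exist as a linked server.
        SELECT *
        INTO   dbo.TargetTable
        FROM   [LinkedServerName].Database1.dbo.SourceTable;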

    Read the article

  • Database Insider - September 2012 issue

    - by Javier Puerta
    The September issue of the Database Insider newsletter is now available. (Full newsletter here)
    IT ROI CENTER: Oracle Exadata IT ROI Center: Next Steps for Transforming Your Business. Visit Oracle's IT ROI Center to discover how customers are using Oracle Exadata to improve efficiency, increase service levels, raise employee productivity, and enable faster time to market, all with lower IT costs.
    CUSTOMER BUZZ: 30 Times Performance Improvement at P&G with Oracle Exadata; BNP Paribas Runs Global Trading 17 Times Faster with Oracle Exadata; Banco Santander (Brasil) S.A. Transforms Data Center with Oracle Exadata.
    FEATURED TRAINING: On Demand Training: Oracle Exadata Database Machine. Learn about Oracle Exadata Database Machine today using Oracle University's video streaming training on demand. View a free sample video of the Oracle Exadata Database Machine course.

    Read the article

  • Renaming a Published SQL Server Database

    I have transactional replication configured in production. I am wondering whether we could rename the publication database in transactional replication without having to drop and recreate the replication setup. Also, is it possible to rename the database files of the publication database without affecting the replication configuration?

    Read the article

  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and provides the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance is high redundancy for the +DATA and +RECO disk groups. While there is 12TB of raw shared storage available, the Database Backup Location and Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local Backup option is selected, 60% of the available shared storage is allocated to the Fast Recovery Area, which contains database backups and archive logs. The External Backup option allocates 20% of the available shared storage to the Fast Recovery Area.
    So, let's look at an example of High Redundancy and External Backups:
    Disk Group Redundancy - High --> triple mirroring provides ~4TB of available storage.
    Database Backup Location - External --> 20% of available shared storage allocated to +RECO.
    +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage.
    What about Normal Redundancy with External Backups?
    Disk Group Redundancy - Normal --> double mirroring provides ~6TB of available storage.
    Database Backup Location - External --> 20% of available shared storage allocated to +RECO.
    +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage.
    As a best practice, we recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
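
    As a quick back-of-the-envelope check of those figures (assuming usable capacity is simply the 12TB of raw storage divided by the mirroring factor, then split 80/20 between +DATA and +RECO when external backups are chosen):

        \[ \text{High: } \tfrac{12}{3} = 4\ \text{TB}; \quad +\text{DATA} = 0.8 \times 4 = 3.2\ \text{TB}, \quad +\text{RECO} = 0.2 \times 4 = 0.8\ \text{TB} \]
        \[ \text{Normal: } \tfrac{12}{2} = 6\ \text{TB}; \quad +\text{DATA} = 0.8 \times 6 = 4.8\ \text{TB}, \quad +\text{RECO} = 0.2 \times 6 = 1.2\ \text{TB} \]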

    Read the article

  • How to use lists in equivalence partitioning?

    - by KhDonen
    I have read that equivalence partitioning can typically be used for intervals or lists, so I assume it can be used for any set of inputs. Anyway, if the requirement says that the allowed colors are (RED, BLUE, BLACK, GREEN), I cannot treat them like a list, right? I mean, testing one of them would not be enough, because the developers most likely used some switch-case, and thus it is not a real "set" where one value could also represent the others. So how is this meant for lists? Also, something that is not clear to me: I do not think it is always possible to do the initial partitioning and then design the test cases. What about checking the intersection of two lines, Y = MX + C (two inputs)? 1) The lines are parallel: M1 = M2, but C1 must be different from C2. 2) The lines are intersecting: M1 must be different from M2. 3) Coincident: they are the same. How can I use partitioning here? This is actually taken from a book, and it says that these sets are equivalence classes.

    Read the article

  • Is there such a thing these days as programming in the small?

    - by WeNeedAnswers
    With all the programming languages that are out there, what exactly does it mean to program "in the small", and is it still possible without the possibility of re-purposing to the large? The original article that mentions "in the small" dates to 1975 and referred to scripting languages (as glue languages). Maybe I am missing the point, but any language that you can build components of code out of I would regard as being able to handle "the large". Is there confusion about what objects are, and do they really figure as being mandatory for handling "the large"? Many have argued that this is the true meaning of "in the large" and that the concepts of objects are the best fit for the job.

    Read the article

  • Quality Assurance=inspections, reviews..?

    - by user970696
    Studying this subject extensively, I find that most books state the following: Quality Assurance: a prevention activity; the act of inspection, reviewing, etc. Quality Control: testing. There are some exceptions that say QA deals only with processes (planning, strategy, applying standards, etc.), which is IMHO much closer to real QA, yet I cannot find any good reference in Google Books. I believe that inspections, reviews, and testing are all quality control, as they are about checking products, no matter whether it is the final product or work products. The problem is that so many authors do not agree. I would be grateful for a detailed explanation, ideally with a reference.

    Read the article

  • Legal Applications of Metamorphic Code

    - by V_P
    Firstly, I would like to state that I already understand the 'vx' applications of metamorphic code. I am not here to ask a question related to any of those topics, as that would be inappropriate in this context. I would like to know if anyone has ever used metamorphic code in practice for purposes other than those previously stated, and if so, what the reasoning was for using the concept. In essence, I am trying to discover a purpose for this concept, if any, other than circumventing anti-virus scanners and the like.

    Read the article

  • Database Insider - December 2012 issue

    - by Javier Puerta
    The December issue of the Database Insider newsletter is now available. (Full newsletter here)
    Big Data: From Acquisition to Analysis. 2012 will likely be remembered as the year of big data, as a new generation of technologies enables organizations to acquire, organize, and analyze the exponentially growing and typically less-structured data generated from a variety of new sources. Oracle has produced a series of five short videos that offer a quick and compelling high-level introduction to big data.
    Total Cost of Ownership Comparison: Oracle Exadata vs. IBM P-Series. Read the research that found that, over three years, the IBM hardware running Oracle Database cost 31 percent more in total cost of ownership than Oracle Exadata.
    Webcast: Oracle Exadata Database Machine X3. Learn about Oracle's next-generation database machine, Oracle Exadata X3, which combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Available in an eighth-rack configuration, it allows you to start small and grow.
    Maximum Availability with Oracle GoldenGate. Discover how to eliminate not only unplanned downtime but also planned downtime resulting from database upgrades, migrations, and consolidation. Thursday, December 13, 19:00 CET / 6 p.m. UK.

    Read the article

  • Restoring a Publisher Database in SQL Server

    Introduction: Restoring any database is a critical task, and it is complicated further when the database to be restored is a publisher database. For the purposes of this article, I will assume familiarity with the different types ... [Read Full Article]
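
    The full article is truncated above, so this is not necessarily its exact procedure; one relevant option is KEEP_REPLICATION, which preserves replication settings when restoring a published database. A minimal sketch, with a hypothetical database name and backup path:

        -- Sketch only: restore a publisher database while preserving replication settings.
        -- 'PublisherDB' and the backup path are hypothetical placeholders.
        RESTORE DATABASE PublisherDB
        FROM DISK = N'C:\Backups\PublisherDB.bak'
        WITH KEEP_REPLICATION, RECOVERY;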

    Read the article

< Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >