Search Results

Search found 65503 results on 2621 pages for 'real application testing'.

Page 104/2621 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar.

    My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, and that tools such as Sonar will provide us with much-needed metrics on code coverage, Javadoc, package dependencies and the like.

    He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bitten us in the ass), he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyway.

    I contend that having the accurate metrics that Sonar (or fill in your favorite similar tool) provides gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need?

    Some more background: an alumnus of our company is working at a Navy research lab now and suggested these two tools in particular as ones they've had great success with. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" type and I'm more of the "give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

  • Are there any books dedicated to writing test code?

    - by joshin4colours
    There are many programming books dedicated to useful programming and engineering topics, like working with legacy code or particular languages. The best of these books become "standard" or "canonical" references for professional programmers. Are there any books like this (or that could be like this) for writing test code? I don't mean books about Test-Driven Development, nor do I mean books about writing good (clean) code in general. I'm looking for books that discuss test code specifically (unit-level, integration-level, UI-level, design patterns, code structures and organization, etc.).

  • Looking for software for making an animated cartoon to present a new application/scenario idea

    - by Skarab
    I have an idea for an application (plus a usage scenario) and I would like to create an animated cartoon that shows a use case for this application and its novelty. My company is rather big, so I am looking for an interesting way to let people know about my idea and get feedback/a green light to develop it further. Therefore I am looking for an application (free or commercial) that I could use to create such an animated cartoon. I have posted this question before on Stack Overflow, but I think this might be a better community for it.

  • Do you write common pre-conditions for a large number of unit test cases?

    - by Vinoth Kumar
    I have heard/read that writing common pre-conditions for a large number of test cases is a bad thing, since this dependency may cause a large number of test cases to fail if something changes. What are your thoughts on it? If this is so, then what exactly is the purpose of the setUp() method in JUnit that runs before each test case? If the same code inside setUp() runs before each test case, why can't it run only once before running all the test cases together?
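
    For what it's worth, JUnit 4 offers both lifecycles, which is one answer to the last question: @Before (JUnit 4's replacement for the JUnit 3 setUp() override) runs before every test so each one starts from a fresh, isolated fixture, while @BeforeClass runs only once per class for expensive, read-only setup. A minimal sketch (class and fixture names invented for illustration):

        import static org.junit.Assert.assertEquals;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Before;
        import org.junit.BeforeClass;
        import org.junit.Test;

        public class FixtureLifecycleTest {

            private static List<String> sharedCatalog; // expensive, read-only: built once
            private List<String> cart;                 // mutable, per-test: rebuilt every time

            @BeforeClass
            public static void setUpOnce() {
                // Runs a single time before any test in the class.
                sharedCatalog = new ArrayList<>();
                sharedCatalog.add("widget");
            }

            @Before
            public void setUp() {
                // Runs before *each* test, so every test starts from a clean cart
                // and cannot be broken by state another test left behind.
                cart = new ArrayList<>();
            }

            @Test
            public void cartStartsEmpty() {
                assertEquals(0, cart.size());
            }

            @Test
            public void addingAnItemGrowsTheCart() {
                cart.add(sharedCatalog.get(0));
                assertEquals(1, cart.size());
            }
        }

    The per-test run of setUp() is the point: it trades a little execution time for the guarantee that no test depends on what another test did.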

  • Changing email application in Preferred Applications to GMail?

    - by grm
    I'm trying to change the Preferred Application for email. I have installed the desktop-webmail package, but there is no new option under System - Preferences - Preferred Applications as you would expect; in fact, there is only one option there, Evolution. According to this post it should be possible to set a custom application, but no such option is available. Is it possible to set up GMail as the preferred email app so that File - Send by email works in GNOME apps? This seems to be a dup of another post here; thing is, this works fine in 10.10, but in 11.04 this method no longer works. My post is about 11.04 and the question is still valid.

  • Wine pollutes "Open With" application list

    - by Yi Jiang
    The dialog box in question here is the one you get with the context menu option "Open with other applications". Wine seems to have inserted more than a dozen entries for each application I install, which makes it a pain to find the correct application. What can I do to remove the duplicates?

    Update: Neither of the two solutions really works. The bug is interesting, but the symptoms do not match my problem (I'm not having a problem with uninstalling applications, but rather with the entries that are inserted after installing them), and with the other one, all references to the Wine application are removed, which actually makes the problem worse (although it may be an acceptable solution if nothing else can be found). So this is still an open question; any takers?

  • bug: deviation from requirements vs deviation from expectations

    - by user970696
    I am not clear on this one. No matter the terminology, in the end a software fault/bug causes (according to a lot of sources):

    - Deviation from requirements
    - Deviation from expectations

    But if the expectations are not in the requirements, then a stakeholder could see a bug everywhere, since he expected it to be like this or that. So how can I really know? I did read that a specification can miss things, and then of course something is expected but not specified (by mistake).

  • Onsite Interview: QA Engineer with more Emphasis on Java Skills

    - by coolrockers2007
    Hello, I'm having an onsite interview for a QA engineer position with a startup. During the phone interview the person said he would want to test my Java, JUnit and SQL skills on a whiteboard, with more importance on object-oriented skills. So what questions can I expect? One more important issue: how do I overcome the fear of whiteboard interviews? I'm very bad at whiteboard sessions; I get fully tense. Please suggest tips to overcome my jinx.

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization.

    At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model
        {
            public function __construct( $data = null )
            {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?

  • Should tests be in the same Ruby file or in separated Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried about performance. So, is it better to add all test methods to the same Ruby file, or should I put each one in a separate file? Below is a sample with all the tests in the same file:

        # encoding: utf-8
        require "selenium-webdriver"
        require "test/unit"

        class Tests < Test::Unit::TestCase
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
            @verification_errors = []
            @wait = Selenium::WebDriver::Wait.new :timeout => 10
          end

          def teardown
            @driver.quit
            assert_equal [], @verification_errors
          end

          def element_present?(how, what)
            @driver.find_element(how, what)
            true
          rescue Selenium::WebDriver::Error::NoSuchElementError
            false
          end

          def verify(&blk)
            yield
          rescue Test::Unit::AssertionFailedError => ex
            @verification_errors << ex
          end

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_2
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_3
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_4
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_5
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

  • Testcase runner for parametrized testcases

    - by Razer
    Let me explain my situation. I'm planning a kind of test case runner for running test cases against external devices, which are microcontroller based. Let's consider the devices:

    - Device 1
    - Device 2

    There are a lot of test cases which can be run with any of the devices above. For example:

    - Testcase 1
    - Testcase 2

    The main reason all the test cases can be run with any device is that the test cases validate some standard, and this software should be extensible for future devices. The test cases themselves must be runnable with changing parameters. For example, Testcase 1 does some timing verification and needs the data rate as an input parameter: 4800, 9600, 19200.

    Now, hoping you understand the situation, let me explain my design questions. For implementing the test cases I thought about an attribute-based approach, like NUnit does it. The more complicated problem is how to define the parametrized test cases. Like this:

        Device 1:
            Testcase 1: datarate: 4800, 9600, 19200
            Testcase 2: supply: 1, 2, 3
        Device 2:
            Testcase 1: datarate: 9600, 19200, 38400
            Testcase 2: supply: 3, 4, 5

    How would you design such a framework? I've done a similar design in Python, where I had an XML file for every device containing the test case definitions, like:

        <Testcase name="Testcase 1" datarate="4800"/>
        <Testcase name="Testcase 1" datarate="9600"/>
        <Testcase name="Testcase 1" datarate="19200"/>
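
    One concrete shape for the attribute-based approach is JUnit 4's Parameterized runner (NUnit's TestCase/TestCaseSource attributes are the analogue). A minimal sketch in Java for illustration, with the data rate rows hard-coded where a real runner would load them from the per-device XML definitions:

        import static org.junit.Assert.assertTrue;

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class TimingVerificationTest {

            @Parameters(name = "datarate={0}")
            public static Collection<Object[]> datarates() {
                // In the runner described above, these rows would be read from
                // the per-device XML definitions instead of being hard-coded.
                return Arrays.asList(new Object[][] { { 4800 }, { 9600 }, { 19200 } });
            }

            private final int datarate;

            public TimingVerificationTest(int datarate) {
                this.datarate = datarate;
            }

            @Test
            public void timingIsWithinTolerance() {
                // Placeholder assertion standing in for real device communication.
                assertTrue(datarate > 0);
            }
        }

    The same test class then runs once per parameter row, which keeps the "one test, many data rates" structure out of the test bodies themselves.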

  • How to handle bugs that I think I fixed, but I'm not entirely sure

    - by vsz
    There are some types of bugs which are very hard to reproduce, happen very rarely and seemingly at random. It can happen that I find a possible cause, fix it, test the program, and can't reproduce the bug. However, as it was impossible to reliably reproduce the bug and it happened so rarely, how can I indicate this in a bug tracker? What is the common way of doing it? If I set the status to fixed and the solution to fixed, it would imply something completely fixed, wouldn't it? Is it common practice to set the status to fixed and the solution to open, to indicate to the testers that "it's probably fixed, but needs more attention to make sure"?

    Edit: most (if not all) bug trackers have two properties for the status of a bug, though the names may differ. By status I mean new, assigned, fixed, closed, etc., and by solution I mean open (new), fixed, unsolvable, not reproducible, duplicate, not a bug, etc.

  • Proper way to measure the scalability of a web application

    - by Jorge
    Let's say that I have a web application where I'm going to have 300 users, and each one has to see data in real time. Imagine that each client makes an AJAX call to the server every 300 ms to see in real time what happens with the changes in the data. I know that I can run a simulation to see if the hardware of my server supports this example. But what happens if the number of users starts to grow? Is there a way that I can measure the hardware needed to handle this growing behavior - a piece of software, a formula, an algorithm? Or maybe recommend whether I need to implement a distributed application with multiple servers and load balancing.
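
    As a back-of-envelope starting point before reaching for any tool, the polling numbers above translate directly into a request rate that grows linearly with users, which you can compare against a measured per-server throughput. A minimal sketch of that arithmetic (in Java for illustration; the per-server capacity figure is a placeholder you would obtain from an actual load test):

        public class CapacityEstimate {
            public static void main(String[] args) {
                int users = 300;
                double pollIntervalSeconds = 0.3; // one AJAX call per user every 300 ms

                // Offered load grows linearly: users / 0.3 requests per second.
                double requestsPerSecond = users / pollIntervalSeconds;

                // Hypothetical figure: what one server sustained in a load test
                // at acceptable latency.
                double perServerCapacity = 1500.0;

                int serversNeeded = (int) Math.ceil(requestsPerSecond / perServerCapacity);
                System.out.printf("%d users -> %.0f req/s -> %d server(s)%n",
                        users, requestsPerSecond, serversNeeded);
            }
        }

    With these assumptions, 300 users already generate about 1000 requests per second, which is one reason long polling, Comet or WebSockets are often preferred over short-interval polling at this scale.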

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, to make sure the code written there has good quality and is very easy to read? When writing these kinds of tests I very often break the Law of Demeter, for faster writing and to avoid using so many variables. Technically, unit tests are not reused directly - they are strictly bound to the code - so I do not see any reason for spending much time on them; they only need to be functional.
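
    To make the trade-off concrete, here is a self-contained sketch (toy classes invented purely for illustration) of the chained, Law-of-Demeter-breaking style next to a more explicit version; the chained form is quicker to type, while the explicit form makes a failure's location and the test's intent easier to read:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class DemeterInTestsExample {

            static class Profile { String name() { return "Alice"; } }
            static class Customer { Profile profile() { return new Profile(); } }
            static class Order { Customer customer() { return new Customer(); } }

            @Test
            public void chainedStyle() {
                // Fast to write, but a failure anywhere in the chain points
                // at one long line.
                assertEquals("Alice", new Order().customer().profile().name());
            }

            @Test
            public void explicitStyle() {
                // A few extra locals name each step, so the failing link and
                // the intent are obvious when the test breaks months later.
                Order order = new Order();
                Customer customer = order.customer();
                Profile profile = customer.profile();
                assertEquals("Alice", profile.name());
            }
        }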

  • Online & Offline in Web Chat Application

    - by Mohammed Safeer
    I'm stuck in the middle of developing a web chat application using PHP. I used Comet for the chat application, together with the technique of updating the database when someone logs out, thus displaying "offline" to the user on the other side. My problem is: if someone closes the browser without logging out, how does the user on the other side know the person has gone offline? How can I set the online and offline icons in a PHP web chat application when someone closes the chat window without logging out? Would WebSockets in PHP solve this problem? All suggestions are welcome.

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given for writing unit tests which mock out all dependencies and are thus completely isolated is to ensure that when a bug exists, only the unit tests for that unit will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use in ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.

  • What are the processes of true Quality assurance?

    - by user970696
    Having read that Quality Assurance (QA) is focused on processes (while Quality Control (QC) is focused on the product), I notice the books often present QA as the verification process - doing peer reviews, inspections, etc. I still tend to think these are also QC, as they check intermediate products. Elsewhere I have read that a QA activity is, e.g., choosing the right bug tracker. That sounds better to me in terms of process improvement. The question that the close-voting person obviously missed is pretty clear: what are the activities that true QA should perform? I would appreciate a reference, as I am working on my thesis dealing with all these discrepancies and inconsistencies in the software quality world.

  • Architecture for subscription based application

    - by John
    This is about the architecture of my application, I think. I have a Rails application where companies can administrate all things related to their clients. Companies can buy a subscription, and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. Thing is, what should I do with my code and database?

    1. Separate app code base and database per company
    2. One app code base but a separate database per company
    3. One app code base and one database

    The decision I am to make involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well) and scalability (again, if successful, it should perform well but also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the one code base. For the database I really don't know at this moment. So what do you think is the best option?
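
    If option 3 wins out, the usual safeguard for the "company X must not see company Y" concern is to key every tenant-owned row by company and filter every query on it; in Rails this is commonly done with a default scope or a gem such as acts_as_tenant. A minimal sketch of the idea (in Java/JDBC for illustration; the table and column names are hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        public class TenantScopedQueries {

            // Option 3 (one code base, one database): every tenant-owned table
            // carries a company_id column and every query filters on it. The
            // companyId value must come from the authenticated session, never
            // from request parameters.
            public static List<String> clientNamesFor(Connection db, long companyId)
                    throws SQLException {
                List<String> names = new ArrayList<>();
                try (PreparedStatement stmt = db.prepareStatement(
                        "SELECT name FROM clients WHERE company_id = ?")) {
                    stmt.setLong(1, companyId);
                    try (ResultSet rows = stmt.executeQuery()) {
                        while (rows.next()) {
                            names.add(rows.getString("name"));
                        }
                    }
                }
                return names;
            }
        }

    The design choice is then whether you trust this one filter enforced in code (option 3) or prefer the harder isolation boundary of a database per tenant (option 2) at the cost of more operational overhead.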

  • Registration free hosting for ASP.NET web service

    - by Andrew
    I've built a simple ASP.NET web service, tested it locally, and would like to test it when externally hosted. Are there free hosting services available where I can just upload the assembly and service description file and test it straight away, without registering an account, etc.? My service does not do anything malicious and I am OK with running it in a restricted (security sandbox, bandwidth, calls per second, etc.) environment. I have heard about appharbor.com, but it looks like overkill for testing a simple web service.

  • Elo system behaves oddly in program I've created

    - by adc
    Alright, so I'm looking to build a small program (C# and XAML) that, essentially, does this:

    1. Generate an array of players. Each player has a current rating and a true rating. I set current rating to 1200 as a starting point right now; I've also tried setting it to the true rating and to the average of the two. True rating is what their skill level actually is. The true rating is calculated based on percentages from the current League of Legends rating system; generating an array of 970 thousand produces results very similar to the data from here: (removed due to URL limit - but trust me, the results are very similar). This array is of a length specified by the user.
    2. If need be, sort the array from smallest to largest.
    3. Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by current rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the true rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins. I then update the rating of the players via, again, the Elo system - giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher ones) and an F value of 400.
    4. After going through the entire loop of players (which I have conveniently forced to be a multiple of ten), it sorts the array - again by current rating.

    This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside of this range. I also have it outputting the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under.

    I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc., but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I have it playing too few games - it has now run through over 60 thousand games and the problem never disappears or really fluctuates.

    The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84
    The Player class: http://pastebin.com/4cJTdTRu

    I guess my question is: did I do anything wrong? Is there a problem with the way I implemented the system, or is it just that Riot uses a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive true and current rating differences to me, however.
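
    For reference, the standard Elo relations the post relies on are an expected score E = 1 / (1 + 10^((Rb - Ra) / F)) and the update R' = R + K * (S - E), where S is 1 for a win and 0 for a loss. A minimal one-game sketch of that procedure (in Java for illustration, treating each team as a single rating and using made-up numbers):

        import java.util.Random;

        public class EloSketch {

            static final double K = 36.0;  // K value from the post
            static final double F = 400.0; // F value from the post

            // Expected score for a rating rA playing against rB:
            // E = 1 / (1 + 10^((rB - rA) / F))
            static double expected(double rA, double rB) {
                return 1.0 / (1.0 + Math.pow(10.0, (rB - rA) / F));
            }

            public static void main(String[] args) {
                Random rng = new Random();

                double teamOneCurrent = 1200, teamTwoCurrent = 1200; // visible ratings
                double teamOneTrue = 1400, teamTwoTrue = 1100;       // hidden skill

                // One simulated game in the post's style: the winner is decided
                // from the *true* ratings, but the update is applied to the
                // *current* ratings.
                boolean teamOneWins = rng.nextDouble() < expected(teamOneTrue, teamTwoTrue);

                double e1 = expected(teamOneCurrent, teamTwoCurrent);
                teamOneCurrent += K * ((teamOneWins ? 1 : 0) - e1);
                teamTwoCurrent += K * ((teamOneWins ? 0 : 1) - (1 - e1));

                System.out.printf("after one game: %.1f vs %.1f%n",
                        teamOneCurrent, teamTwoCurrent);
            }
        }

    One consequence of these formulas worth noting: with K = 36, a single game moves a rating by at most 36 points, so how teams are matched and how wins are decided dominates how quickly current ratings can converge toward true ratings.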

  • Verification of requirements question

    - by user970696
    Doing a lot of reading about V&V, I need to clarify the following. A lot of definitions (the less formal ones found in books) define verification like this:

    Verification: the software should conform to its specification.

    But then they speak about requirements verification, design verification, etc. If I say that these items are "software" in terms of applying the definitions, what should I check them against? What specification should the requirements, which are the basic information, conform to? And one more thing: shouldn't requirements also be validated, to make sure they meet the customer's needs? All the texts I have speak only about software validation at the end of the development process.

  • java.util.zip.ZipException: Error opening file when deploying an application to WebLogic Server

    - by lmestre
    In recent weeks we had a hard time trying to solve a deployment issue.

    * WebLogic Server 10.3.6
    * Target: WLS Cluster

        <21-10-2013 05:29:40 PM CLST> <Error> <Console> <BEA-240003> <Console encountered the following error weblogic.management.DeploymentException:
            at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:69)
            at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
            at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
            at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
            at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
            at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
            at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
            at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
            at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
            at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
            at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
            at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
            at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
            at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
            at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
            at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
            at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
            at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
            at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
        Caused by: java.util.zip.ZipException: Error opening file - C:\Oracle\Middleware\user_projects\domains\MyDomain\servers\MyServer\stage\myapp\myapp.war Message - error in opening zip file
            at weblogic.servlet.utils.WarUtils.existsInWar(WarUtils.java:87)
            at weblogic.servlet.utils.WarUtils.isWebServices(WarUtils.java:76)
            at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:61)

    So the first idea you have with that error is that the war file is corrupted or has incorrect privileges. We tried:

    1. Unzipping the war file; the file was perfect.
    2. Checking the size: same as in other environments.
    3. Checking the ownership of the file: same as in other environments.
    4. Checking the permissions of the file: same as other applications.

    Then we accepted that the file was fine, so we tried enabling some deployment debugs, but no clues. We also tried:

    1. Deleting all contents of the <MyDomain>/servers/<MyServer>/tmp and <MyDomain>/servers/<MyServer>/cache folders; the issue persisted.
    2. Renaming the application; the deployment was successful.
    3. Targeting the Admin Server; deployment also worked.
    4. Using 'Copy this application onto every target for me'; it didn't help either.

    Finally, my friend 'Test Case' solved the issue again. I saw this name in the config.xml:

        <jdbc-system-resource>
            <name>myapp</name>
            <target></target>
            <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name>
        </jdbc-system-resource>

    It turned out that the customer had created a DataSource with the same name as the application, 'myapp' in the above example. By deleting that DataSource and creating an identical DataSource with a different name, the issue was solved.

    At this point, do you know why 'java.util.zip.ZipException: Error opening file' was occurring? Because all names in WebLogic Server need to be unique.

    Reference: http://docs.oracle.com/cd/E23943_01/web.1111/e13709/setup.htm
    "Assigning Names to WebLogic Server Resources: Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name."

    Enjoy!

  • Unit test: How best to provide an XML input?

    - by TheSilverBullet
    I need to write a unit test which validates the serialization of two attributes of an XML file (size ~ 30 KB). What is the best way to provide an input for this test? Here are the options I have considered:

    1. Add the file to the project and use a file reader
    2. Pass the contents of the XML as a string
    3. Create the XML through a program and pass it

    Which is my best option and why? If there is another way which you think is better, I would love to hear it.
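
    If option 1 is chosen, a common variant is to keep the document on the test classpath and read it at runtime, so the ~30 KB of XML lives beside the tests rather than inside them. A minimal JUnit sketch (the file name and assertion are hypothetical, and InputStream.readAllBytes requires Java 9 or newer):

        import static org.junit.Assert.assertNotNull;

        import java.io.InputStream;
        import java.nio.charset.StandardCharsets;

        import org.junit.Test;

        public class SerializationFixtureTest {

            // Loads the XML from the test classpath (e.g. src/test/resources in
            // a Maven layout) so the large document stays out of the test source.
            private String loadFixture(String name) throws Exception {
                try (InputStream in = getClass().getResourceAsStream(name)) {
                    return new String(in.readAllBytes(), StandardCharsets.UTF_8);
                }
            }

            @Test
            public void fixtureLoads() throws Exception {
                String xml = loadFixture("/sample-document.xml"); // hypothetical file name
                assertNotNull(xml);
                // ...parse the XML and assert on the two serialized attributes here...
            }
        }

    Option 2 tends to become unreadable at 30 KB, and option 3 risks testing the generator rather than the serializer, which is why the resource-file route is often the default.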

  • Link tracking: Amazon or Google way

    - by Howard
    When doing a shopping site, the best way is to reference some successful stores, like Amazon. In the area of link tracking - for example, to see which section of your front page yields better conversion:

    Amazon way: generate a unique URL for each link on the front page, such as

        http://www.amazon.com/gp/product/B0083Q04IQ/ref=s9_pop_gw_g424_ir04/175-6575053-9292830?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-2&pf_rd_r=0AMJCKBBQA63EP0XHB86&pf_rd_t=101&pf_rd_p=1263340922&pf_rd_i=507846

    Google way: use Google Analytics

        <a href="/products/abc" onClick="javascript: pageTracker._trackPageview('/from-main-menu/products/abc');">

    What are the pros and cons of the above two approaches (besides Google requiring JS support)?

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines.

    It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance, and that if any requirements were missed then the original tender should be referenced.

    Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance.

    Is one of the above arguments more valid than the other?
