Search Results

Search found 50945 results on 2038 pages for 'web testing'.


  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system (http://www.4guysfromrolla.com/articles/120705-1.aspx) is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service (http://www.4guysfromrolla.com/articles/100803-1.aspx) generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, ...

    Read the article

  • Test driven development - convince me!

    - by Casebash
    I know some people are massive proponents of test driven development. I have used unit tests in the past, but only to test operations that can be tested easily or which I believe will quite possibly be correct. Complete or near complete code coverage sounds like it would take a lot of time. What projects do you use test-driven development for? Do you only use it for projects above a certain size? Should I be using it or not? Convince me!

    Read the article

  • Is there a way to emulate pinch-zoom?

    - by aking1012
    I'm looking for a way to emulate pinch zoom in either an Android emulator (Android SDK, less desirable) or, preferably, a native Ubuntu web browser that I can resize to a specified size for initial testing of HTML5 applications. This would be useful for first-round testing during cross-platform application development. Note: I'm trying to do this with no real touch device, only a mouse. So the best answer would be something like "Install this Chromium plug-in and use this hotkey to set pinch points" or something similar. We already have dual mice working (thanks AmithKK); the browser that supports multi-touch is the hard part. Something to note is that I start getting screen artifacts using multiple mice via that guide. They're mild and tolerable, but they are there.

    Read the article

  • NUnit SetUp and TearDown

    - by Lijo
    I have some experience with MS Test but am new to NUnit. Does NUnit's [SetUp] correspond to [ClassInitialize] or to [TestInitialize] in MS Test? And which NUnit attribute corresponds to [TestInitialize]? REFERENCE: http://stackoverflow.com/questions/1873191/testinitialize-gets-fired-for-every-test-in-my-visual-studio-unit-tests http://stackoverflow.com/questions/4602288/nunit-testcontext-currentcontext-test-not-working
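
    A minimal sketch of the mapping, assuming NUnit 2.x attribute names ([TestFixtureSetUp] was renamed [OneTimeSetUp] in NUnit 3): [SetUp] runs before every test, like MS Test's [TestInitialize], while the fixture-level attribute runs once per class, like [ClassInitialize]. The fixture and test names below are invented for illustration.

        using NUnit.Framework;

        [TestFixture]
        public class CalculatorTests
        {
            [TestFixtureSetUp]           // NUnit 2.x; use [OneTimeSetUp] in NUnit 3
            public void FixtureSetUp()   // runs once per fixture, like [ClassInitialize]
            {
            }

            [SetUp]
            public void BeforeEachTest() // runs before every test, like [TestInitialize]
            {
            }

            [Test]
            public void Adds_two_numbers()
            {
                Assert.AreEqual(4, 2 + 2);
            }
        }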

    Read the article

  • Automated tests for differencing algorithm

    - by Matthew Rodatus
    We are designing a differencing algorithm (based on Longest Common Subsequence) that compares a source text and a modified copy to extract the new content (i.e. content that is only in the modified copy). I'm currently compiling a library of test case data. We need to be able to run automated tests that verify the test cases, but we don't want to verify strict accuracy. Given the heuristic nature of our algorithm, we need our test pass/failures to be fuzzy. We want to specify a threshold of overlap between the desired result and the actual result (i.e. the content that is extracted). I have a few sketches in my mind as to how to solve this, but has anyone done this before? Does anyone have guidance or ideas about how to do this effectively?
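
    One possible shape for the fuzzy pass/fail check, sketched in C# with NUnit (the token-overlap metric, the 0.9 threshold and all names here are assumptions for illustration, not details from the question): compute a similarity score between the desired and actual extracted content and fail only when it drops below the threshold.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public static class FuzzyAssert
        {
            // Fails only when the overlap between expected and actual content
            // falls below the given threshold (0.0 - 1.0).
            public static void SimilarTo(string expected, string actual, double threshold)
            {
                double overlap = Overlap(expected, actual);
                Assert.GreaterOrEqual(overlap, threshold,
                    "Extracted content overlaps the expected content by only {0:P0}.", overlap);
            }

            // Crude token-level overlap: the fraction of expected tokens present in actual.
            private static double Overlap(string expected, string actual)
            {
                char[] separators = { ' ', '\t', '\r', '\n' };
                string[] expectedTokens = expected.Split(separators, StringSplitOptions.RemoveEmptyEntries);
                var actualTokens = new HashSet<string>(actual.Split(separators, StringSplitOptions.RemoveEmptyEntries));
                if (expectedTokens.Length == 0) return 1.0;
                return (double)expectedTokens.Count(actualTokens.Contains) / expectedTokens.Length;
            }
        }

        // Usage in a test case:
        //   FuzzyAssert.SimilarTo(testCase.ExpectedNewContent, result.ExtractedContent, 0.9);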

    Read the article

  • What to do as a new team lead on a project with maintainability problems?

    - by Mr_E
    I have just been put in charge of a code project with maintainability problems. What can I do to get the project onto a stable footing? I find myself in a place where we are working with a very large, multi-tiered .NET system that is missing a lot of the important things such as unit tests and IoC/MEF, and is full of things like too many static classes and pure datasets. I'm only 24, but I've been here for almost three years (this app has been in development for 5), and mostly due to time constraints we've just been adding more crap to fit the other crap. After doing a number of projects in my free time, I have begun to understand just how important all those concepts are. Also, due to employee shifting, I now find myself the team lead on this project, and I really want to come up with some smart ways to improve this app: ways whose value can be explained to management. I have ideas of what I would like to do, but they all seem so overwhelming without much upfront gain. Any stories of how people have dealt with this, or would deal with it, would be a very interesting read. Thanks.

    Read the article

  • Is deserializing complex objects instead of creating them a good idea in test setup?

    - by Chris Bye
    I'm writing tests for a component that takes very complex objects as input. These tests are a mix of tests against already existing components and test-first tests for new features. Instead of re-creating my input objects (this would be a large chunk of code) or reading one from our data store, I had the thought of serializing a live instance of one of these objects and just deserializing it in test setup. I can't decide whether this is a reasonable idea that will save effort in the long run, or the worst idea I've ever had, one that will make those who maintain this code hunt me down as soon as they read it. Is deserialization of inputs a valid means of test setup in some cases? To give a sense of the scale of what I'm dealing with, the serialization output for one of these input objects is 93KB, obtained in C# by: new BinaryFormatter().Serialize((Stream)fileStream, myObject);
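
    For what it's worth, the mirror image of that call in a test fixture would look roughly like the sketch below (NUnit is assumed; the fixture file name and the ComplexInput type are placeholders, not names from the question). One caveat worth weighing: BinaryFormatter payloads break whenever the serialized types change, so these captured fixtures tend to rot along with the object model.

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using NUnit.Framework;

        [Serializable]
        public class ComplexInput { /* stands in for the real, very complex input type */ }

        [TestFixture]
        public class ComponentTests
        {
            private ComplexInput _input;

            [SetUp]
            public void LoadSerializedInput()
            {
                // Deserialize a previously captured live instance instead of
                // hand-building the whole object graph in test code.
                using (var fileStream = File.OpenRead(@"TestData\complex-input.bin"))
                {
                    _input = (ComplexInput)new BinaryFormatter().Deserialize(fileStream);
                }
            }

            [Test]
            public void Component_handles_the_captured_input()
            {
                // ... exercise the component under test with _input ...
            }
        }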

    Read the article

  • Service to test app on all the iPhones?

    - by David
    I have some developers creating an iPhone app. Often the app will not work on one type of iPhone even though it worked on another one running the same version of iOS. Therefore, I am looking for a service where I can test the app natively on all the iPhone versions running various versions of iOS. I would like to be able to interact with the iPhones myself, so that I know a specific bug has actually been fixed before pushing to the App Store and waiting 9 days for the review, only to hear the sad news from customers. Googling got me nowhere. Do such services exist?

    Read the article

  • Quality Assurance tools discrepancies

    - by Roudak
    It is a bit ironic: yesterday I answered a question related to this topic that was marked as good, and today I'm the one who asks. These are my thoughts and a question. Also, let's agree on terms: QA is a set of activities that defines and implements processes during SW development; the common tool is the process audit. However, my colleague at work holds the opinion that reviews and inspections are also quality assurance tools, although most sources classify them as quality control. I would say both sides are partially right: during inspections we evaluate a physical product (clearly QC), but we see it as a white box, so we can check its compliance with the established processes (QA). Do you think this is the reason for the dichotomy among the authors? I know it is more of an academic question, but it deserves an answer :)

    Read the article

  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
    I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention. I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the "spec" and "features" directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So, the question in two parts is: Where should I store the files that the test cases will use as source input? And where should tests that need to write output files send them?

    Read the article

  • Sounds Good...

    - by andyleonard
    Introduction This post is the twenty-ninth part of a ramble-rant about the software business. The current posts in this series are: Goodwill, Negative and Positive Visions, Quests, Missions Right, Wrong, and Style Follow Me Balance, Part 1 Balance, Part 2 Definition of a Great Team The 15-Minute Meeting Metaproblems: Drama The Right Question Software is Organic, Part 1 Metaproblem: Terror I Don't Work On My Car A Turning Point Human Doings Everything Changes Getting It Right The First Time One-Time...(read more)

    Read the article

  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking:

        public class UserService
        {
            public UserService(IUserQuery userQuery, IUserCommunicator userCommunicator, IUserValidator userValidator)
            {
                UserQuery = userQuery;
                UserValidator = userValidator;
                UserCommunicator = userCommunicator;
            }
            ...
            public UserResponseModel UpdateAUserName(int userId, string userName)
            {
                var result = UserValidator.ValidateUserName(userName);
                if (result.Success)
                {
                    var user = UserQuery.GetUserById(userId);
                    if (user == null)
                    {
                        throw new ArgumentException();
                    }
                    user.UserName = userName;
                    UserCommunicator.UpdateUser(user);
                }
                ...
            }
            ...
        }

        public class WhenGettingAUser
        {
            public void AndTheUserDoesNotExistThrowAnException()
            {
                var userQuery = Substitute.For<IUserQuery>();
                userQuery.GetUserById(Arg.Any<int>()).Returns((User)null);
                var userService = new UserService(userQuery,
                    Substitute.For<IUserCommunicator>(), Substitute.For<IUserValidator>());
                AssertionExtensions.ShouldThrow<ArgumentException>(() => userService.GetUserById(-121));
            }
        }

    Now, in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the one above that normally touch the persistence layer, without using interfaces/mocks? I realize that every step above would be tested on its own and would be kept as atomic as possible. The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly.
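
    One common answer, sketched below in C# for consistency with the example above rather than in F# (the function-typed parameters and all names here are my own illustration, not from the post): in a functional style the dependencies become plain functions, so a "mock" is just a lambda you pass in from the test, with no interfaces or mocking framework required.

        using System;

        public class User
        {
            public string UserName { get; set; }
        }

        public class UserNameUpdater
        {
            private readonly Func<int, User> _getUserById;   // plays the role of IUserQuery
            private readonly Action<User> _updateUser;       // plays the role of IUserCommunicator

            public UserNameUpdater(Func<int, User> getUserById, Action<User> updateUser)
            {
                _getUserById = getUserById;
                _updateUser = updateUser;
            }

            public void UpdateAUserName(int userId, string userName)
            {
                var user = _getUserById(userId);
                if (user == null) throw new ArgumentException("No such user");
                user.UserName = userName;
                _updateUser(user);
            }
        }

        // In a test, pass lambdas instead of mocks:
        //   var updater = new UserNameUpdater(id => null, u => { });
        //   Assert.Throws<ArgumentException>(() => updater.UpdateAUserName(-121, "NewName"));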

    Read the article

  • Part-time work as a beginner programmer [on hold]

    - by Valentas
    I wrote to one company near my university (I'm starting there in September) and they responded that they will probably hire me based on the work I have already done (some projects and solving Euler problems). It's for 15 hours/week or so, in order not to fall behind on uni work. They require Python, SQL, XML and a good idea of how the Web works. The job role involves acquiring data from the Web and supplying it as search results for flight seekers (people). I am eager to learn, but still: what can I do to become prepared for this? I ask because I tend to gravitate from one technology to the other, trying things out but never mastering them properly. What Web technologies are involved in such a job role? I have two months and want to learn as much as possible, because there is a lot of information out there but I have no idea where to start.

    Read the article

  • Where should Acceptance tests be written against?

    - by Jonn
    I'm starting to get into writing automated acceptance tests, and I'm quite confused about which layer of the app to write these tests against. Most examples I've seen are acceptance tests written against the Domain, but how about tests like:

        Given Incorrect Data
        When the user submits the form
        Then Play an Error Beep

    These seem to be a fit for the UI and not for the Domain, or probably even the Service layer.
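
    For the shape such a test takes at the UI layer, here is a minimal SpecFlow-style sketch ([Binding], [Given], [When] and [Then] are real SpecFlow attributes; the step bodies and the faked validation are my own stand-ins for whatever drives the real UI):

        using NUnit.Framework;
        using TechTalk.SpecFlow;

        [Binding]
        public class SubmitFormSteps
        {
            private string _enteredData;
            private bool _errorBeepPlayed;

            [Given(@"Incorrect Data")]
            public void GivenIncorrectData()
            {
                _enteredData = "not-a-valid-value";
            }

            [When(@"the user submits the form")]
            public void WhenTheUserSubmitsTheForm()
            {
                // A real acceptance test would drive the actual form here;
                // validation is faked so the sketch stays self-contained.
                _errorBeepPlayed = string.IsNullOrWhiteSpace(_enteredData)
                                   || _enteredData.StartsWith("not-");
            }

            [Then(@"Play an Error Beep")]
            public void ThenPlayAnErrorBeep()
            {
                Assert.IsTrue(_errorBeepPlayed);
            }
        }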

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface. The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me.

    However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit it to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
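
    A minimal sketch of what such a reflective guard test can look like in C# with NUnit (the IAuthenticatedUser interface name and the expected implementation name are placeholders for the post's secured types; this version only scans the security library's own assembly, whereas a fuller check would walk every shipped assembly):

        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class SecurityLibraryConventionTests
        {
            [Test]
            public void Only_the_blessed_implementation_of_IAuthenticatedUser_exists()
            {
                var interfaceType = typeof(IAuthenticatedUser);   // placeholder interface

                var implementations = interfaceType.Assembly
                    .GetTypes()
                    .Where(t => t.IsClass && interfaceType.IsAssignableFrom(t))
                    .ToList();

                // Fail the build if anyone adds, moves or exposes an implementation.
                Assert.AreEqual(1, implementations.Count,
                    "Unexpected implementations: {0}",
                    string.Join(", ", implementations.Select(t => t.FullName)));

                var implementation = implementations.Single();
                Assert.AreEqual("SecurityLib.Internal.AuthenticatedUser", implementation.FullName);
                Assert.IsFalse(implementation.IsPublic, "The implementation must stay internal.");
            }
        }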

    Read the article

  • Should I make a separate unit test for a method if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of the parent class, but not their own, be unit tested separately? By separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods, where calling a chained method in most cases returns a new instance of a new type. The returned instances only modify the root parent's state, not their own. An overly simplified example, to get the point across:

        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);

    Say I write a unit test under the Box class unit tests that calls box.AddRabbits().SetCount(14). I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the call above than to write a unit test for BoxedRabbits separately?
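
    For concreteness, the two options from the question look roughly like this as NUnit tests (the test names are invented; Box and BoxedRabbits are the classes from the example above):

        using NUnit.Framework;

        [TestFixture]
        public class BoxTests
        {
            // Option 1: test through the parent, covering BoxedRabbits indirectly.
            [Test]
            public void AddRabbits_then_SetCount_updates_the_box()
            {
                var box = new Box();
                box.AddRabbits().SetCount(14);
                Assert.AreEqual(14, box.Items);
            }
        }

        [TestFixture]
        public class BoxedRabbitsTests
        {
            // Option 2: a separate test aimed directly at BoxedRabbits.
            [Test]
            public void SetCount_adds_to_the_parent_box()
            {
                var box = new Box();
                new BoxedRabbits(box).SetCount(14);
                Assert.AreEqual(14, box.Items);
            }
        }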

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat", and it returns the enum value Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this particular example work, a more complex issue may take weeks to fix; identifying the bug with a unit test, however, could be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not hit the bug based on how it is currently called. But unit tests can be created that execute specific scenarios with valid inputs and make the bug visible. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should exercise the fixed code once someone gets around to it. On one hand, checking the tests in shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures due to a code check-in would be difficult.
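
    One way to check such a test in without breaking the build, using real NUnit attributes ([Category] and [Ignore] both exist; the Animal enum and the AnimalParser name are placeholders based on the example in the question):

        using NUnit.Framework;

        public enum Animal { Null, Cat }

        public static class AnimalParser
        {
            // Placeholder for the method under discussion: currently case sensitive.
            public static Animal Parse(string name)
            {
                return name == "Cat" ? Animal.Cat : Animal.Null;
            }
        }

        [TestFixture]
        public class AnimalParserTests
        {
            // Documents the desired behaviour; excluded from the normal run until the fix lands.
            [Test]
            [Category("KnownBug")]
            [Ignore("Parsing should be case-insensitive; tracked as a known defect.")]
            public void Parse_is_case_insensitive()
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }

    The build server can then exclude the KnownBug category from the pass/fail gate, or run it separately as a living list of open defects.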

    Read the article

  • NZ Herald/ Yahoo Daily Deals Delivered via Web Slices in IE8

    Recently I have been having a lot of fun building a competition, Slice of Kevin, intended in part to teach people about Web Slices in New Zealand. The competition has been very successful and has created a loyal group of users. Since launch 2 months ago we have had 26,843 users of the slice. Each of those users has spent on average 4.1 hrs engaged inside the web slice over that time. We also reached out through social media and found that Facebook worked as a perfect companion to the Web Slice...

    Read the article

  • How to Convince management that a specific product training is important to QA?

    - by Rahul
    I am leading a QA team of 10 people. We have received a request for training on an ETL data warehousing tool for QA, Support and Development. However, management does not feel it is important for QA to be involved in such training, as it is the Support and Development teams who will be involved in developing the product or fixing issues in it. How do I convince management that this training is very important from the QA perspective, since this is the team that will find the bugs, which in turn will reduce the maintenance cost?

    Read the article

  • Need Help Hiring a Perfectionist Programmer [closed]

    - by Bryan Hadaway
    I understand my question may be in a gray area, but I'm not able to use the Meta to ask whether this question is appropriate or not, so I'll simply have to risk it. My project is complete in the sense that it's a fully functional, ready-to-go 1.0 version. However, that's not good enough for my standards. My expertise is in HTML/CSS, not jQuery and PHP. I'm looking for someone to refine every character of my code for quality, speed, security and compatibility. I want everything to be as bug-free as possible for launch. So I need an expert programmer who is a perfectionist about their coding and who cares about the quality of their work (not just making it work) to review and refine my code. I'm sure I can't outright post the project's details and hope for interested parties to contact me, as that wouldn't be beneficial to the community, so instead I'm looking for advice from programmers about where some of the best places to hire quality programmers are, and the best strategies for hiring the right programmer. In other words, screening applicants off of Craigslist isn't going to cut it for this project. Thanks

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else block, for example), a test isn't worth the trouble since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet, I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for complete refactoring? Is there any disadvantage to this that I should consider?
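
    For the "GUI code with some logic" case, the kind of extraction being debated can be as small as the sketch below (written in C# for consistency with the other examples on this page; all names are invented). The point is simply pulling the if/else logic into something a test can call without the GUI, and pinning the current behaviour with a characterization test before touching it.

        // Before: the decision logic lives inside a button handler and can only be
        // exercised through the GUI. After: it lives in a small static method.
        public static class ShippingLabelRules
        {
            public static string LabelColour(bool isFragile, bool isInternational)
            {
                if (isFragile && isInternational) return "Red";
                if (isFragile) return "Orange";
                return isInternational ? "Blue" : "White";
            }
        }

        // Characterization test pinning today's behaviour before any refactoring:
        //   Assert.AreEqual("Orange", ShippingLabelRules.LabelColour(true, false));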

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites:

        - a "small" suite, taking only a couple of hours to run
        - a "medium" suite that takes multiple hours, usually run every night (nightly)
        - a "large" suite that takes a week+ to run

    We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of a nightly frequency, is done for the large suite.

    Unfortunately, the medium suite fails pretty frequently. That means the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain it's stable, and if a test fails I cannot know for certain whether it's my fault or not.

    My question is: is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape? For example, "commit into a special pre-commit branch which will then periodically update the trunk every time the nightly passes". And does it matter whether it's a centralized source control system like SVN or a distributed one like git? By the way, I am a junior developer with limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • Is there any way to simulate a slow connection between my server and an iPad (without installing anything on the server)?

    - by Clay Nichols
    Some of our web app users have difficulty on slower connections. I'm trying to get a better idea of what that "speed barrier" is, so I'd like to be able to test a variety of connection speeds. I've found ways to do this on Windows but not on the iPad, so I'm looking more for some sort of proxy service that will work with any device (not running on that device). I did find an article about using Charles Proxy and providing a connection to another device, but I was hoping for something simpler (it need not be free).

    Constraints:

        - We are on a shared server, so we can't install anything, and we are limited in our control over that server.
        - I'd like to test an iPad, an Android tablet and a Windows PC.

    Read the article

  • Empirical evidence regarding testability

    - by Xodarap
    A Google Scholar search turns up numerous papers on testability, including models for computing testability, recommendations for making code more testable, etc. They all come with the assertion that more testable code is more stable, but I can't find any studies which actually demonstrate this. Can someone link me to a study evaluating the effect of testable code on quality? The closest I can find is "Improving the Testability of Object Oriented Systems", which discusses the relationship between design flaws and testability.

    Read the article

  • Web Based School/College ERP

    - by Ashok
    We are planning to build a web-based school/college ERP. The main problem we face is hardware support. Since it is web based, it is not possible to implement biometrics directly, but most of our clients do ask for biometrics; I assume we would need a desktop application for that. Can you please give some suggestions for this? Another issue is that we don't have a stable internet connection here; we frequently face disconnections. This is another problem for a web-based ERP. HTML5 has a feature called offline storage. Is it possible to use this feature for such a dynamic ERP? For example, let's say we need to enter marks for the students and the net gets disconnected. Is it possible to use the HTML5 offline feature to save the marks offline and upload them when the connection comes back?

    Read the article
