Search Results

Search found 12476 results on 500 pages for 'unit testing'.


  • How to improve testing your own code

    - by Peter
    Hi guys, today I checked in a change which turned out not to work at all, due to something rather stupid yet very crucial. I feel really bad about it and I hope I finally learn something from it. The stupid thing is, I've done these things before, and I always tell myself that next time I won't be so stupid... Then it happens again and I feel even worse about it. I know you should keep your chin up and learn from your mistakes, but here's the thing: I try to improve myself, I just don't see how I can prevent these things from happening. So now I'm asking you: do you have certain ground rules for testing your own code?

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should prove that you are building the product right, while validation should prove that you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say whether something is implemented correctly if you do not test it? It is said that verification checks, e.g., the code for its correctness. Verification: ensure that the product meets the specified requirements. Again, if a function is specified to work in a certain way, only by testing can I say that it does. Could anyone explain this to me please? EDIT: As Wikipedia says: Verification: preparing the test cases (based on analysis of the requirements). Validation: running the test cases.

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit by a "cascading bug", as described in this blog post: a “cascading bug” is a bug whose effect isn’t confined to a particular software element, but rather “cascades” into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even within a test-driven development environment. My questions are: What are some strategies you use, beyond basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process, or is it simply a by-product of complex software systems?

    Read the article

  • Does Scrum have specific ways of testing software?

    - by joker13
    When reading the Scrum Guide, the official text for Scrum, I found that there is no specific guidance on software testing (the only hint is on page 15). I'm a little vague on whether Scrum is considered a software development methodology or not. If it is not, then how come some of its practices oppose Extreme Programming? (I know that in the Scrum Guide the authors note that Scrum is a framework, not a methodology, but I'm still not entirely clear on that.) What's more, I'm not sure whether there are other important texts about Scrum that I'm missing so far. I need them to be official, or to have a great deal of public acceptance.

    Read the article

  • Testing controller logic that uses ISession directly

    - by Rippo
    I have just read this blog post from Jimmy Bogard and was drawn to this comment: "Where this falls down is when a component doesn’t support a given layering/architecture. But even with NHibernate, I just use the ISession directly in the controller action these days. Why make things complicated?" I then commented on the post and asked this question: what options would you have for testing the controller logic if you do not mock out the NHibernate ISession? I am curious what options we have if we use the ISession directly in the controller.
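
    One common option (not from Bogard's post itself, just a sketch under assumptions): keep the real ISession but point it at a throwaway in-memory SQLite database, so the controller logic is exercised against real NHibernate behaviour without mocking. This assumes NHibernate 3.x loquacious configuration; the Product entity, its mapping (omitted here), and ProductController are hypothetical:

        using System.Web.Mvc;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Dialect;
        using NHibernate.Tool.hbm2ddl;
        using NUnit.Framework;

        public class Product
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

        public class ProductController : Controller
        {
            private readonly ISession _session;
            public ProductController(ISession session) { _session = session; }

            public ActionResult Create(string name)
            {
                using (var tx = _session.BeginTransaction())
                {
                    _session.Save(new Product { Name = name });
                    tx.Commit();
                }
                return new EmptyResult();
            }
        }

        [TestFixture]
        public class ProductControllerTests
        {
            private ISession _session;

            [SetUp]
            public void OpenInMemorySession()
            {
                var cfg = new Configuration();
                cfg.DataBaseIntegration(db =>
                {
                    db.Dialect<SQLiteDialect>();
                    db.ConnectionString = "Data Source=:memory:;Version=3;New=True;";
                });
                // entity mappings would be registered here (hbm.xml or mapping-by-code)
                _session = cfg.BuildSessionFactory().OpenSession();
                // build the schema on the already-open in-memory connection
                new SchemaExport(cfg).Execute(false, true, false, _session.Connection, null);
            }

            [Test]
            public void Create_PersistsAProduct()
            {
                new ProductController(_session).Create("Widget");
                Assert.AreEqual(1, _session.QueryOver<Product>().RowCount());
            }
        }

    The trade-off: these tests are slower than mock-based ones and closer to integration tests, but they verify the actual queries and mappings instead of a mocked ISession.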

    Read the article

  • Cross-Platform Automated Mobile Application UI Testing

    - by thetaspark
    My dissertation is about developing a tool for testing mobile applications through the GUI. The primary platform is Android, but it should also support BlackBerry, iOS, etc. I found some frameworks using Google, e.g. MonkeyTalk. I am not so sure, but what I want to develop might be a mini MonkeyTalk, with minimal functionality, focused on the GUI of the application(s) to be tested. My questions: Which framework(s) should I use? I am good with Java. Can I use the xUnit family for this, and how? What should I be reading/studying? Any suggestions or links to tutorials, documentation, how-tos, etc., would be very helpful. Thanks in advance.

    Read the article

  • If you should only have one assertion per test, how do you test multiple inputs?

    - by speg
    I'm trying to build up some test cases, and I have read that you should try to limit the number of assertions per test case. So my question is: what is the best way to test a function with multiple inputs? For example, I have a function that parses a string from the user and returns the number of minutes. The string can be in the form "5w6h2d1m", where w, h, d, m correspond to the number of weeks, hours, days, and minutes. If I wanted to follow the 'one assertion per test' rule, I'd have to write multiple tests for each variation of input. That seems silly, so instead I just have something like this in the one test case:

        self.assertEqual(parse_date('5m'), 5)
        self.assertEqual(parse_date('5h'), 300)
        self.assertEqual(parse_date('5d'), 7200)
        self.assertEqual(parse_date('1d4h20m'), 1700)

    Is there a better way?
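
    In the xUnit-style frameworks this is usually solved with parametrized tests: one logical assertion, many inputs, and each case passes or fails independently in the runner. A sketch in C# with NUnit's TestCaseAttribute, assuming a hypothetical port of parse_date called DateParser.ParseDate (in Python, unittest's subTest and pytest.mark.parametrize give the same effect):

        using NUnit.Framework;

        [TestFixture]
        public class ParseDateTests
        {
            // Each TestCase shows up as its own test in the runner,
            // so one bad input doesn't hide the others.
            [TestCase("5m", 5)]
            [TestCase("5h", 300)]
            [TestCase("5d", 7200)]
            [TestCase("1d4h20m", 1700)]
            public void ParseDate_ReturnsTotalMinutes(string input, int expectedMinutes)
            {
                Assert.AreEqual(expectedMinutes, DateParser.ParseDate(input)); // DateParser is hypothetical
            }
        }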

    Read the article

  • Improving bad testing

    - by SetiSeeker
    We have a large team of developers and testers, at a ratio of one tester for every developer. We have full bug tracking and reporting systems in place. We have test plans in place. For every change to the product, the testing team is involved in the design of the feature and included in the development process as much as possible. We build in small iterative blocks using the Scrum methodology, and the testers take part in every sprint, including the grooming sessions. Yet in every release of the product they miss even the simplest and most obvious defects. How can we improve this?

    Read the article

  • Has anyone used Genetify for A/B testing?

    - by Joshak
    I'm working on a small project that will likely run on WordPress, and I always like to run some split testing to improve conversion rates for various goals. Typically, if it's a small site that I have no budget for, or want to keep as inexpensive as possible, I use Google Website Optimizer; if I do have a budget, I go with Visual Website Optimizer. Both are great and affordable, but for fun I was checking out alternatives and found Genetify, which is an open-source project with some neat features. Searching around, I don't see many people talking about it, and I wondered if anyone here has used it. If so, what do you think of it?

    Read the article

  • Understanding how software testing works and what to test.

    - by RHaguiuda
    Intro: I've seen lots of topics here on SO about software testing and other terms I don't understand. Problem: as a beginner developer I unfortunately have no idea how software testing works, not even how to test a simple function. This is a shame, but that's the truth. I also hope this question can help other beginner developers. Question: can you help me understand this subject a little bit more? Maybe some questions to start with would help:
    - When I develop a function, how should I test it? For example, when working with a sum function, should I test every possible input value or just some limits? How about testing functions that take strings as parameters?
    - In a big program, do I have to test every single piece of code in it? When you program, do you test every piece of code you write?
    - How does automated testing work, and how can I try it? How do tools for automated testing work, and what do they do?
    - I've heard about unit testing. Can I have a brief explanation of it? What is a testing framework?
    If possible, please post some code examples to clarify the ideas. Any help on this topic is very welcome! Thanks.
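
    To make the first question concrete, here is a minimal sketch of unit tests for a sum function, in C# with NUnit (the Calculator class is hypothetical). Rather than every possible input, it checks a few representative values and the limits:

        using NUnit.Framework;

        public static class Calculator
        {
            // Hypothetical function under test.
            public static int Sum(int a, int b) { return a + b; }
        }

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Sum_TypicalValues() { Assert.AreEqual(7, Calculator.Sum(3, 4)); }

            [Test]
            public void Sum_ZeroIsIdentity() { Assert.AreEqual(5, Calculator.Sum(5, 0)); }

            [Test]
            public void Sum_NegativeValues() { Assert.AreEqual(-9, Calculator.Sum(-4, -5)); }

            [Test]
            public void Sum_AtTheUpperLimit_WrapsAround() // documents C#'s default unchecked overflow
            {
                Assert.AreEqual(int.MinValue, Calculator.Sum(int.MaxValue, 1));
            }
        }

    The idea is to pick boundary and representative cases (zero, negatives, overflow) instead of exhaustively enumerating inputs.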

    Read the article

  • How can I verify code that takes a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment": to recap, I am stuck with a large software system for which the TDD ideology of "test often" does not work. To make it even worse, features like precompiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as coding for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that doing the compilation and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source code verification / error spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have much support from the compiler or unit testing tools, so there should be a way to get back to the basics.

    Read the article

  • Should I use a separate class per test?

    - by user460667
    Taking the following simple method, how would you suggest I write a unit test for it? (I am using MSTest, however the concepts are similar in other tools.)

        public void MyMethod(MyObject myObj, bool validInput)
        {
            if (!validInput)
            {
                // Do nothing
            }
            else
            {
                // Update the object
                myObj.CurrentDateTime = DateTime.Now;
                myObj.Name = "Hello World";
            }
        }

    If I try to follow the rule of one assert per test, my logic would be that I should have a class-initialise method which executes the method under test, and then individual tests which check each property on myObj:

        public class MyTest
        {
            MyObject myObj;

            [TestInitialize]
            public void MyTestInitialize()
            {
                this.myObj = new MyObject();
                MyMethod(myObj, true);
            }

            [TestMethod]
            public void IsValidName()
            {
                Assert.AreEqual("Hello World", this.myObj.Name);
            }

            [TestMethod]
            public void IsDateNotNull()
            {
                Assert.IsNotNull(this.myObj.CurrentDateTime);
            }
        }

    Where I am confused is around the TestInitialize. If I execute the method under TestInitialize, I would need separate classes per variation of parameter inputs. Is this correct? That would leave me with a huge number of files in my project (unless I have multiple classes per file). Thanks
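
    One way out (a sketch, not the only answer): skip TestInitialize for this case and let each test arrange its own input through a small helper, so a single class covers every variation of parameter inputs. This assumes MyMethod lives on a hypothetical class called MyService:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MyMethodTests
        {
            // The helper replaces TestInitialize: each test picks its own input.
            private static MyObject Run(bool validInput)
            {
                var obj = new MyObject();
                new MyService().MyMethod(obj, validInput); // MyService is hypothetical
                return obj;
            }

            [TestMethod]
            public void ValidInput_SetsName()
            {
                Assert.AreEqual("Hello World", Run(true).Name);
            }

            [TestMethod]
            public void InvalidInput_LeavesNameUnset()
            {
                Assert.IsNull(Run(false).Name);
            }
        }

    TestInitialize is best reserved for setup that really is identical for every test in the class; as soon as the setup varies per test, a plain helper method keeps everything in one class.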

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss the fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often arrives as linkable executables or object code, which requires that my source code be retargeted and linked with it. When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested; they are meant to be linked with another system. But what is the guarantee that post-linkage and integration behaviour will be okay? There is also no sufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but I have not been able to find enough research on it to reach a conclusion, and I was hoping that people could point me in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it; surely the source code will not be supplied, due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing third-party object code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue! Thanks.

    Read the article

  • What is the best way to fake my database layer in a unit test?

    - by Michel
    Hi, I have a question about unit testing. Say I have a controller with one Create method which puts a new customer in the database:

        // code shortened a bit
        public ActionResult Create(FormCollection formCollection)
        {
            var c = new Client();
            c.Name = formCollection["name"];
            ClientService.Save(c);
        }

    ClientService would call a data-layer object and save the client in the database. What I do now is create a database test script and put my database into a known state before testing. So when I test this method in the unit test, I know that there must be one more client in the database, and I know what its name is. In short:

        ClientController cc = new ClientController();
        cc.Create(new FormCollection { { "name", "John" } });
        // I know I had 10 clients before
        Assert.AreEqual(11, ClientService.GetNumberOfClients());
        // the last inserted one is John
        Assert.AreEqual("John", ClientService.GetAllClients()[10].Name);

    I've read that unit tests should not hit the database, and I've set up an IoC container for the database classes, but then what? I can create a fake database class and make it do nothing, but then of course my assertions will not work: if I call GetNumberOfClients() it will always return X, because it has no interaction with the fake database class used in the Create method. I can also create a list of clients in the fake database class, but as two different instances will be created (one in the controller action and one in the unit test), they will have no interaction. What is the way to make this unit test work without a database?
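
    The usual fix for the "two different instances" problem is to inject one shared fake into the controller, so the controller writes to the same object the test asserts on. A minimal sketch, with hypothetical IClientService and Client types, and the action simplified to take the name directly so the example stays self-contained:

        using System.Collections.Generic;
        using System.Linq;
        using NUnit.Framework;

        public class Client { public string Name { get; set; } }

        public interface IClientService
        {
            void Save(Client client);
            int GetNumberOfClients();
            IList<Client> GetAllClients();
        }

        // In-memory fake: no database, just a list.
        public class FakeClientService : IClientService
        {
            private readonly List<Client> _clients = new List<Client>();
            public void Save(Client client) { _clients.Add(client); }
            public int GetNumberOfClients() { return _clients.Count; }
            public IList<Client> GetAllClients() { return _clients; }
        }

        public class ClientController
        {
            private readonly IClientService _service;
            public ClientController(IClientService service) { _service = service; }
            public void Create(string name) { _service.Save(new Client { Name = name }); }
        }

        [TestFixture]
        public class ClientControllerTests
        {
            [Test]
            public void Create_AddsTheClient()
            {
                var fake = new FakeClientService();
                var controller = new ClientController(fake); // same instance the test asserts on

                controller.Create("John");

                Assert.AreEqual(1, fake.GetNumberOfClients());
                Assert.AreEqual("John", fake.GetAllClients().Last().Name);
            }
        }

    Because the test no longer depends on ten pre-existing rows, the assertion can state the expected count absolutely instead of relative to whatever the database happened to contain.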

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that may not be ready will be applied to the production servers. So we're considering having each developer work on a feature/topic branch, where each of them works on their own features; when a feature is ready, it is merged into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features there. While this is not a huge issue, as they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced git users; we use the GitHub app for Mac OS X and Windows to commit our work.

    Read the article

  • User Acceptance Testing Defect Classification when developing for an outside client

    - by DannyC
    I am involved in a large development project in which we (a very small start-up) are developing for an outside client (a very large company). We recently received their first output from UAT testing of a fairly small iteration, which listed 12 'defects' triaged into three categories: Low, Medium, and High. The issue we have is whether everything in this list should be recorded as a 'defect': some of the issues they found would be better described as refinements, or even nice-to-haves, and some we think are not defects at all. The client's QA lead says that it is standard for them to label every issue they identify as a defect. While the relationship is good and we don't see a huge problem with this, we are concerned that if the relationship suffers in the future, these lists of 'defects' could prove costly for us. We don't want to come across as being difficult, or as taking things too personally, and we are happy to make all of the changes identified; however, we are a bit concerned, especially as there is an uneven power balance at play in our relationship. Are we being paranoid here? Or could we be setting ourselves up for problems down the line by agreeing to this classification?

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently; others represent the stages of a sequential operation, where if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with NUnit's parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
            StageOneTest
            StageTwoTest
            StageThreeTest

    This would be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that would enable the new tree layout? Do I need to customize NUnit to get this layout?
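
    One approach that comes close without modifying NUnit (a sketch, assuming NUnit 2.5 or later): parameterized test fixtures. Each [TestFixture(...)] argument produces its own node in the GUI tree, with the test methods as its children, which gives the file-then-function layout. The file names and the LegacyWrapper calls below are hypothetical:

        using NUnit.Framework;

        [TestFixture("File1.dat")]
        [TestFixture("File2.dat")]
        public class InputFileTests
        {
            private readonly string _inputFile;

            public InputFileTests(string inputFile)
            {
                _inputFile = inputFile;
            }

            [Test]
            public void CheckFileType()
            {
                Assert.IsTrue(LegacyWrapper.IsKnownFileType(_inputFile)); // hypothetical wrapper
            }

            [Test]
            public void GetFileTopLevelStructure()
            {
                Assert.IsNotNull(LegacyWrapper.ReadTopLevelStructure(_inputFile));
            }
        }

    The sequential stages would still need hand-rolling (for example, a flag set by an earlier stage that later stages check before proceeding), since NUnit 2.x does not guarantee test ordering.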

    Read the article

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we have reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything that we want to explore is on the screen (bounded visualizations)? Whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (apart from the obvious ones like pub-sub)?

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:

        public void OrderNewWidget(Widget widget)
        {
            if ((widget.PartNumber > 0) && (widget.PartAvailable))
            {
                WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

    I have several such methods in my code (the front half of an async web service call), and I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic: I make sure I have the stuff I need before I allow the web service call to happen. Part of me says, "Sure, you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). The other side of me says that if you don't unit test them and someone changes the guards, there could be problems. But the first part of me says back: if someone changes the guards, you are just making more work for them (because now they have to change both the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking widget availability, then I may not want that guard any more; if it is under unit test, I have to change two places now. I see pros and cons both ways, so I thought I would ask what others have done.
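
    For what it's worth, if the guards do get tested, the usual shape is to verify the call against a test double. A sketch using Moq, assuming the static WidgetOrderingService is first replaced by an injectable interface (all names hypothetical):

        using Moq;
        using NUnit.Framework;

        public class Widget
        {
            public int PartNumber { get; set; }
            public bool PartAvailable { get; set; }
        }

        public interface IWidgetOrderingService
        {
            void OrderNewWidgetAsync(int partNumber);
        }

        public class WidgetOrderer
        {
            private readonly IWidgetOrderingService _service;
            public WidgetOrderer(IWidgetOrderingService service) { _service = service; }

            public void OrderNewWidget(Widget widget)
            {
                if (widget.PartNumber > 0 && widget.PartAvailable)
                    _service.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

        [TestFixture]
        public class WidgetOrdererTests
        {
            [Test]
            public void DoesNotOrder_WhenPartIsUnavailable()
            {
                var service = new Mock<IWidgetOrderingService>();

                new WidgetOrderer(service.Object)
                    .OrderNewWidget(new Widget { PartNumber = 1, PartAvailable = false });

                service.Verify(s => s.OrderNewWidgetAsync(It.IsAny<int>()), Times.Never());
            }
        }

    Each guard then costs one short test per branch; whether that is worth it on a late project is exactly the judgement call the question is about.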

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now being distributed independently of Pex, but it maintains support for integration with NUnit and other testing frameworks.

    For NUnit, the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory.

    There is, however, a downside. Since Moles and NUnit follow different release cycles, and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit, you are able to support any version.

    Also note that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only need to make a couple of changes.

    Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);

                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092.

    Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

    Read the article

  • Should I make a separate unit test for a method if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of the parent class, but not their own, be unit tested separately? By separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods that, in most cases, return a new instance of a new type when a chained method is called. The returned instances only modify the root parent's state, not their own. An overly simplified example to get the point across:

        public class BoxedRabbits
        {
            private readonly Box _box;

            public BoxedRabbits(Box box)
            {
                _box = box;
            }

            public void SetCount(int count)
            {
                _box.Items += count;
            }
        }

        public class Box
        {
            public int Items { get; set; }

            public BoxedRabbits AddRabbits()
            {
                return new BoxedRabbits(this);
            }
        }

        var box = new Box();
        box.AddRabbits().SetCount(14);

    Say I write a unit test under the Box class's unit tests exercising box.AddRabbits().SetCount(14). I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call than to write a unit test for BoxedRabbits separately?
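
    For comparison, a direct test of BoxedRabbits is barely longer than the chained one; a sketch with NUnit, assuming the two classes above:

        using NUnit.Framework;

        [TestFixture]
        public class BoxedRabbitsTests
        {
            [Test]
            public void SetCount_AddsToTheParentBoxItems()
            {
                var box = new Box();

                new BoxedRabbits(box).SetCount(14);

                Assert.AreEqual(14, box.Items);
            }
        }

    Either way the assertion lands on the parent's state, since that is the only observable behaviour; the difference is just whether a test names BoxedRabbits directly or reaches it through Box.AddRabbits().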

    Read the article

  • What's wrong with performing unit tests against concrete implementations if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite, which was written 10 years ago and has served its purpose. One thing we cannot change is the database schema, as we have a base of 500+ clients using this system; the schema has over 150 tables. We have decided on Entity Framework 4.1 as the DAL and are still evaluating frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused as to how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing the Repository and Unit of Work patterns, mocking frameworks, etc. This means there will be cost, and investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, it will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all of its usual companions (repositories, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server / Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to swap it out whenever you like; however, that is very likely a YAGNI in our case. What we need is more like integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems more beneficial for us. Any thoughts on whether I am missing anything here?
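
    If the integration-test route is taken, one common pattern (a sketch, not a prescription) is to wrap each test in a TransactionScope that is never completed, so test data rolls back automatically and the database stays in a known state. The Product entity and ProductContext below are hypothetical stand-ins for the real EF 4.1 model:

        using System.Data.Entity;
        using System.Transactions;
        using NUnit.Framework;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ProductContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
        }

        [TestFixture]
        public class ProductPersistenceTests
        {
            private TransactionScope _scope;

            [SetUp]
            public void BeginRollbackScope()
            {
                // Disposing the scope without calling Complete() rolls
                // everything back, leaving the test database untouched.
                _scope = new TransactionScope();
            }

            [TearDown]
            public void RollBack()
            {
                _scope.Dispose();
            }

            [Test]
            public void SavingAProduct_AssignsAnId()
            {
                using (var context = new ProductContext())
                {
                    var product = new Product { Name = "Widget" };
                    context.Products.Add(product);
                    context.SaveChanges();

                    Assert.AreNotEqual(0, product.Id);
                }
            }
        }

    This keeps the concrete EF repositories under test, without the generic-repository scaffolding the question is wary of.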

    Read the article

  • Do you use unit tests at work? What benefits do you get from them?

    - by Anonymous
    I had planned to study and apply unit testing to my code, but after talking with my colleagues, some of them suggested that it's not necessary and has very little benefit. They also claim that only a few companies actually do unit testing on production software. I am curious how people have applied unit testing at work and what benefits they are getting from it, e.g., better code quality, reduced development time in the long term, etc.

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming, and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which turned out to be both difficult to learn and easy to avoid learning as a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as:
    - What parts of my application, and what sorts of things, aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Apparently the name of the method is the only way I can tell at a glance at the test results table what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why the test is failing.
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset, or how do I simulate the data?
    - Closely related to the previous point: if a class's main operation happens in a state that the program arrives at only after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization? (Hopefully that is the correct term; I mean preparing classes for testing by filling them with appropriate data.)
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are they even good ideas?
    - Are there design patterns that concern testing specifically?
    I guess the main themes in what I'd like to learn more about are (1) the overarching principles that should be followed (or at least considered) in every testing effort, and (2) popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored: not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems; in software development generally, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary: so my question is, what are some resources for learning about unit testing for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory and then apply systematically by repeatedly asking myself, "Is this test following the bible of testing closely enough?" and amending discrepancies when it isn't.

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:
    - The software development process (requirements, design, coding, testing, maintenance)
    - Testing roles (who performs each step in the process)
    - Testing methods (white box and black box)
    - Testing levels (unit testing, integration testing, etc.)
    - Testing processes (Agile, waterfall, spiral)
    - Testing tools (simulators, fixtures, and reporting software)
    - Testing of embedded systems
    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provides solutions for improving quality in a system where the tests must be performed manually, and are therefore minimized, would be ideal.

    Read the article
