Search Results

Search found 7802 results on 313 pages for 'unit tests'.


  • geb StaleElementReferenceException

    - by Brian Mortenson
    I have just started using geb with webdriver for automating testing. As I understand it, when I define content on a page, the page element should be looked up each time I invoke a content definition.

        // In the content block of SomeModule, which is part of a moduleList on the page:
        itemLoaded { waitFor{ !loading.displayed } }
        loading { $('.loading') }

        // in the page definition
        moduleItems { index -> moduleList SomeModule, $("#module-list > .item"), index }

        // in a test on this page
        def item = moduleItems(someIndex)
        assert item.itemLoaded

    So in this code, I think $('.loading') should be called repeatedly, to find the element on the page by its selector, within the context of the module's base element. Yet I sometimes get a StaleElementReference exception at this point. As far as I can tell, the element does not get removed from the page, but even if it were, that should not produce this exception unless $ is doing some caching behind the scenes, and if that were the case it would cause all sorts of other problems. Can someone help me understand what's happening here? Why is it possible to get a StaleElementReferenceException while looking up an element? A pointer to relevant documentation or the geb source code would be useful as well.

    Read the article

  • How to emulate onLowMemory()?

    - by Samuh
    I have put some instructions in the onLowMemory() callback and want to test them. Is there a "direct" way to test the onLowMemory() function of the Application subclass? Or will I have to just overload the phone by starting many apps and doing memory-intensive tasks? Thanks.
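    One direct option, since onLowMemory() is just a public callback on the Application object, is to invoke it from an instrumentation test instead of trying to exhaust device memory. Below is a minimal sketch using the old android.test.ApplicationTestCase API; MyApplication and its wasLowMemoryHandled() flag are hypothetical stand-ins for whatever your callback actually does:

        public class MyApplicationLowMemoryTest
                extends android.test.ApplicationTestCase<MyApplication> {

            public MyApplicationLowMemoryTest() {
                super(MyApplication.class);
            }

            public void testOnLowMemoryRunsCleanup() throws Exception {
                createApplication();                // creates and attaches a MyApplication instance
                getApplication().onLowMemory();     // fire the callback directly
                // assert on whatever side effect your onLowMemory() implementation is supposed to have
                assertTrue(getApplication().wasLowMemoryHandled());
            }
        }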

    Read the article

  • Using Moq to Validate Separate Invocations with Distinct Arguments

    - by Thermite
    I'm trying to validate the values of arguments passed to subsequent mocked method invocations (of the same method), but cannot figure out a valid approach. A generic example follows:

        public class Foo
        {
            [Dependency]
            public Bar SomeBar { get; set; }

            public void SomeMethod()
            {
                this.SomeBar.SomeOtherMethod("baz");
                this.SomeBar.SomeOtherMethod("bag");
            }
        }

        public class Bar
        {
            public void SomeOtherMethod(string input) { }
        }

        public class MoqTest
        {
            [TestMethod]
            public void RunTest()
            {
                Mock<Bar> mock = new Mock<Bar>();
                Foo f = new Foo();
                mock.Setup(m => m.SomeOtherMethod(It.Is<string>("baz")));
                mock.Setup(m => m.SomeOtherMethod(It.Is<string>("bag"))); // this of course overrides the first call
                f.SomeMethod();
                mock.VerifyAll();
            }
        }

    Using a function in the Setup might be an option, but then it seems I'd be reduced to some sort of global variable to know which argument/iteration I'm verifying. Maybe I'm overlooking the obvious within the Moq framework?
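    For comparison only, here is the same idea of checking each invocation against its own expected argument, sketched in Java with Mockito rather than Moq (the Bar and someOtherMethod names simply mirror the question); the point of the pattern is that the checks happen as separate verifications after the call instead of as competing Setups before it:

        import static org.mockito.Mockito.inOrder;
        import static org.mockito.Mockito.mock;

        import org.junit.Test;
        import org.mockito.InOrder;

        public class SequentialArgumentsTest {

            // Stand-in for the question's Bar class.
            static class Bar {
                void someOtherMethod(String input) { }
            }

            @Test
            public void eachInvocationIsVerifiedWithItsOwnArgument() {
                Bar bar = mock(Bar.class);

                // What the code under test would do with its dependency:
                bar.someOtherMethod("baz");
                bar.someOtherMethod("bag");

                // Verify the two calls separately and in order, each with its own argument.
                InOrder inOrder = inOrder(bar);
                inOrder.verify(bar).someOtherMethod("baz");
                inOrder.verify(bar).someOtherMethod("bag");
            }
        }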

    Read the article

  • Keeping iPhone App private after AppStore approval for beta testing.

    - by Stack
    I have messed around with ad hoc distribution quite a bit and got it working too. The problem I am facing is that all the people I want to use as beta testers are "normal people" who never even sync their iPhone to iTunes on a computer. So you can understand how technically challenged these people are, which is fine with me because that is the audience I want to use for testing. All these guys can do for me is: if I give them an App Store link, they will download the app on their iPhone and test it for me. So basically ad hoc distribution (UDIDs, mobileprovision files and all that crap) is out of the question for me. My question is: after the App Store approves my app, is there a way for me to stay under the radar so that the general public cannot download the app until I am ready? From past experience I know that the moment you put an app out there, you get hundreds of downloads in the first week, and I don't want that to happen until my beta testing is finished.

    Read the article

  • Application test recommendation

    - by Polaris
    I have never used unit tests in my apps. I know there are many technologies for testing .NET-based applications (for example, NUnit). Which of these tools is the most comfortable and understandable to use? Can you point me to good articles where I can learn about unit tests and understand the key situations where I should use them?

    Read the article

  • JUnit - assertSame

    - by Michael
    Can someone tell me why assertSame() fails once the values go above 127?

        import static org.junit.Assert.*;
        ...
        @Test
        public void StationTest1() {
            ..
            assertSame(4, 4);                      // OK
            assertSame(10, 10);                    // OK
            assertSame(100, 100);                  // OK
            assertSame(127, 127);                  // OK
            assertSame(128, 128);                  // raises a junit.framework.AssertionFailedError!
            assertSame(((int) 128), ((int) 128));  // also junit.framework.AssertionFailedError!
        }

    I'm using JUnit 4.8.1.
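    The failure comes from Integer autoboxing rather than from JUnit itself: assertSame compares object references, the int arguments are boxed through Integer.valueOf, and that cache only covers -128 to 127, so larger values box to two distinct objects; assertEquals is the right assertion for comparing values. A standalone sketch of the same effect:

        public class IntegerCacheDemo {
            public static void main(String[] args) {
                Integer a = 127, b = 127;
                System.out.println(a == b);   // true  - both boxed from the shared Integer cache (-128..127)

                Integer c = 128, d = 128;
                System.out.println(c == d);   // false - 128 is outside the cache, so two distinct objects
            }
        }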

    Read the article

  • Unit Conversion from feet to meters

    - by user1742419
    I have to write a program that reads in a length in feet and inches and outputs the equivalent length in meters and centimeters. I have to create three functions: one for input, one or more for calculating, and one for output, and include a loop that lets the user repeat the computation for new input values until the user says he or she wants to end the program. I can't seem to get the input from one function to be used in the conversion function and then output by the next function. How do I do that? Thank you.

        #include <iostream>
        #include <conio.h>
        using namespace std;

        double leng;

        void length(double leng);
        double conv(double leng);
        void output(double leng);

        int main()
        {
            length(leng);
            conv(leng);
            output(leng);
            _getche();
            return 0;
        }

        void length(double leng)
        {
            cout << "Enter a length in feet, then enter a length in inches if needed: ";
            cin >> leng;
            return;
        }

        double conv(double leng)
        {
            return leng = leng * .3048;
        }

        void output(double leng)
        {
            cout << "Your input is converted to " << leng;
            return;
        }

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back. In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools

    Have you visited the treasures of the Visual Studio menu bar under Tools => Options => Test Tools lately? The options to enable or disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected or unselected from here. Ever wondered how you can change the default limit of 25 test results? This, again, can be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, well, this too can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready Steady Test Action!

    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new Test and Test List, you can also load an existing vsmdi file. You can also manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection can be done from here. You can open the various test windows from the Windows option under the Test menu. If you open the Test View window you will see that you have the option to group the tests by work item, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file?

    vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of TestLink elements nested within the TestList tags, along with the RunConfiguration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see which tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test – The Truth!

    In Visual Studio 2010, "Web Tests" have been renamed to "Web Performance Tests". Apart from the renaming, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from many users is "Do Web Tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer. The tool adds to that misconception.
    After all, you record in IE, and when running a web test you can select which browser to use, and then the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests.

    Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight and Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour that results from the JavaScript or the plugin, consider using Coded UI Tests. This page looks like it failed, when in fact it succeeded! Looking closely at the response, and at subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control is showing this message is that JavaScript has been disabled in this control.

    So, to reiterate, the web performance test recorder:
    - Sends and receives data at the HTTP layer.
    - Does NOT run a browser.
    - Does NOT run JavaScript.
    - Does NOT host ActiveX controls or plugins.

    There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo – Web Performance Test

    [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] - Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try to answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, and add validations? What are best practices for recording web performance tests?

    I have a web performance test, what next?

    Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site; can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can then work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration and under heavy load for a long duration, and to soak test the application (run a constant load for a week or two) to record the effects of constant load over really long durations; this is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to happen every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.
    BUT there are some best practices! => Goal-Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5000 users if your intranet application will only have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and to test realistically. It is recommended that you follow the outline below. It is an iterative process: refine your objectives, identify the key scenarios, determine the expected workload and the key metrics you want to report, record the web performance tests, simulate load and analyse the results.

    Is your application already deployed in Production? This is great! You can analyse the IIS logs to understand the user behaviour... But what are IIS logs?

    The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to configure IIS logs?

    For those ninjas who already have IIS logs configured (by the way, it's on by default) and need a way to analyse them, there is the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data, like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others, and it allows you to export the results of the queries to many output formats such as CSV, XML, SQL Server, charts and others; it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries.

    Demo – Load Test

    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try to answer the following questions: What are the types of performance testing? How do you perform goal-driven load testing, analyse the test run results and generate a report?

    Recap

    A quick recap of what we have covered so far. Thank you for taking the time out to read this blog post. In part III of this blog series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. See you in Part III.

    Read the article

  • HTML5: final version in 2014, a delay is needed to develop an interoperability test suite, according to the W3C

    HTML5: final version in 2014. A delay is needed to develop an interoperability test suite, according to the W3C. Update of 15/02/2011 by Idelways. HTML5 will not be ready before 2014, according to the new charter of its working group at the W3C, the consortium in charge of the specifications on which the future of web development will rely. The open standard should reach the "Last Call" stage next May, a key milestone corresponding to the satisfaction of the technical requirements. Developer communities will then be invited to comment on the speci...

    Read the article

  • A TDD Journey: 2- Naming Tests; Mocking Frameworks; Dependency Injection

    Test-Driven Development (TDD) relies on the repetition of a very short development cycle: starting from an initially failing automated test that defines the functionality that is required, then producing the minimum amount of code to pass that test, and finally refactoring the new code. Michael Sorens continues his introduction to TDD, which is more of a journey in six parts, by implementing the first tests and introducing the topics of Test Naming, Mocking Frameworks and Dependency Injection.

    Read the article

  • Is it OK to repeat code for unit tests?

    - by Pete
    I wrote some sorting algorithms for a class assignment and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only like 10 lines long and there are 3 of them, but only 1 line changes between the 3, so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example:

        [TestMethod]
        public void MergeSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            MergeSort merge = new MergeSort();
            merge.mergeSort(a, 0, a.Length - 1);
            CollectionAssert.AreEqual(a, b);
        }

        [TestMethod]
        public void InsertionSortAssertArrayIsSorted()
        {
            int[] a = new int[1000];
            Random rand = new Random(DateTime.Now.Millisecond);
            for (int i = 0; i < a.Length; i++)
            {
                a[i] = rand.Next(Int16.MaxValue);
            }
            int[] b = new int[1000];
            a.CopyTo(b, 0);
            List<int> temp = b.ToList();
            temp.Sort();
            b = temp.ToArray();
            InsertionSort merge = new InsertionSort();
            merge.insertionSort(a);
            CollectionAssert.AreEqual(a, b);
        }
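    For what it's worth, here is the refactoring being considered, sketched in JUnit-style Java rather than the question's MSTest/C#; MergeSort and InsertionSort are hypothetical stand-ins for the classes under test. The duplicated setup moves into one private helper and each test keeps only the line that differs:

        import static org.junit.Assert.assertArrayEquals;

        import java.util.Arrays;
        import java.util.Random;
        import org.junit.Test;

        public class SortTests {

            // Builds a random input array plus the expected (sorted) copy:
            // the part that was duplicated verbatim in every test.
            private int[][] randomInputAndExpected(int n) {
                Random rand = new Random();
                int[] input = new int[n];
                for (int i = 0; i < n; i++) {
                    input[i] = rand.nextInt(Short.MAX_VALUE);
                }
                int[] expected = input.clone();
                Arrays.sort(expected);
                return new int[][] { input, expected };
            }

            @Test
            public void mergeSortSortsArray() {
                int[][] data = randomInputAndExpected(1000);
                new MergeSort().mergeSort(data[0], 0, data[0].length - 1);   // hypothetical class under test
                assertArrayEquals(data[1], data[0]);
            }

            @Test
            public void insertionSortSortsArray() {
                int[][] data = randomInputAndExpected(1000);
                new InsertionSort().insertionSort(data[0]);                  // hypothetical class under test
                assertArrayEquals(data[1], data[0]);
            }
        }

    The helper itself is simple enough that it does not usually warrant its own test; if it did grow real logic, promoting it to a small test utility class with its own tests would be the next step.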

    Read the article

  • The Loser In Our Windows vs. Linux Tests: Intel Graphics

    Phoronix: "We are still working on the first part of our Windows 7 vs. Ubuntu 10.04 LTS benchmarks that are set to be published early next week, but so far there is one easy conclusion to draw from the completed tests: Intel's Linux graphics driver is still no match to the Intel Windows driver."

    Read the article

  • TDD - A question about the approach

    - by k25
    I have a question about TDD. I have always seen the recommendation that we should write unit tests first and then start writing code. But I feel that going the other way is much more comfortable (for me): write code and then the unit tests, because I feel we have much more clarity after we have written the actual code. If I write the code and then the tests, I may have to change my code a little bit to make it testable, even if I concentrate on creating a testable design. On the other hand, if I write the tests and then the code, the tests will change pretty frequently as the code shapes up. My questions are: 1) As I see a lot of recommendations to start with tests and then move on to coding, what are the disadvantages if I do it the other way, writing the code and then the unit tests? 2) Could you point me to some links that discuss this, or recommend some books on TDD?

    Read the article

  • Python: Inheritance of a class attribute (list)

    - by Sano98
    Hi everyone, inheriting a class attribute from a superclass and later changing the value for the subclass works fine:

        class Unit(object):
            value = 10

        class Archer(Unit):
            pass

        print Unit.value
        print Archer.value
        Archer.value = 5
        print Unit.value
        print Archer.value

    leads to the output:

        10
        10
        10
        5

    which is just fine: Archer inherits the value from Unit, but when I change Archer's value, Unit's value remains untouched. Now, if the inherited value is a list, the shallow copy effect strikes and the value of the superclass is also affected:

        class Unit(object):
            listvalue = [10]

        class Archer(Unit):
            pass

        print Unit.listvalue
        print Archer.listvalue
        Archer.listvalue[0] = 5
        print Unit.listvalue
        print Archer.listvalue

    Output:

        [10]
        [10]
        [5]
        [5]

    Is there a way to "deep copy" a list when inheriting it from the superclass? Many thanks, Sano

    Read the article

  • Strategy game programming in Java: how should the map and unit classes work and relate to each other?

    - by Gabriel A. Zorrilla
    As a way of learning Java I'm doing a little strategy game. You have a game map and military units. The game map and the units are JPanels, which are bound together in another class called GameWindow, which has a JLayeredPane where the game map sits below the units. Every time I click a unit (and therefore its JPanel) I can get the unit's information. The problem comes when I want to move a unit: each unit is drawn inside its own JPanel, so how do I represent the change in position over the game map? I'm thinking about reloading the whole game window based on a position change of a unit (because the JPanels that represent the units are created in the GameWindow, not drawn on the game map). Another, more elegant option would be repainting the whole map along with the units. But that way the units would lose their status as objects (no more JPanels, just being drawn in the game map's paint() method), and how would I tell if a unit was clicked? I don't know how people with more experience in games handle the class structure in strategy games. I believe I have the right structure (map and game units, each one an object), but I'm having difficulties making them work together. Here is a snapshot of the map with a couple of units:
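    One common alternative to giving every unit its own JPanel, sketched below with hypothetical Unit and MapPanel classes: keep units as plain model objects, paint them inside the map panel's paintComponent(), and resolve clicks by hit-testing the model instead of relying on per-unit components.

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Point;
        import java.awt.Rectangle;
        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import java.util.ArrayList;
        import java.util.List;
        import javax.swing.JPanel;

        class Unit {
            int x, y, size = 32;            // position in map (pixel) coordinates

            Rectangle bounds() {
                return new Rectangle(x, y, size, size);
            }
        }

        class MapPanel extends JPanel {
            final List<Unit> units = new ArrayList<Unit>();
            private Unit selected;

            MapPanel() {
                addMouseListener(new MouseAdapter() {
                    @Override
                    public void mouseClicked(MouseEvent e) {
                        selected = hitTest(e.getPoint());   // clicks resolved against the model, not components
                        repaint();
                    }
                });
            }

            private Unit hitTest(Point p) {
                for (Unit u : units) {
                    if (u.bounds().contains(p)) {
                        return u;
                    }
                }
                return null;
            }

            // Moving a unit is now just "change u.x / u.y and call repaint()".
            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);
                // draw the terrain here, then every unit at its current model position
                for (Unit u : units) {
                    g.setColor(Color.BLUE);                 // stand-in for the unit's sprite
                    g.fillRect(u.x, u.y, u.size, u.size);
                }
                if (selected != null) {
                    g.setColor(Color.YELLOW);
                    Rectangle r = selected.bounds();
                    g.drawRect(r.x, r.y, r.width, r.height);
                }
            }
        }

    Dropping a MapPanel into the GameWindow and filling units from the game model is enough to get click selection and movement working; the units stay real objects, they just stop being Swing components.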

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background

    I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to the possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder, which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter.

    Problem

    Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, each class has some methods that are shared and some that are client-specific. For instance, the Unit class has an addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if the unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies, only on the client, like adding announcements) and isMyUnit (a quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding started to seem a little strange. Firstly, the client code is clearly not shared, and yet it is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have an Image class like browsers do). So my question is: is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems.

    Possible workaround (with problems)

    I could use JavaScript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end:

        if (typeof exports !== 'undefined') module.exports = Unit;
        else mixin(Unit, LocalUnit);

    Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listener lists different on the server and the client. Now if I want to clear all subscribed events only from the shared simulator, I can't.
    Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Selenium – Use Data Driven tests to run in multiple browsers and sizes

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/04/selenium-ndash-use-data-driven-tests-to-run-in-multiple.aspx

    Selenium uses WebDriver (or is it the same? I'm still learning how they are connected) to run automated UI tests in many different browsers. For example, you can run the same test in Chrome and Firefox, and in a smaller-sized Chrome browser. The permutations can grow quickly. One way to get them to run in MSTest is to create a test method for each permutation (i.e. ChromeDeleteItem_Small, ChromeDeleteItem_Large, FFDeleteItem_Small, FFDeleteItem_Large) that each call the same base method, passing in the browser and size you'd like. This approach was causing a lot of duplicate code, so I decided to use the data-driven approach common to Coded UI or unit test methods.

    1. Create a class with a test method.
    2. Create a csv with two columns: BrowserType, BrowserSize
    3. Add rows for each permutation: Chrome, Large | Chrome, Small | Firefox, Large | Firefox, Small | IE, Large | IE, Small | ***
    4. Add the csv to the Visual Studio project.
    5. Set the Copy to Output Directory property to Copy always.
    6. Add the attribute: [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")]
    7. Run the test in the Test Explorer.

    Example:

        [CodedUITest]
        public class AllTasksTests : TasksTestBase
        {
            [TestMethod]
            [TestCategory("Tasks")]
            [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\TestMatrix.csv", "TestMatrix#csv", DataAccessMethod.Sequential), DeploymentItem("TestMatrix.csv")]
            public void CreateTask()
            {
                this.PrepForDataDrivenTest();
                base.CreateTaskTest("New Task");
            }
        }

        protected void PrepForDataDrivenTest()
        {
            var browserType = this.ParseBrowserType(Context.DataRow["BrowserType"].ToString());
            var browserSize = this.ParseBrowserSize(Context.DataRow["BrowserSize"].ToString());
            this.BrowserType = browserType;
            this.BrowserSize = browserSize;
            Trace.WriteLine("browser: " + browserType.ToString());
            Trace.WriteLine("browser size: " + browserSize.ToString());
        }

        /// <summary>Get the enum value from the string</summary>
        /// <param name="browserType">Chrome, Firefox, or IE</param>
        /// <returns>The browser type.</returns>
        private BrowserType ParseBrowserType(string browserType)
        {
            return (UITestFramework.BrowserType)Enum.Parse(typeof(UITestFramework.BrowserType), browserType, true);
        }

        /// <summary>Get the browser size enum value from the string</summary>
        /// <param name="browserSize">Small, Medium, Large</param>
        /// <returns>the browser size</returns>
        private BrowserSizeEnum ParseBrowserSize(string browserSize)
        {
            return (BrowserSizeEnum)Enum.Parse(typeof(BrowserSizeEnum), browserSize, true);
        }

        /// <summary>Change the browser to the size based on the enum.</summary>
        /// <param name="browserSize">The BrowserSizeEnum value to resize the window to.</param>
        private void ResizeBrowser(BrowserSizeEnum browserSize)
        {
            switch (browserSize)
            {
                case BrowserSizeEnum.Large:
                    this.driver.Manage().Window.Maximize();
                    break;
                case BrowserSizeEnum.Medium:
                    this.driver.Manage().Window.Size = new Size(800, this.driver.Manage().Window.Size.Height);
                    break;
                case BrowserSizeEnum.Small:
                    this.driver.Manage().Window.Size = new Size(500, this.driver.Manage().Window.Size.Height);
                    break;
                default:
                    break;
            }
        }

        /// <summary>Browser sizes for automation testing</summary>
        public enum BrowserSizeEnum
        {
            /// <summary>Large size, maximized to the desktop</summary>
            Large,

            /// <summary>Similar to tablets</summary>
            Medium,

            /// <summary>Phone sizes... 610px and smaller</summary>
            Small
        }

    Hope it helps!

    Read the article

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that is statically linking a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc.), I'm including the crtl unit Allen Bauer mentioned here.

        unit FooWrapper;

        interface

        implementation

        uses
          Crtl;  // Part of the Delphi RTL

        {$L FooLib.obj}  // Compiled with "bcc32 -q -c foolib.c"

        procedure Foo; cdecl; external;

        end.

    If I compile that unit in a Delphi project (.dproj) everything works correctly. If I compile that unit in a C++Builder project (.cbproj) it fails with the error:

        [ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ'

    And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found. So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions.

    Edit

    For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added an "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors:

        [ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ
        [ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ

    AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

    Read the article

  • How to generate tests with different names in testng, java?

    - by Ula Karzelek
    I'm using TestNG to run Selenium-based tests in Java. I have a bunch of repeated tests; generally they all do the same thing except for the test name and one parameter. I want to automate generating them, and I was thinking about using a factory. Is there a way to generate tests with different names? What would be the best approach to this? As of now I have something like the code below, and I want to create 10 tests like LinkOfInterestIsActiveAfterClick:

        @Test(dependsOnGroups = "loggedin")
        public class SmokeTest extends BrowserStartingStoping {

            public void LinkOfInterestIsActiveAfterClick() {
                String link = "link_of_interest";
                browser.click("*", link);
                Assert.assertTrue(browser.isLinkActive(link));
            }
        }

    My XML suite is auto-generated from Java code. Test names are crucial for logging which link is active and which one is not.
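    A hedged sketch of one common TestNG pattern for this: a @Factory method builds one test-class instance per link, and implementing org.testng.ITest lets each instance report its own name to listeners and reports. The class and extra link names below are illustrative, and the browser field is assumed to come from the question's own BrowserStartingStoping base class.

        import org.testng.Assert;
        import org.testng.ITest;
        import org.testng.annotations.Factory;
        import org.testng.annotations.Test;

        public class LinkIsActiveTest extends BrowserStartingStoping implements ITest {

            private final String link;

            public LinkIsActiveTest(String link) {
                this.link = link;
            }

            // One test instance per link; list the factory class in the suite, not the test class.
            public static class LinkTestFactory {
                @Factory
                public Object[] create() {
                    String[] links = { "link_of_interest", "another_link", "yet_another_link" };
                    Object[] tests = new Object[links.length];
                    for (int i = 0; i < links.length; i++) {
                        tests[i] = new LinkIsActiveTest(links[i]);
                    }
                    return tests;
                }
            }

            @Test(dependsOnGroups = "loggedin")
            public void linkIsActiveAfterClick() {
                browser.click("*", link);
                Assert.assertTrue(browser.isLinkActive(link), "Link is not active: " + link);
            }

            // Each generated test becomes distinguishable in listeners/reports by this name.
            @Override
            public String getTestName() {
                return "LinkIsActiveAfterClick[" + link + "]";
            }
        }

    Since the XML suite is generated from Java code, pointing it at LinkTestFactory is enough for TestNG to instantiate one LinkIsActiveTest per link.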

    Read the article

  • Writing selenium tests, should I just get it done or get it right?

    - by Peter Smith
    I'm attempting to drive my user interface (heavy on JavaScript) through Selenium. I've already tested the rest of my AJAX interaction with Selenium successfully. However, this one particular method seems to be eluding me because I can't seem to fake the correct click event. I could solve this problem by simply waiting in the test for the user to click a point and then continuing with the test, but this seems like a cop-out. But I'm really running out of time on my deadline to have this done and working. Should I just get this done and move on, or should I spend the extra (unknown) amount of time to fix this problem and be able to have my Selenium tests 100% automated?
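    If the suite is on WebDriver, one common fallback when a synthesized native click does not trigger the page's JavaScript handlers is to dispatch the click from inside the page with JavascriptExecutor. A minimal sketch; the URL and locator are placeholders:

        import org.openqa.selenium.By;
        import org.openqa.selenium.JavascriptExecutor;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.firefox.FirefoxDriver;

        public class JsClickExample {
            public static void main(String[] args) {
                WebDriver driver = new FirefoxDriver();
                try {
                    driver.get("http://localhost:8080/app");                       // placeholder URL
                    WebElement target = driver.findElement(By.id("chart-area"));   // placeholder locator
                    // Fire the click from inside the page so the JS-heavy UI sees a real DOM event.
                    ((JavascriptExecutor) driver).executeScript("arguments[0].click();", target);
                } finally {
                    driver.quit();
                }
            }
        }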

    Read the article

  • How do I pass tests with higher scores? [closed]

    - by user1867842
    How do I pass a programming-knowledge test with a higher score on oDesk.com? I have passed the PHP and JavaScript tests, but only with low scores, barely passing. This doesn't look too appealing to clients, and I'm afraid that is the reason I am not being hired for a job. I know I am capable of doing web work and such, but I haven't been accepted for an interview or anything. Any idea how to study for something like this?

    Read the article

  • Interoperability tests on HTTP 2.0 will soon begin; web browsing will be faster thanks to multiplexing

    Interoperability tests on HTTP 2.0 will soon begin, and web browsing will be faster thanks to multiplexing. The web is likely to change fundamentally soon. The IETF (Internet Engineering Task Force) is preparing version 2.0 of the HTTP protocol, explained Mark Nottingham, who currently leads the IETF HTTPBIS working group. For its first draft, HTTP 2.0 will build on SPDY, the protocol developed by Google. The working group lead nevertheless expects significant changes, among them a new header-compression algorithm. It should also be noted that although SPDY only allows encrypted connections, for HTTP 2.0 they will be optional. HTTP 2.0...

    Read the article
