Search Results

Search found 11400 results on 456 pages for 'automated testing'.

  • Recipe for creating a corrupt MySQL table

    - by Chaim Geretz
    We had a process that crashed while trying to manipulate an expected MySQL record set. Running the offending query from the mysql CLI showed the following:

        mysql> SELECT ...;
        ERROR 1030: Got error 127 from table handler

    Is there a way to easily recreate this condition so we can validate our fix? (The production DB has already been repaired.)

  • How can I measure file access performance (and volume) of a (Java) application

    - by stmoebius
    Given an application, how can I measure the amount of data read and written by that application, and the time spent reading from and writing to disk? The specific application is Java-based (JBoss), multi-threaded, and running as a service on Windows 7/2008 x64. My overall goal is to determine whether and why file access is a bottleneck in my application, so running the application in a defined and repeatable scenario is a given. File access may be local as well as on network shares. Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation). Any ideas?
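
    One way to get application-level numbers, independent of the Windows tooling mentioned above, is to decorate the streams the code opens itself. The sketch below is a minimal illustration in plain Java; the class name and the idea of aggregating into static counters are assumptions of mine, not something from the question, and it only measures reads that actually go through the wrapper.

        import java.io.FileInputStream;
        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.util.concurrent.atomic.AtomicLong;

        // Counts bytes and wall-clock time spent in bulk reads that pass through this wrapper.
        // Single-byte read() is not instrumented in this sketch.
        public class CountingInputStream extends FilterInputStream {
            public static final AtomicLong BYTES_READ = new AtomicLong();
            public static final AtomicLong NANOS_READING = new AtomicLong();

            public CountingInputStream(String path) throws IOException {
                super(new FileInputStream(path));
            }

            @Override
            public int read(byte[] b, int off, int len) throws IOException {
                long start = System.nanoTime();
                int n = super.read(b, off, len);           // delegate to the wrapped stream
                NANOS_READING.addAndGet(System.nanoTime() - start);
                if (n > 0) {
                    BYTES_READ.addAndGet(n);               // bytes actually read, not requested
                }
                return n;
            }
        }

    Dumping BYTES_READ and NANOS_READING at the end of the repeatable scenario gives a rough per-run read volume and read time; writes could be handled the same way with a FilterOutputStream.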

  • How do I get the name of the test method that was run in a TestNG tear-down method?

    - by Zachary Spencer
    Basically I have a tear-down method from which I want to log to the console which test was just run. How would I go about getting that string? I can get the class name, but I want the actual method that was just executed.

        class TestSomething {

            @AfterMethod
            public void tearDown() {
                System.out.println("The test that just ran was.... " + getTestThatJustRanMethodName());
            }

            @Test
            public void testCase() {
                assertTrue(1 == 1);
            }
        }

    This should output to the screen: "The test that just ran was.... testCase". However, I don't know what the magic getTestThatJustRanMethodName should actually be.
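
    For reference, TestNG can inject the result of the just-finished test directly into an @AfterMethod method, so a magic helper is not needed. A minimal sketch (the class below is just the question's example reworked under that assumption):

        import org.testng.ITestResult;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.Test;

        public class TestSomething {

            @AfterMethod
            public void tearDown(ITestResult result) {
                // TestNG fills in the ITestResult parameter for configuration methods.
                System.out.println("The test that just ran was.... " + result.getMethod().getMethodName());
            }

            @Test
            public void testCase() {
                org.testng.Assert.assertTrue(1 == 1);
            }
        }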

  • How do I fix my Unit Test to have global access to everything?

    - by SLC
    Usually when you add a unit test (in Visual Basic), the IDE pops up a message asking if you want to enable an option that lets the test access things like private methods. However, I am editing a solution that does not have this enabled. I'd like to enable it so my unit tests will work, but I can't find the setting. Can anyone tell me how to enable it after the project has been created?

  • Not able to call the method asynchronously in the unit test

    - by user43838
    Hi everyone, I am trying to call a method that is passed an object called parameters:

        public void LoadingDataLockFunctionalityTest()
        {
            DataCache_Accessor target = DataCacheTest.getNewDataCacheInstance();
            target.itemsLoading.Add("WebFx.Caching.TestDataRetrieverFactorytestsync", true);
            DataParameters parameters = new DataParameters("WebFx.Core", "WebFx.Caching.TestDataRetrieverFactory", "testsync");
            parameters.CachingStrategy = CachingStrategy.TimerDontWait;
            parameters.CacheDuration = 0;
            string data = (string)target.performGetForTimerDontWaitStrategy(parameters);
            TestSyncDataRetriever.SimulateLoadingForFiveSeconds = true;
            Thread t1 = new Thread(delegate()
            {
                string s = (string)target.performGetForTimerDontWaitStrategy(parameters);
                Console.WriteLine(s ?? String.Empty);
            });
            t1.Start();
            t1.Join();
            Thread.Sleep(1000);
            ReaderWriterLockSlim rw = DataCache_Accessor.GetLoadingLock(parameters);
            Assert.IsTrue(rw.IsWriteLockHeld);
            Assert.IsNotNull(data);
        }

    My test is failing all the time and I am not able to step through the method. Can someone please point me in the right direction? Thanks.

  • C#: why does my unit test show this strange behaviour?

    - by 5YrsLaterDBA
    I have a class to encrypt the connection string:

        public class SKM
        {
            private string connStrName = "AndeDBEntities";

            internal void encryptConnStr()
            {
                if (isConnStrEncrypted())
                    return;
                ...
            }

            private bool isConnStrEncrypted()
            {
                bool status = false;
                // Open app.config of executable.
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
                // Get the connection string from the app.config file.
                string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;
                status = !(connStr.Contains("provider"));
                Log.logItem(LogType.DebugDevelopment, "isConnStrEncrypted", "SKM::isConnStrEncrypted()",
                    "isConnStrEncrypted=" + status);
                return status;
            }
        }

    The code above works fine in my application, but not in my unit test project. In the unit test project I test the encryptConnStr() method, which calls the isConnStrEncrypted() method. A null reference exception is then thrown at this line:

        string connStr = config.ConnectionStrings.ConnectionStrings[connStrName].ConnectionString;

    I have to use an index like this to make the unit test pass:

        string connStr = config.ConnectionStrings.ConnectionStrings[0].ConnectionString;

    I remember it worked several days ago, at the time I added the unit test, but now it gives me an error. The unit test is not integrated with our daily automated build yet. We only have ONE connection string. It works in the product but not in the unit test, and I don't know why. Can anybody explain this to me?

  • Method parameters have incorrect values when using RowTest in VB.Net

    - by simon_bellis
    Hello, I have the following test method (VB.NET):

        <RowTest()> _
        <Row(1, 2, 3)> _
        Public Sub AddMultipleNumbers(ByVal number1 As Integer, ByVal number2 As Integer, ByVal result As Integer)
            Dim dvbc As VbClass = New VbClass()
            Dim actual As Integer = dvbc.Add(number1, number2)
            Assert.That(actual, [Is].SameAs(result))
        End Sub

    My problem is that when the test runs, using TestDriven.Net, the three method parameters are 0 and not the values I am expecting. I have referenced NUnit.Framework (v2.5.3.9345) and NUnitExtension.RowTest (v1.2.3.0).

  • Build model with nested model in rspec integration test

    - by user1116573
    I understand that I can do something like this in RSpec:

        let(:project) { Project.new }

    but in my app a project accepts_nested_attributes_for tasks, and when I generate the Project form I build a task along with it using:

        @project = Project.new
        @project.tasks.build

    I need something like:

        let(:project) { Project.new.tasks.build }

    but that doesn't seem to work. How can I do this as a let in my RSpec test?

  • Need help mocking an ASP.NET Controller in RhinoMocks

    - by Pure.Krome
    Hi folks, I'm trying to mock up a fake ASP.NET Controller. I don't have any concrete controllers, so I was hoping to just mock a Controller and have it work. This is what I have currently:

        _fakeRequestBase = MockRepository.GenerateMock<HttpRequestBase>();
        _fakeRequestBase.Stub(x => x.HttpMethod).Return("GET");
        _fakeContextBase = MockRepository.GenerateMock<HttpContextBase>();
        _fakeContextBase.Stub(x => x.Request).Return(_fakeRequestBase);
        var controllerContext = new ControllerContext(_fakeContextBase, new RouteData(),
            MockRepository.GenerateMock<ControllerBase>());
        _fakeController = MockRepository.GenerateMock<Controller>();
        _fakeController.Stub(x => x.ControllerContext).Return(controllerContext);

    Everything works except the last line, which throws a runtime error asking me for some Rhino.Mocks source code or something (which I don't have). See how I'm trying to mock up an abstract Controller - is that allowed? Can someone help me?

  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, partly because of which I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite. My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement which checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra row to the beginning of each test. The tests which have to interact with the test servers are not many, as I use flexmock in other situations; however, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.

  • How to run tests in plugins?

    - by Daniel Engmann
    We have split our Grails application into several in-place plugins. Now we want to have the tests in the same plugin as the classes they test. Is it possible to configure our application (e.g. in BuildConfig.groovy) so that the tests in the plugins are also executed when we run "test-app"?

  • How can I know when SQL Full Text Index Population is finished?

    - by GarethOwen
    We are writing unit tests for our ASP.NET application that run against a test SQL Server database. That is, the ClassInitialize method creates a new database with test data, and the ClassCleanup method deletes the database. We do this by running .bat scripts from code. The classes under test are given a connection string that connects to the unit test database rather than a production database. Our problem is that the database contains a full-text index, which needs to be fully populated with the test data in order for our tests to run as expected. As far as I can tell, the full-text index is always populated in the background. I would like to be able to either: create the full-text index, fully populated, with a synchronous (Transact-SQL?) statement, or find out when the full-text population is finished. Is there a callback option, or can I ask repeatedly? My current solution is to force a delay at the end of the ClassInitialize method (5 seconds seems to work), because I can't find anything in the documentation.
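
    One option for the second approach is to poll FULLTEXTCATALOGPROPERTY(..., 'PopulateStatus'), which reports 0 once the catalog is idle. The sketch below shows the polling idea only; it is written in Java/JDBC purely for illustration (the question's code is .NET, and the catalog name is a placeholder), but the same SELECT can be issued from the ClassInitialize method instead of a fixed delay.

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class FullTextPopulationWaiter {

            // Polls the catalog's population status until it reports idle (0) or the timeout expires.
            public static void waitUntilIdle(Connection conn, String catalogName, long timeoutMs) throws Exception {
                long deadline = System.currentTimeMillis() + timeoutMs;
                String sql = "SELECT FULLTEXTCATALOGPROPERTY('" + catalogName + "', 'PopulateStatus')";
                while (System.currentTimeMillis() < deadline) {
                    try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                        if (rs.next() && rs.getInt(1) == 0) {
                            return;                        // population finished
                        }
                    }
                    Thread.sleep(250);                     // short poll instead of a blind 5-second wait
                }
                throw new IllegalStateException("Full-text population did not finish within " + timeoutMs + " ms");
            }
        }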

  • Why is JUnit's ComparisonFailure not used by assertEquals(Object, Object)?

    - by Philippe Blayo
    In JUnit 4, do you see any drawback to throwing a ComparisonFailure instead of an AssertionError when assertEquals(Object, Object) fails? assertEquals(Object, Object) throws a ComparisonFailure if both expected and actual are Strings, and an AssertionError if either is not a String:

        @Test(expected = ComparisonFailure.class)
        public void twoString() {
            assertEquals("a String", "another String");
        }

        @Test(expected = AssertionError.class)
        public void oneString() {
            assertEquals("a String", new Object());
        }

    The two reasons why I ask the question: first, ComparisonFailure provides a far more readable way to spot the differences in the dialog box of Eclipse or IntelliJ IDEA (FEST-Assert throws this exception). Second, JUnit 4 already uses String.valueOf(Object) to build the "expected ... but was ..." message (the format method invoked by Assert.assertEquals(message, Object, Object) in junit-4.8.2):

        static String format(String message, Object expected, Object actual) {
            ...
            String expectedString = String.valueOf(expected);
            String actualString = String.valueOf(actual);
            if (expectedString.equals(actualString))
                return formatted + "expected: " + formatClassAndValue(expected, expectedString)
                        + " but was: " + formatClassAndValue(actual, actualString);
            else
                return formatted + "expected:<" + expectedString + "> but was:<" + actualString + ">";
        }

    Isn't it possible, in assertEquals(message, Object, Object), to replace

        fail(format(message, expected, actual));

    with

        throw new ComparisonFailure(message,
                formatClassAndValue(expectedObject, expectedString),
                formatClassAndValue(actualObject, actualString));

    Do you see any compatibility issue with other tools, or any algorithmic problem with that?
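
    As a side note, a workaround sketch (my own assumption, not JUnit's behaviour): a small helper assertion that always reports a mismatch as a ComparisonFailure, so the IDE diff dialog is available even for non-String objects.

        import org.junit.ComparisonFailure;

        public final class RichAssert {

            private RichAssert() {
            }

            // Reports any mismatch as a ComparisonFailure built from String.valueOf of both sides,
            // so Eclipse / IntelliJ IDEA can show their comparison dialog.
            public static void assertEqualsWithDiff(String message, Object expected, Object actual) {
                if (expected == null ? actual == null : expected.equals(actual)) {
                    return;
                }
                throw new ComparisonFailure(message, String.valueOf(expected), String.valueOf(actual));
            }
        }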

  • Is it possible to talk to the iPhone simulator/device?

    - by Plumenator
    I need to automate the build/deploy process for my iPhone applications from a script. I can use xcodebuild to build the project, then use AppleScript to deploy and debug/run the application. Assuming the application will stop by itself after a while, I need to collect the generated logs for verification. But the problem is that I have no way to know, from outside the application itself, when the application has ended. If the running time were fixed, I could again use AppleScript to stop the application (Cmd+Shift+Enter). So there has to be a way to connect to the device/simulator and wait on the application somehow.

  • How many lines of code does my class library have?

    - by zachary
    For metrics reasons I need to know how many lines of code my class library has. I'm doing this for code coverage. For example, if class library 1 has 50 lines of code and 100% coverage, and class library 2 has 500 lines of code and 0% coverage, my total coverage is roughly 9% (50 covered lines out of 550). Any idea how to do this? Is there a utility, or a way to use Visual Studio?

  • How to ignore a test within the JUnit test method itself

    - by Benju
    We have a number of integration tests that fail when our staging server goes down for weekly maintenance. When the staging server is down, we send a specific response that I can detect in my integration tests. When I get this response, instead of failing the test, I'm wondering whether it is possible to skip/ignore that test even though it has already started running. This would keep our test reports a bit cleaner. Does anybody have suggestions?
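
    One commonly suggested approach in JUnit 4 is org.junit.Assume: when an assumption fails mid-test, the runner treats the test as skipped rather than failed. A minimal sketch under that assumption; the staging-server call and the maintenance-response check are hypothetical placeholders, not code from the question:

        import static org.junit.Assume.assumeTrue;

        import org.junit.Test;

        public class StagingServerIT {

            @Test
            public void ordersAreSynchronised() {
                String response = callStagingServer();         // hypothetical call to the backend
                assumeTrue(!isMaintenanceResponse(response));   // skipped (not failed) during maintenance
                // ... the real assertions would go here ...
            }

            private String callStagingServer() {
                return "OK";                                    // placeholder
            }

            private boolean isMaintenanceResponse(String response) {
                return response.contains("MAINTENANCE");        // placeholder detection logic
            }
        }

    How a violated assumption shows up (ignored versus silently passed) depends on the runner and report format, so it is worth checking against the team's reporting tool.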

  • Mysterious difference between rake test and ruby

    - by standup75
    Here is the mystery: I have a scope which looks like this (in Image.rb):

        scope :moderate_all, delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}")

    Note that delegates is another scope that I define before moderate_all. When I leave it like this, I can run my test that checks that once an image has been "checked out" it is not available anymore. (I am not including the code of the test because it does not actually matter.) With this code, when I run "rake test" it fails, but if I do "ruby test/unit/image_test.rb" it works! I was starting to think I was having a bad day. Then I tried:

        scope :moderate_all, lambda { delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}") }

    And "rake test" passes! So my problem is solved, but why?

  • Dependency injection in C++

    - by Yorgos Pagles
    This is also a question that I asked in a comment in one of Miško Hevery's Google talks dealing with dependency injection, but it got buried in the comments. I wonder how the factory/builder step of wiring the dependencies together can work in C++. I.e. we have a class A that depends on B. The builder will allocate B on the heap, pass a pointer to B to A's constructor while also allocating A on the heap, and return a pointer to A. Who cleans up afterwards? Is it good to let the builder clean up when it is done? That seems to be the correct method, since the talk says that the builder should set up objects that are expected to have the same lifetime, or at least dependencies with a longer lifetime (I also have a question on that). What I mean in code:

        class builder {
        public:
            builder() : m_ClassA(NULL), m_ClassB(NULL) {
            }

            ~builder() {
                if (m_ClassB) {
                    delete m_ClassB;
                }
                if (m_ClassA) {
                    delete m_ClassA;
                }
            }

            ClassA *build() {
                m_ClassB = new ClassB;
                m_ClassA = new ClassA(m_ClassB);
                return m_ClassA;
            }
        };

    Now if there is a dependency that is expected to last longer than the lifetime of the object we are injecting it into (say ClassC is that dependency), I understand that we should change the build method to something like:

        ClassA *builder::build(ClassC *classC) {
            m_ClassB = new ClassB;
            m_ClassA = new ClassA(m_ClassB, classC);
            return m_ClassA;
        }

    What is your preferred approach?
