Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 106 of 1421

  • FreeBSD ZFS RAID-Z2 performance issues

    - by Axel Gneiting
    I'm trying to build my own network attached storage based on FreeBSD + ZFS + standard components, but there are strange performance issues. The hardware specs are:

    - AMD Athlon II X2 240e processor
    - ASUS M4A78LT-M LE mainboard
    - 2 GiB Kingston ECC DDR3 (two sticks)
    - Intel Pro/1000 CT PCIe network adapter
    - 5x Western Digital Caviar Green 1.5 TB

    I created a RAID-Z2 zpool from all disks and installed FreeBSD 8.1 on that zpool following the tutorial. The SATA controllers are running in AHCI mode. Output of zpool status:

        pool: zroot
        state: ONLINE
        scrub: none requested
        config:

        NAME                                            STATE   READ WRITE CKSUM
        zroot                                           ONLINE     0     0     0
          raidz2                                        ONLINE     0     0     0
            gptid/7ef815fc-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/80344432-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/81741ad9-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/824af5cb-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0
            gptid/82f98a65-eab6-11df-8ea4-001b2163266d  ONLINE     0     0     0

    The problem is that write performance on the pool is very bad (<10 MB/s), and every application that accesses the disk becomes unresponsive every few seconds while writing. It seems like writing is fine until the ZFS ARC cache is full, and then ZFS stalls the entire system's I/O until it has finished writing that data. I'm also getting "kmem_malloc: kmem_map too small" kernel panics. I've already tried putting vm.kmem_size="1500M" and vm.kmem_size_max="1500M" into /boot/loader.conf, but it doesn't help. Does anyone know what's going on here? Do I really not have enough memory for ZFS to handle this RAID-Z2?

    Read the article

  • Help on Website response time KPI parameters

    - by geeth
    I am working on improving website performance. Here is the list of key performance indicators I am looking at for each page:

    - Total bytes downloaded
    - Number of requests
    - DNS lookup time
    - First byte time
    - Download time
    - DOM content load time
    - Total load time

    Is there an optimum value for each KPI that indicates good website performance? Please help me in this regard.

    Read the article

  • ReSharper wrapping test result stacktraces?

    - by lance
    ReSharper 4.5's test runner will run MSTest tests out of the box, and that's what I'm doing. When a test fails, I click on the test to see the stacktrace and the failure reason. The pane I'm clicking in to do this is the "Unit Test Sessions" pane. The lower half of this pane (or the right half, if you have it configured that way) shows the reason and stacktrace for the failure. This section/pane does not word wrap, so I have to use the mouse to scroll left and right all the time. How can I make this reason/stacktrace pane word wrap?

    Read the article

  • Configuring a load test in Visual Studio

    - by Sergej Andrejev
    I'm creating a load test in Visual Studio. Could somebody help me choose the right settings? What I want is to test the application with 1 user, then give it a rest for two minutes, then test with 4 users, and so on. Also, after each two-minute rest, I would like the user load to be increased gradually.

    Read the article

  • Writing a simple RSpec test to check that Rake tasks are correct

    - by John Feminella
    I'm trying to be diligent about checking my rake tasks with RSpec tests, but in the process of feeling my way around I seem to have hit a wall. I've got a really simple RSpec test that looks like this:

        # ./test/meta_spec.rb
        describe "Rake tasks" do
          require 'rake'

          before(:each) do
            @rake = Rake::Application.new
            @rake.load_rakefile   # => Error here!
            Rake.application = @rake
          end

          after(:each) do
            Rake.application = nil
          end

          it "should have at least one RSpec test to execute" do
            Rake.application["specs"].spec_files.size.should > 0
          end
        end

    I have a simple task called "specs" defined in ./Rakefile.rb, which has an RSpec task that includes all the *_spec.rb files. If I put the @rake.load_rakefile method in, I want that Rakefile to load, but instead it just bombs out. If I comment it out, however, the test fails because the "specs" task is (understandably) not defined. Where am I going wrong?

    Read the article

  • Web Performance testing using VS2010 "Testing a file download"

    - by cheedep
    Hi all, I am trying out the VS 2010 testing tools for the first time. I recorded a web performance test whose actions include a file download, implemented as in the KB article at http://support.microsoft.com/kb/812406, i.e. by streaming chunks of 10000 bytes. However, my test fails at the download step with "The response stream has been closed". Please help me understand why this is happening; any suggestions on how you would test such a file download are also welcome. My main aim is to see how the download performs in a load test over an intercontinental 350 kbps connection with files of about 30-50 MB. Thanks.

    Read the article

  • Recording object method calls to generate automated tests

    - by Constantin
    I have an object with a large interface and I would like to record all of its method calls made during a user session. Ideally this sequence of calls would be available as source code:

        myobj.MethodA(42);
        myobj.MethodB("spam", false);
        ...

    I would then convert this code into a test case to get a kind of automated smoke/load test. WCF Load Test can do this for WCF services, and the Coded UI test recorder can do this for UIs. What are my options for a POCO class? I am in a position to edit the application code and replace the object in question with some recording/forwarding proxy.

    Read the article

  • Rails "rake test" crashing

    - by Homer J. Simpson
    Hi, this might be rather unspecific, but I'm trying to do 'rake test' on a new Rails app, and end up with:

        (in /Users/myname/dev/railstest/RailsApplication1)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -I"lib:test" "/Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb"

    No other output. System is Leopard 10.5, Rails 2.3.5, Ruby 1.8.6. Any ideas?

    Read the article

  • ORM solutions (JPA; Hibernate) vs. JDBC

    - by Grasper
    I need to be able to insert/update objects at a consistent rate of at least 8000 objects every 5 seconds in an in-memory HSQL database. I have done some comparative performance testing between Spring/Hibernate/JPA and pure JDBC, and I have found a significant difference in performance with HSQL. With Spring/Hib/JPA, I can insert 3000-4000 of my 1.5 KB objects (with a one-many and a many-many relationship) in 5 seconds, while with direct JDBC calls I can insert 10,000-12,000 of those same objects. I cannot figure out why there is such a huge discrepancy. I have tweaked the Spring/Hib/JPA settings a lot trying to get close in performance, without luck. I want to use Spring/Hib/JPA for future purposes and expandability, and because the foreign key relationships (one-many and many-many) are difficult to maintain by hand; but the performance requirements seem to point towards using pure JDBC. Any ideas why there would be such a huge discrepancy?

    Read the article

  • Fluent NHibernate not working outside of NUnit test fixtures

    - by thorkia
    Okay, here is my problem... I created a data layer using the RTM Fluent NHibernate. My create-session code looks like this:

        _session = Fluently.Configure()
            .Database(SQLiteConfiguration.Standard.UsingFile("Data.s3db"))
            .Mappings(m =>
            {
                m.FluentMappings.AddFromAssemblyOf<ProductMap>();
                m.FluentMappings.AddFromAssemblyOf<ProductLogMap>();
            })
            .ExposeConfiguration(BuildSchema)
            .BuildSessionFactory();

    When I reference the module in a test project and create a test fixture that looks something like this:

        [Test]
        public void CanAddProduct()
        {
            var product = new Product { Code = "9", Name = "Test 9" };
            IProductRepository repository = new ProductRepository();
            repository.AddProduct(product);

            using (ISession session = OrmHelper.OpenSession())
            {
                var fromDb = session.Get<Product>(product.Id);
                Assert.IsNotNull(fromDb);
                Assert.AreNotSame(fromDb, product);
                Assert.AreEqual(fromDb.Id, product.Id);
            }
        }

    my tests pass. When I open up the created SQLite DB, the new Product with Code 9 is in it, and the tables for Product and ProductLog are there. Now, when I create a new console application, reference the same library, and do something like this:

        Product product = new Product() { Code = "10", Name = "Hello" };
        IProductRepository repository = new ProductRepository();
        repository.AddProduct(product);
        Console.WriteLine(product.Id);
        Console.ReadLine();

    it doesn't work. I actually get a pretty nasty exception chain. To save you lots of headaches, here is the summary.

    Top-level exception: "An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail." The PotentialReasons collection is empty.

    Inner exception: "The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use element in the application configuration file to specify the full name of the assembly."

    Both the unit test library and the console application reference the exact same version of System.Data.SQLite. Both projects have the exact same DLLs in the debug folder. I even tried copying the SQLite DB the unit test library created into the debug directory of the console app and removing the build-schema lines, and it still fails. If anyone can help me figure out why this won't work outside of my unit tests it would be greatly appreciated. This crazy bug has me at a standstill.

    Read the article

  • how to test or describe endless possibilities?

    - by koen
    Example class in pseudocode:

        class SumCalculator
          method calculate(int1, int2) returns int

    What is a good way to test this? In other words, how should I describe the behavior I need?

        test1: canDetermineSumOfTwoIntegers
        test2: returnsSumOfTwoIntegers
        test3: knowsFivePlusThreeIsEight

    Test1 and test2 seem vague, and each would still need to test a specific calculation, so the name doesn't really describe what is being tested. Yet test3 is very limited. What is a good way to test such classes? One possible middle ground is sketched below.
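
    One common middle ground (a hedged suggestion, not from the question) is a table-driven test: the name states the general behavior, as in test2, while a small table of representative cases keeps it concrete without baking a single example into the name. A minimal Python sketch of that idea; SumCalculator here is just a rendering of the pseudocode above, and the example pairs are illustrative:

        # Hedged sketch: name the behavior, feed it representative cases.
        # SumCalculator and the case table are illustrative, not from the question.
        import unittest

        class SumCalculator:
            def calculate(self, int1, int2):
                return int1 + int2

        class SumCalculatorTest(unittest.TestCase):
            def test_returns_sum_of_two_integers(self):
                cases = [(5, 3, 8), (0, 0, 0), (-2, 7, 5)]  # representative, not exhaustive
                calc = SumCalculator()
                for a, b, expected in cases:
                    self.assertEqual(expected, calc.calculate(a, b))

        if __name__ == "__main__":
            unittest.main()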

    Read the article

  • Symfony - Delete and reload all database records for each test

    - by Andree
    Hi there, from the Jobeet tutorial provided on the Symfony website, I found out that I can load fixture data each time I run a unit test by using this line:

        Doctrine_Core::loadData(sfConfig::get('sf_test_dir').'/fixtures');

    However, I want to delete and reload all records from the database each time I run a unit test. Currently, I'm doing this manually (running symfony doctrine:build --all before each test). Can someone show me the correct way to do this?

    Read the article

  • Need help trying to diagnose Symmetrix SAN performance issues

    - by arcain
    I am helping to benchmark hardware for a new SQL Server instance, and the volume presented to the OS for the data files is carved from a set of spindles on a Symmetrix SAN. The server has yet to have SQL Server installed, so the only activity on the box is our benchmarking. Now, our storage engineers say that this volume and its resources are dedicated to our new server (I don't have access to see the actual SAN config); however, the performance benchmarks are troubling. For example, the numbers look good until suddenly, and randomly, we see wait times of 100 seconds in our I/O benchmarking tool and disk queue lengths of 255 in perfmon. This SAN has an 8 GB cache, and there are other applications besides ours that use the SAN. I'm wondering if (even though the spindles for our volumes should be dedicated to us) the cache may be getting hammered during the performance testing, or perhaps the spindles our volumes are on aren't really dedicated to us. We're not getting much traction from our storage engineers in helping us track down the problem, so if anybody has experience with diagnosing a problem like this and would like to share insights and troubleshooting methodologies, I'd appreciate it.

    Read the article

  • I am having issues with django test

    - by Mohamed
    I have this test case:

        def test_loginin_student_control_panel(self):
            c = Client()
            c.login(username="tauri", password="gaul")
            response = c.get('/student/')
            self.assertEqual(response.status_code, 200)

    The view associated with the test case is this:

        @login_required
        def student(request):
            return render_to_response('student/controlpanel.html')

    So my question is: why does the above test case redirect the user to the login page? Shouldn't c.login take care of authenticating the user?
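
    A likely explanation (a hedged note, not from the question): Django tests run against a fresh, empty test database, so c.login() quietly returns False unless that user has been created first, and the subsequent request is then redirected by @login_required. A minimal sketch of a version that creates the user up front; the class name is hypothetical, while the username, password, and URL mirror the question:

        # Hedged sketch: create the user in setUp so the test database contains it.
        from django.contrib.auth.models import User
        from django.test import TestCase, Client

        class StudentPanelTest(TestCase):      # hypothetical class name
            def setUp(self):
                # Without this user, login() returns False and /student/ redirects to the login page.
                User.objects.create_user(username="tauri", password="gaul")

            def test_loginin_student_control_panel(self):
                c = Client()
                logged_in = c.login(username="tauri", password="gaul")
                self.assertTrue(logged_in)     # fail fast if authentication itself failed
                response = c.get('/student/')
                self.assertEqual(response.status_code, 200)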

    Read the article

  • Test whether a glob has any matches in bash

    - by Ken Bloom
    If I want to check for the existence of a single file, I can test for it using test -e filename or [ -e filename ]. Suppose I have a glob and I want to know whether any files exist whose names match the glob. The glob can match 0 files (in which case I need to do nothing), or it can match 1 or more files (in which case I need to do something). How can I test whether a glob has any matches? (test -f glob* fails if the glob matches more than one file.)

    Read the article

  • python threading and performance?

    - by kumar
    I have to do a heavy I/O-bound operation, i.e. parsing large files and converting them from one format to another. Initially I did this serially, i.e. parsing one file after another. Performance was very poor (it used to take 90+ seconds). So I decided to use threading to improve the performance, creating one thread for each file (4 threads):

        for file in file_list:
            t = threading.Thread(target=self.convertfile, args=file)
            t.start()
            ts.append(t)

        for t in ts:
            t.join()

    But to my astonishment, there is no performance improvement whatsoever. It still takes around 90+ seconds to complete the task. As this is an I/O-bound operation, I had expected threading to improve the performance. What am I doing wrong?
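
    A hedged side note (not from the question): even though the work involves file I/O, the parsing and format conversion itself executes Python bytecode, and CPython's global interpreter lock (GIL) only lets one thread run bytecode at a time, so CPU-heavy work tends not to speed up with threads. A minimal sketch of the same fan-out using processes instead of threads; convertfile and file_list stand in for the question's own names:

        # Hedged sketch: one process per file so each worker has its own interpreter and GIL.
        from multiprocessing import Pool

        def convertfile(path):
            # parse `path` and write it out in the target format (placeholder)
            pass

        if __name__ == "__main__":
            file_list = ["a.dat", "b.dat", "c.dat", "d.dat"]  # illustrative file names
            with Pool(processes=4) as pool:                   # four workers, as in the question
                pool.map(convertfile, file_list)              # blocks until all conversions finish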

    Read the article

  • Fitnesse - multiple tests but only the last test being executed

    - by simon_bellis
    I have a FitNesse test that I want to run twice: once in Firefox and once in IE. The test is below. The problem I am having is that only the second test is being executed by FitNesse.

        !define COMMAND_PATTERN {%m %p}
        !define TEST_RUNNER {dotnet2\FitServer.exe}
        !****>Global Variables
        !define testUrl {http://localhost:1516/Web.App/Login.aspx}
        *****!
        !define browserToUse (IE)
        !include -c -seamless .FrontPage.LoginTests
        !define browserToUse (FireFox)
        !include -c -seamless .FrontPage.LoginTests

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import pytest

        class testsomething:
            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here

            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here

            def test1(self):
                # Do some test1 related stuff

            def test2(self):
                # Do some test2 related stuff

            ...

            def test40(self):
                # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the test suites in parallel, i.e. I want testsuite1, testsuite2, testsuite3, and testsuite4 to start execution in parallel, but the individual tests within each suite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using py.test -n 4, py.test gathers all the tests and randomly load-balances them among the 4 workers. This leads to the setup_class method being executed for every test within a testsuiteX.py module, which defeats my purpose; I want setup_class to be executed only once per class and the tests executed serially after that. Essentially what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3, and worker4 all run in parallel. Is there a way to achieve this with the pytest-xdist framework? The only option I can think of is to kick off a separate process to execute each test suite individually from within runner.py, as sketched below:

        def test_execute_func(testsuite_path):
            subprocess.process('py.test %s' % testsuite_path)

        if __name__ == '__main__':
            # Gather all the testsuite names
            for each testsuite:
                multiprocessing.Process(test_execute_func, (testsuite_path,))
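
    A hedged sketch of the runner.py fallback described above: start one py.test subprocess per suite directory, so each suite's tests run serially inside their own process while the suites themselves run in parallel. The paths are illustrative. (As an aside, newer pytest-xdist releases also offer --dist loadfile / --dist loadscope, which keep all tests from one file or class on the same worker.)

        # Hedged sketch: one py.test process per suite, all launched in parallel.
        # Directory names mirror the layout in the question and are illustrative.
        import os
        import subprocess

        def run_suite(suite_dir):
            # Each suite gets its own py.test process, so its tests stay serial within it.
            return subprocess.Popen(["py.test", suite_dir])

        if __name__ == "__main__":
            suites = [os.path.join("tests", d) for d in sorted(os.listdir("tests"))
                      if d.startswith("testsuite")]
            procs = [run_suite(s) for s in suites]    # launch all suites in parallel
            exit_codes = [p.wait() for p in procs]    # wait for every suite to finish
            raise SystemExit(max(exit_codes) if exit_codes else 0)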

    Read the article

  • Unit Testing Private Method in Resource Managing Class (C++)

    - by BillyONeal
    I previously asked this question under another name but deleted it because I didn't explain it very well. Let's say I have a class which manages a file. Let's say that this class treats the file as having a specific file format, and contains methods to perform operations on this file:

        class Foo
        {
            std::wstring fileName_;
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                //Construct a Foo here.
            };
            int getChecksum()
            {
                //Open the file and read some part of it
                //Long method to figure out what checksum it is.
                //Return the checksum.
            }
        };

    Let's say I'd like to be able to unit test the part of this class that calculates the checksum. Unit testing the parts of the class that load in the file and such is impractical, because to test every part of the getChecksum() method I might need to construct 40 or 50 files! Now let's say I'd like to reuse the checksum method elsewhere in the class. I extract the method so that it now looks like this:

        class Foo
        {
            std::wstring fileName_;
            static int calculateChecksum(const std::vector<unsigned char> &fileBytes)
            {
                //Long method to figure out what checksum it is.
            }
        public:
            Foo(const std::wstring& fileName) : fileName_(fileName)
            {
                //Construct a Foo here.
            };
            int getChecksum()
            {
                //Open the file and read some part of it
                return calculateChecksum( something );
            }
            void modifyThisFileSomehow()
            {
                //Perform modification
                int newChecksum = calculateChecksum( something );
                //Apply the newChecksum to the file
            }
        };

    Now I'd like to unit test the calculateChecksum() method, because it's complicated and easy to test, and I don't care about unit testing getChecksum() because it's simple and very difficult to test. But I can't test calculateChecksum() directly because it is private. Does anyone know of a solution to this problem?

    Read the article

  • Need help with writing test

    - by London
    I'm trying to write a test for this class, called Receiver:

        public void get(People person) {
            if (null != person) {
                LOG.info("Person with ID " + person.getId() + " received");
                processor.process(person);
            } else {
                LOG.info("Person not received abort!");
            }
        }

    Here is the test:

        @Test
        public void testReceivePerson() {
            context.checking(new Expectations() {{
                receiver.get(person);
                atLeast(1).of(person).getId();
                will(returnValue(String.class));
            }});
        }

    Note: receiver is an instance of the Receiver class (real, not a mock), processor is an instance of the Processor class (real, not a mock) which processes the person (a mock object of the People class). getId is a String method, not an int method; that is not a mistake. The test fails with: unexpected invocation of person.getId(). I'm using jMock. As I understand it, to execute the get method properly I need to mock person.getId(), and I've been sniping around in circles for a while now; any help would be appreciated.

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • C++ Mock/Test boost::asio::io_stream - based Asynch Handler

    - by rbellamy
    I've recently returned to C/C++ after years of C#. During those years I've found the value of mocking and unit testing. Finding resources for mocks and unit tests in C# is trivial; with regard to mocking, not so much for C++. I would like some guidance on what others do to mock and test asynchronous io_service handlers with Boost. For instance, in C# I would use a MemoryStream to mock an IO.Stream, and I am assuming this is the path I should take here. In particular:

    - C++ Mock/Test best practices
    - boost::asio::io_service Mock/Test best practices
    - C++ Async Handler Mock/Test best practices

    I've started the process with googlemock and googletest.

    Read the article

  • Best way to test a batch email script

    - by John
    I have a PHP mail script sitting on a LAMP VPS server. The script grabs about 1000 email addresses and sends each of them an HTML email. I tested the script with about half a dozen of my own test email accounts and things worked fine, but I am concerned something may go wrong when I actually use this script for 1000 emails. Some things I would like to test for are:

    1) Confirm all 1000 emails were sent and received
    2) Test to make sure emails did not end up in people's spam folders
    3) Detect any other general failures

    Does anyone have suggestions on how I can test for the above cases? I would also like to read about your experiences building batch email scripts.

    Read the article

  • How does TopCoder evaluate code?

    - by Carlos
    If you are familiar with TopCoder, you know that your source code gets a final grade/points; this depends on time, how many compiles, etc., with performance being one of the most heavily weighted factors. But how can they test that? Is there some sort of simple code (Java or C++) for doing it that you could share, so I can evaluate it and hopefully write my own to test the programs I write for university? This is sort of a follow-up question to this one, where I ask whether shorter code results in the best performance. P.S.: I'm interested both in how TopCoder measures performance and in writing code to test performance.
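
    As a hedged illustration of the "writing code to test performance" part (in Python rather than Java/C++, and with a placeholder program path and input file): one simple approach is to run the compiled program several times and record the best and average wall-clock time.

        # Hedged sketch: time a program over repeated runs; the command and input
        # file are illustrative placeholders, not anything TopCoder actually runs.
        import subprocess
        import time

        def time_program(cmd, runs=5):
            timings = []
            for _ in range(runs):
                start = time.perf_counter()
                subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
                timings.append(time.perf_counter() - start)
            return min(timings), sum(timings) / len(timings)

        if __name__ == "__main__":
            best, avg = time_program(["./my_solution", "input.txt"])
            print(f"best: {best:.4f}s  average: {avg:.4f}s")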

    Read the article
