Search Results

Search found 22065 results on 883 pages for 'performance testing'.

  • KD-Trees and missing values (vector comparison)

    - by labratmatt
    I have a system that stores vectors and allows a user to find the n most similar vectors to the user's query vector. That is, a user submits a vector (I call it a query vector) and my system spits out "here are the n most similar vectors." I generate the similar vectors using a KD-Tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors (stored vectors have 11 dimensions) I have stored. I have a couple of obvious solutions, but I'm not sure either one seems very good: (1) Create multiple KD-Trees, each built using the most popular subset of dimensions a user will search for. That is, if a user submits a query vector of three dimensions, x, y, z, I match that query to my already-built KD-Tree which only contains vectors of three dimensions, x, y, z. (2) Ignore KD-Trees when a user submits a query vector with missing values and compare the query vector to the stored vectors (kept in a table in a DB) one by one, using something like a dot product. This has to be a common problem; any suggestions? Thanks for the help.
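
    A minimal sketch of option (2), with hypothetical names and assuming NumPy is available: compute a "partial distance" restricted to the dimensions the user actually supplied and brute-force scan the stored vectors. It is O(stored vectors) per query rather than logarithmic, but needs no extra index per dimension subset:

        import numpy as np

        def partial_distance(query_values, candidate, present_dims):
            # query_values holds only the dimensions the user supplied;
            # candidate is a full stored vector, indexed down to match
            c = np.asarray(candidate)[present_dims]
            q = np.asarray(query_values)
            return float(np.sqrt(np.sum((q - c) ** 2)))

        def nearest_partial(query_values, present_dims, stored, n=5):
            # brute-force scan over all stored 11-dimensional vectors
            scored = sorted(
                (partial_distance(query_values, candidate, present_dims), i)
                for i, candidate in enumerate(stored))
            return [i for _, i in scored[:n]]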

  • Is it a good idea to mock/stub in integration tests?

    - by ez
    Say there are multiple requests in an integration test, some of them Sphinx calls (a locator, for example). Should we just stub out the entire response of these Sphinx calls, or, since it is an integration test, exercise the whole stack without stubbing? If we don't stub, how do we keep the test independent in situations where Sphinx fails, there is no internet connection, or a third-party server is unresponsive? Please give reasons. Thanks.
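
    One common middle ground, sketched below in pytest style (the client fixture and the searchd port are assumptions, not part of the question): probe whether the Sphinx daemon is reachable and skip, rather than fail, when it is not. The integration path stays unstubbed when the environment is complete, and the suite stays independent of the environment when it is not:

        import socket
        import pytest

        def sphinx_reachable(host="localhost", port=9312, timeout=0.5):
            # True if something is listening on the (assumed) searchd port
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        requires_sphinx = pytest.mark.skipif(
            not sphinx_reachable(), reason="searchd not reachable")

        @requires_sphinx
        def test_locator_end_to_end(client):   # client: hypothetical HTTP fixture
            response = client.get("/search?q=test")
            assert response.status_code == 200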

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests exercise some code that SLaks helped me with earlier, and he warned me that what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have: public static class DLLConfig { private static string _domain; public static string Domain { get { return _domain = AlwaysReadFromFile ? readCredentialFromFile(DOMAIN_TAG) : _domain ?? readCredentialFromFile(DOMAIN_TAG); } } } And my test is simple: string expected = "the value I know exists in the file"; string actual = DLLConfig.Domain; Assert.AreEqual(expected, actual); When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem with properties whose type is a custom enum; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threading issue with how MSTest works?
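
    One way to make the lazy read race-free, not necessarily the fix for this particular failure, is double-checked locking around the cached value, so no thread can ever observe a half-initialized read. A Python sketch of the pattern (read_credential_from_file is a stand-in for the real file access):

        import threading

        def read_credential_from_file(tag):
            return "example.local"              # stand-in for the real file read

        class DLLConfig:
            _lock = threading.Lock()
            _domain = None

            @classmethod
            def domain(cls):
                if cls._domain is None:          # fast path, no lock taken
                    with cls._lock:
                        if cls._domain is None:  # re-check under the lock
                            cls._domain = read_credential_from_file("domain")
                return cls._domain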

  • Website stress test in Python - Django

    - by RadiantHex
    Hi folks, I'm trying to build a small stress-test script to measure how quickly a set of requests gets done. I need to measure the speed of 100 requests. The problem is that I don't know how to implement it, as it would require the URL requests to be issued in parallel. Any ideas?
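
    A minimal sketch using only the standard library (the URL and worker count are placeholders): issue the 100 requests from a thread pool and time the whole batch with a monotonic clock:

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost:8000/"          # endpoint under test (adjust)
        N = 100

        def fetch(_):
            with urllib.request.urlopen(URL) as resp:
                resp.read()

        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=10) as pool:
            list(pool.map(fetch, range(N)))     # 10 requests in flight at a time
        elapsed = time.perf_counter() - start
        print(f"{N} requests in {elapsed:.2f} s ({N / elapsed:.1f} req/s)")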

  • UnitTest ExpectedException with multiple Exceptions

    - by masterchris_99
    I want one test method for multiple exceptions. The problem is that the test method stops after the first thrown exception. I know that I can do something like this: try { sAbc.ToInteger(); Assert.Fail(); // If it gets to this line, no exception was thrown } catch (ArgumentException) { } But I want to use the following code shape: [TestMethod, ExpectedException(typeof(ArgumentException), "...")] public void StringToIntException() { sAbc.ToInteger(); // throws an exception and stops here sDecimal.ToInteger(); // theoretically throws an exception too... } And I don't want to create one test method for each possible exception, like this: [TestMethod, ExpectedException(typeof(ArgumentException), "...")] public void StringToIntException() { sAbc.ToInteger(); } [TestMethod, ExpectedException(typeof(ArgumentException), "...")] public void StringToIntException() { sDecimal.ToInteger(); }
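
    In pytest the same intent is usually expressed by parametrizing one test, so each input gets its own expected-exception assertion and a failure on one input does not mask the others. A sketch, with int standing in for the ToInteger extension:

        import pytest

        @pytest.mark.parametrize("value", ["abc", "1.5"])
        def test_to_integer_rejects(value):
            with pytest.raises(ValueError):
                int(value)          # each value is checked independently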

  • Sensible unit test possible?

    - by nkr1pt
    Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists? I can write a test case based on the fact that my machine runs Linux and has the unrar tool installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code. I need to find a way to write a meaningful test which is not bound to the system or to the unrar tool being installed. How would you tackle this? public class Extractor { private EventBus eventBus; private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()}; private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{}; private ExtractCommand[] macExtractCommands = new ExtractCommand[]{}; @Inject public Extractor(EventBus eventBus) { this.eventBus = eventBus; } public boolean extract(DownloadCandidate downloadCandidate) { for (ExtractCommand command : getSystemSpecificExtractCommands()) { if (command.extract(downloadCandidate)) { eventBus.fireEvent(this, new ExtractCompletedEvent()); return true; } } eventBus.fireEvent(this, new ExtractFailedEvent()); return false; } private ExtractCommand[] getSystemSpecificExtractCommands() { String os = System.getProperty("os.name"); if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return linuxExtractCommands; } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return windowsExtractCommands; } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return macExtractCommands; } return null; } }
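
    The usual remedy is to split the two concerns: unit-test the extraction loop with injected fake commands (so no unrar binary is needed anywhere), and keep only a thin shim that picks the command table for the current OS. A Python-flavoured sketch of the idea, not the author's Java, which would inject mock ExtractCommand objects the same way:

        class FakeCommand:
            def __init__(self, result):
                self.result = result

            def extract(self, candidate):
                return self.result

        class Extractor:
            # command table and event sink injected, so tests never probe the host
            def __init__(self, commands, fire_event):
                self.commands = commands
                self.fire_event = fire_event

            def extract(self, candidate):
                for command in self.commands:
                    if command.extract(candidate):
                        self.fire_event("ExtractCompleted")
                        return True
                self.fire_event("ExtractFailed")
                return False

        def test_extract_fires_completed_event_on_success():
            events = []
            extractor = Extractor([FakeCommand(True)], events.append)
            assert extractor.extract("some.rar") is True
            assert events == ["ExtractCompleted"]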

  • Weird .net 4.0 exception when running unit tests

    - by vdh_ant
    Hi guys, I am receiving the following exception when trying to run my unit tests using .NET 4.0 under VS2010 with Moq 3.1. Attempt by security transparent method 'SPPD.Backend.DataAccess.Test.Specs_for_Core.When_using_base.Can_create_mapper()' to access security critical method 'Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsNotNull(System.Object)' failed. Assembly 'SPPD.Backend.DataAccess.Test, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is marked with the AllowPartiallyTrustedCallersAttribute, and uses the level 2 security transparency model. Level 2 transparency causes all methods in AllowPartiallyTrustedCallers assemblies to become security transparent by default, which may be the cause of this exception. The test I am running is really straightforward and looks something like the following: [TestMethod] public void Can_create_mapper() { this.SetupTest(); var mockMapper = new Moq.Mock<IMapper>().Object; this._Resolver.Setup(x => x.Resolve<IMapper>()).Returns(mockMapper).Verifiable(); var testBaseDa = new TestBaseDa(); var result = testBaseDa.TestCreateMapper<IMapper>(); Assert.IsNotNull(result); //<<< THROWS EXCEPTION HERE Assert.AreSame(mockMapper, result); this._Resolver.Verify(); } I have no idea what this means and I have been looking around and have found very little on the topic. The closest reference I have found is this http://dotnetzip.codeplex.com/Thread/View.aspx?ThreadId=80274 but it's not very clear on what they did to fix it... Anyone got any ideas?

  • Perl unit test - start a tcp server & continue

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which itself is another Perl file). I tried to start the TCP server by forking: if (! fork()) { system ("$^X server.pl") == 0 or die "couldn't start server" } So when I call "make test" after "perl Makefile.PL", this test starts and I can see the server starting, but after that the unit test just hangs. So I guess I need to start this server in the background, and I tried the "&" at the end to force it to start in the background and then let the test continue, but I still couldn't succeed. What am I doing wrong? Thanks.
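
    One common cause of the hang is that system() itself blocks until server.pl exits, so the forked child never finishes. A language-neutral version of the pattern, sketched in Python with a placeholder port: spawn the server without waiting for it, poll until it accepts connections instead of sleeping blindly, and kill it in the teardown:

        import socket
        import subprocess
        import time

        def wait_for_port(port, host="127.0.0.1", timeout=10.0):
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                try:
                    with socket.create_connection((host, port), timeout=0.5):
                        return
                except OSError:
                    time.sleep(0.1)
            raise RuntimeError(f"server never came up on port {port}")

        server = subprocess.Popen(["perl", "server.pl"])  # does not block
        try:
            wait_for_port(7000)        # placeholder port for the test server
            # ... run the client tests here ...
        finally:
            server.terminate()
            server.wait()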

  • Difficulty thinking of properties for FsCheck

    - by Benjol
    I've managed to get xUnit working on my little sample assembly. Now I want to see if I can grok FsCheck too. My problem is that I'm stumped when it comes to defining test properties for my functions. Maybe I just don't have a good sample set of functions, but what would be good test properties for these functions, for example? //transforms [1;2;3;4] into [(1,2);(3,4)] pairs : 'a list -> ('a * 'a) list //splits list into list of lists when predicate returns true for adjacent elements splitOn : ('a -> 'a -> bool) -> 'a list -> 'a list list //returns true if snd is bigger sndBigger : ('a * 'a) -> bool (requires comparison)
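
    For these particular functions, round-trip and invariant properties suggest themselves: flattening the output of pairs should give back the input (minus a dropped trailing element when the length is odd, assuming that is the intended behaviour), concatenating the sublists from splitOn should restore the original list, and sndBigger (a, b) should equal b > a by definition. A sketch of the first two pairs properties using Python's Hypothesis, which plays the same role as FsCheck:

        from hypothesis import given
        from hypothesis import strategies as st

        def pairs(xs):                          # stand-in for the F# pairs
            return list(zip(xs[::2], xs[1::2]))

        @given(st.lists(st.integers()))
        def test_pairs_preserves_elements_in_order(xs):
            flat = [x for pair in pairs(xs) for x in pair]
            assert flat == xs[:len(xs) - (len(xs) % 2)]

        @given(st.lists(st.integers()))
        def test_pairs_halves_the_length(xs):
            assert len(pairs(xs)) == len(xs) // 2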

  • How to efficiently show many Images? (iPhone programming)

    - by Thomas
    In my application I needed something like a particle system, so I did the following: while the application initializes I load a UIImage: laserImage = [UIImage imageNamed:@"laser.png"]; (UIImage *laserImage is declared in the interface of my controller.) Now every time I need a new particle this code makes one: // add new Laserimage UIImageView *newLaser = [[UIImageView alloc] initWithImage:laserImage]; [newLaser setTag:[model.lasers count]-9]; [newLaser setBounds:CGRectMake(0, 0, 17, 1)]; [newLaser setOpaque:YES]; [self.view addSubview:newLaser]; [newLaser release]; Please note that the images are only 17 × 1 pixels and model.lasers is an internal array used to keep all the calculation separate from the graphical output. So in my main drawing loop I set all the UIImageViews' positions to the calculated positions in my model.lasers array: for (int i = 0; i < [model.lasers count]; i++) { [[self.view viewWithTag:i+10] setCenter:[[model.lasers objectAtIndex:i] pos]]; } I incremented the tags by 10 because the default is 0 and I don't want to move all the views with the default tag. So the animation looks fine with about 10-20 images but gets really slow with about 60. So my question is: is there any way to optimize this without starting over in OpenGL ES? Thank you very much, and sorry for my English! Greetings from Germany, Thomas

  • How do I pipe the Java console output to a file?

    - by Ced
    I found a bug in an application that completely freezes the JVM. The produced stack trace would provide valuable information for the developers and I would like to retrieve it from the Java console. When the JVM crashes, the console is frozen and I cannot copy the contained text anymore. Is there a way to pipe the Java console directly to a file, or some other means of accessing the console output of a Java application? Update: I forgot to mention, without changing the code. I am a manual tester. Update 2: This is under Windows XP and it's actually a Web Start application. Piping the output of javaws jnlp-url does not work (empty file).

  • moqing static method call to c# library class

    - by Joe
    This seems like an easy enough issue, but I can't seem to find the keywords to make my searches effective. I'm trying to unit test by mocking out all objects within this method call. I am able to do so for all of my own creations except for this one: public void MyFunc(MyVarClass myVar) { Image picture; ... picture = Image.FromStream(new MemoryStream(myVar.ImageStream)); ... } FromStream is a static call on the Image class (part of the .NET framework). So how can I refactor my code to mock this out? I really don't want to provide an image stream to the unit test.
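
    The standard seam for an unmockable static call is a small injectable wrapper: production code depends on the wrapper, and the unit test substitutes a fake that never touches real image data. A Python sketch of the shape (Pillow's Image.open standing in for System.Drawing's Image.FromStream; all names hypothetical):

        import io

        class ImageLoader:
            def from_stream(self, stream):
                from PIL import Image           # assumes Pillow is installed
                return Image.open(stream)

        class FakeImageLoader:
            def from_stream(self, stream):
                return "fake-image"             # sentinel; nothing is decoded

        def my_func(my_var, loader):
            # production passes ImageLoader(); the test passes FakeImageLoader()
            picture = loader.from_stream(io.BytesIO(my_var.image_stream))
            return picture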

  • @ContextConfiguration in Spring 3.0 give me No default constructor found

    - by atomsfat
    I have already done the test using AbstractDependencyInjectionSpringContextTests and it works, but in Spring 3 it is deprecated, so I decided to try @ContextConfiguration. Spring now says that a default constructor is not found, but when I check, the class doesn't have any constructor at all. If I use this test, Spring provides the object:

    package atoms.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import java.util.List; import javax.persistence.EntityManager; import org.springframework.test.AbstractDependencyInjectionSpringContextTests; /** * * @author tsalazar */ public class ClienteServiceImplDeTest extends AbstractDependencyInjectionSpringContextTests { private ClienteService clienteService; public ClienteService getClienteService() { return clienteService; } public void setClienteService(ClienteService clienteService) { this.clienteService = clienteService; } public ClienteServiceImplDeTest(String testName) { super(testName); } @Override protected String[] getConfigLocations() { return new String[]{"PersistenceAppCtx.xml", "ServicesAppCtx.xml"}; } /** * Test of buscaCliente method, of class ClienteServiceImplDeTest. */ public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); String nombre = ""; System.out.println(clienteService); System.out.println("======================================="); } }

    But if I use this, Spring says that a default constructor is not found:

    package atoms.config.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import org.junit.runner.RunWith; import org.junit.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.transaction.TransactionConfiguration; import org.springframework.transaction.annotation.Transactional; /** * * @author tsalazar */ @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = {"/PersistenceAppCtx.xml", "/ServicesAppCtx.xml"}) @TransactionConfiguration(transactionManager = "transactionManager") @Transactional public class ClienteServiceImplTest { @Autowired private ClienteService clienteService; /** * Test of buscaCliente method, of class ClienteServiceImpl. */ @Test public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); System.out.println(clienteService); System.out.println("======================================="); } }

    This is how I declare the service interface:

    package atoms.portales.servicios; import atoms.portales.model.Cliente; /** * An interface for obtaining clients, with their branches, services, layouts and contracts. It also supports CRUD operations. * @author tsalazar */ public interface ClienteService { /** * Finds clients by name. * @param nombre */ public Cliente buscaCliente(String nombre); }

    The implementation:

    package atoms.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import org.springframework.stereotype.Repository; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; /** * A JPA-based implementation. Delegates to a JPA entity manager to issue data access calls against the backing repository. The EntityManager reference is provided by the managing container (Spring) automatically. */ @Service("clienteService") @Repository public class ClienteServiceImpl implements ClienteService { public ClienteServiceImpl() { } private EntityManager em; @PersistenceContext public void setEntityManager(EntityManager em) { this.em = em; } @Transactional(readOnly = true) public Cliente buscaCliente(String nombre) { Cliente cliente = em.getReference(Cliente.class, 1l); return cliente; } }

    The Spring configuration:

    <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-2.5.xsd"> <!-- Instructs Spring to perform declarative transaction management on annotated classes --> <tx:annotation-driven /> <!-- Drives transactions using local JPA APIs --> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <!-- Creates an EntityManagerFactory for use with the Hibernate JPA provider and a simple in-memory data source populated with test data --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" /> </property> </bean> <!-- Deploys an in-memory "booking" datasource --> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver" /> <property name="url" value="jdbc:hsqldb:hsql://localhost/test" /> <property name="username" value="sa" /> <property name="password" value="" /> </bean> <context:component-scan base-package="atoms.portales.servicios" /> </beans>

    This is the persistence.xml:

    <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="configuradorPortales" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <class>atoms.portales.model.Cliente</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> <property name="hibernate.show_sql" value="true"/> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider"/> </properties> </persistence-unit> </persistence>

    This is the error it gives me:

  • Seeding repository Rhino Mocks

    - by ahsteele
    I am embarking upon my first journey of test-driven development in C#. To get started I'm using MSTest and Rhino.Mocks. I am attempting to write my first unit tests against my ICustomerRepository. It seems tedious to new up a Customer for each test method. In Ruby on Rails I'd create a seed file and load the customer for each test. It seems logical that I could put this boilerplate Customer into a property of the test class, but then I would run the risk of it being modified. What are my options for simplifying this code? [TestClass] public class CustomerTests : TestClassBase { [TestMethod] public void CanGetCustomerById() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetById(5)).Return(customer); // assert Assert.AreEqual(customer, repository.GetById(5)); } [TestMethod] public void CanGetCustomerByDifId() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetCustomerByDifID("55")).Return(customer); // assert Assert.AreEqual(customer, repository.GetCustomerByDifID("55")); } [TestMethod] public void CanGetCustomerByLogin() { // arrange var customer = new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } } }; var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetCustomerByLogin("tdude")).Return(customer); // assert Assert.AreEqual(customer, repository.GetCustomerByLogin("tdude")); } } Test base class: public class TestClassBase { protected T Stub<T>() where T : class { return MockRepository.GenerateStub<T>(); } } ICustomerRepository and IRepository: public interface ICustomerRepository : IRepository<Customer> { IList<Customer> FindCustomers(string q); Customer GetCustomerByDifID(string difId); Customer GetCustomerByLogin(string loginName); } public interface IRepository<T> { void Save(T entity); void Save(List<T> entity); bool Save(T entity, out string message); void Delete(T entity); T GetById(int id); ICollection<T> FindAll(); }
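
    The lightest-weight option is a factory method (or a test-initialize hook) that builds a fresh canonical customer per call, so no test can corrupt another's data and the literal lives in one place. A pytest-flavoured sketch of the same idea, with hypothetical field names:

        from unittest.mock import MagicMock

        def make_customer():
            # one factory instead of three copy-pasted literals;
            # each call returns a fresh object, so mutation cannot leak
            return {
                "cust_id": 5,
                "dif_id": "55",
                "lookup_name": "The Dude",
                "logins": [{"login_cust_id": 5, "login_name": "tdude"}],
            }

        def test_can_get_customer_by_id():
            customer = make_customer()
            repository = MagicMock()
            repository.get_by_id.return_value = customer
            assert repository.get_by_id(5) == customer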

  • Get list of named queries in NHibernate

    - by Dan
    I have a dozen or so named queries in my NHibernate project and I want to execute them against a test database in unit tests to make sure the syntax still matches the changing domain/database model. Currently I have a unit test for each named query where I get and execute the query, for example: IQuery query = session.GetNamedQuery("GetPersonSummaries"); var personSummaryArray = query.List(); Assert.That(personSummaryArray, Is.Not.Null); This works fine, but I would like to have one unit test that loops through all of the named queries and executes them. Is there a way to discover all of the available named queries? Thanks, Dan

  • How can a Windows program temporarily change its time zone?

    - by Rob Kennedy
    I've written a function to return the time_t value corresponding to midnight on a given day. When there is no midnight for a given day, it returns the earliest time available; that situation can occur, for example, when Egypt enters daylight-saving time. This year, the time change takes effect at midnight on the night of April 29, so the clock goes directly from 23:59 to 01:00. Now I'm writing unit tests for this function, and one of the tests should replicate the Egypt scenario. In Unix, I can accomplish it like this: putenv("TZ", "Egypt", true); tzset(); After doing that, further calls to localtime behave as if they're in Egypt instead of Minnesota, and my tests pass. Merely setting the environment variable doesn't have any effect on Windows, though. What can I do to make the unit test think it's somewhere else without affecting the rest of the programs running on the system?

  • What is the correct way to unit test areas around exceptions

    - by Codek
    Hi, looking at the code coverage of our unit tests, we're quite high. But the last few % are tricky, because a lot of the uncovered code catches things like database exceptions, which in normal circumstances just don't happen. For example, the code prevents fields from being too long, so the only possible database exceptions occur if the DB is broken or down, or if the schema is changed under our feet. So is the only way to cover them to mock the objects such that the exception can be thrown? That seems a little bit pointless. Perhaps it's better to just accept not getting 100% code coverage? Thanks, Dan
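
    Mocking the failure is usually cheap and does test something real: not that the database can fail, but that the code translates the failure correctly. A minimal sketch of the pattern (all names hypothetical):

        from unittest.mock import MagicMock
        import pytest

        class DbError(Exception):
            pass

        def save_order(db, order):
            # code under test: wrap low-level failures in a domain error
            try:
                db.insert(order)
            except DbError as exc:
                raise RuntimeError("order could not be saved") from exc

        def test_save_order_wraps_db_failures():
            db = MagicMock()
            db.insert.side_effect = DbError("connection lost")
            with pytest.raises(RuntimeError):
                save_order(db, {"id": 1})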

  • Django formset unit test

    - by Py
    I can't get a unit test with a formset to run. Here is my attempt: class NewClientTestCase(TestCase): def setUp(self): self.c = Client() def test_0_create_individual_with_same_adress(self): post_data = { 'ctype': User.CONTACT_INDIVIDUAL, 'username': 'dupond.f', 'email': '[email protected]', 'password': 'pwd', 'password2': 'pwd', 'civility': User.CIVILITY_MISTER, 'first_name': 'François', 'last_name': 'DUPOND', 'phone': '+33 1 34 12 52 30', 'gsm': '+33 6 34 12 52 30', 'fax': '+33 1 34 12 52 30', 'form-0-address1': '33 avenue Gambetta', 'form-0-address2': 'apt 50', 'form-0-zip_code': '75020', 'form-0-city': 'Paris', 'form-0-country': 'FRA', 'same_for_billing': True, } response = self.c.post(reverse('client:full_account'), post_data, follow=True) self.assertRedirects(response, '%s?created=1' % reverse('client:dashboard')) and I get this error: ValidationError: [u'ManagementForm data is missing or has been tampered with'] My view: def full_account(request, url_redirect=''): from forms import NewUserFullForm, AddressForm, BaseArticleFormSet fields_required = [] fields_notrequired = [] AddressFormSet = formset_factory(AddressForm, extra=2, formset=BaseArticleFormSet) if request.method == 'POST': form = NewUserFullForm(request.POST) objforms = AddressFormSet(request.POST) if objforms.is_valid() and form.is_valid(): user = form.save() address = objforms.forms[0].save() if url_redirect=='': url_redirect = '%s?created=1' % reverse('client:dashboard') logon(request, form.instance) return HttpResponseRedirect(url_redirect) else: form = NewUserFullForm() objforms = AddressFormSet() return direct_to_template(request, 'clients/full_account.html', { 'form':form, 'formset': objforms, 'tld_fr':False, }) and my form file: class BaseArticleFormSet(BaseFormSet): def clean(self): msg_err = _('Ce champ est obligatoire.') non_errors = True if 'same_for_billing' in self.data and self.data['same_for_billing'] == 'on': same_for_billing = True else: same_for_billing = False for i in [0, 1]: form = self.forms[i] for field in form.fields: name_field = 'form-%d-%s' % (i, field ) value_field = self.data[name_field].strip() if i == 0 and self.forms[0].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False elif i == 1 and not same_for_billing and self.forms[1].fields[field].required and value_field =='': form.errors[field] = msg_err non_errors = False return non_errors class AddressForm(forms.ModelForm): class Meta: model = Address address1 = forms.CharField() address2 = forms.CharField(required=False) zip_code = forms.CharField() city = forms.CharField() country = forms.ChoiceField(choices=CountryField.COUNTRIES, initial='FRA')
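
    That ValidationError is what Django raises when the POST data lacks the formset's ManagementForm fields: the test posts only the form-0-* values. Adding the management keys should let the formset validate; a sketch, with the counts assumed from the formset's extra=2:

        post_data.update({
            'form-TOTAL_FORMS': '2',     # must match the number of forms posted
            'form-INITIAL_FORMS': '0',
            'form-MAX_NUM_FORMS': '',    # part of the rendered ManagementForm
        })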

  • Getting Bad file descriptor when running Tornado AsyncHTTPTestCase

    - by Will
    When running a test using the Tornado AsyncHTTPTestCase, I'm getting a stack trace that isn't related to the test. The test passes, so this is probably happening during test cleanup? I'm using Python 2.7.2 and Tornado 2.2. The test code is: class AllServersHandlerTest(AsyncHTTPTestCase): endpoint = AllServersHandler.endpoint # '/rest/test/' def test_server_status_with_advertiser(self): on_new_host(None, '127.0.0.1') response = self.fetch(self.endpoint, method='GET') result = json.loads(response.body, 'utf8').get('data') self.assertEquals(['127.0.0.1'], result) The test passes OK, but I get the following stack trace from the Tornado server: OSError: [Errno 9] Bad file descriptor INFO:root:200 POST /rest/serverStatuses (127.0.0.1) 0.00ms DEBUG:root:error closing fd 688 Traceback (most recent call last): File "C:\Python27\Lib\site-packages\tornado-2.2-py2.7.egg\tornado\ioloop.py", line 173, in close os.close(fd) OSError: [Errno 9] Bad file descriptor Any ideas how to cleanly shut down the test case?

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how I can get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: the problem was in how I interpreted the clock_gettime() man page.

    Let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as was recommended here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just once. If myOperation is a very fast operation, measuring it M times allows us to get a sense of the "real time" it takes, because the clock being used might not have the required precision to measure such an operation. So, executing myOperation only one time or M times really depends on whether the operation itself takes long enough for the clock precision we are using.

    I'm having trouble dealing with that M-times execution. Increasing M decreases (a lot) the final average value, which doesn't make sense to me. It's like this: on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 trips, because A to B is the same as B to A) and you measure that. Then you divide by 10, and the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back to A, the average gets lower and lower each time; it makes no sense to me.

    Enough theory, here's my code: #include <stdio.h> #include <time.h> #define MEASUREMENTS 1 #define OPERATIONS 1 typedef struct timespec TimeClock; TimeClock diffTimeClock(TimeClock start, TimeClock end) { TimeClock aux; if((end.tv_nsec - start.tv_nsec) < 0) { aux.tv_sec = end.tv_sec - start.tv_sec - 1; aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec; } else { aux.tv_sec = end.tv_sec - start.tv_sec; aux.tv_nsec = end.tv_nsec - start.tv_nsec; } return aux; } int main(void) { TimeClock sTime, eTime, dTime; int i, j; for(i = 0; i < MEASUREMENTS; i++) { printf(" » MEASURE %02d\n", i+1); clock_gettime(CLOCK_REALTIME, &sTime); for(j = 0; j < OPERATIONS; j++) { myOperation(); } clock_gettime(CLOCK_REALTIME, &eTime); dTime = diffTimeClock(sTime, eTime); printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec); printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS); } return 0; }

    Notes: the diffTimeClock function above is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions, as I would have to post long blocks of code; you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, for OPERATIONS = 1 the results are: » MEASURE 01 - NSEC (TOTAL): 27456580 - NSEC (OP): 27456580 For OPERATIONS = 100 the results are: » MEASURE 01 - NSEC (TOTAL): 218929736 - NSEC (OP): 2189297 For OPERATIONS = 1000 the results are: » MEASURE 01 - NSEC (TOTAL): 862834890 - NSEC (OP): 862834 For OPERATIONS = 10000 the results are: » MEASURE 01 - NSEC (TOTAL): 574133641 - NSEC (OP): 57413

    Now, I'm not a math whiz, far from it actually, but this doesn't make any sense to me whatsoever. I've already talked about this with a friend who's on this project with me and he also can't understand the differences. I don't understand why the value is getting lower and lower when I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time), no matter how many times I execute it. You could tell me that it actually depends on the operation itself, the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation is reading 5000 lines of text from a CSV file, separating the values by ; and inserting those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, as it actually took a couple of minutes; I couldn't stand looking at the screen waiting and went to eat something.
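
    For reference: struct timespec carries the elapsed time in two fields, and the printfs above report only tv_nsec. Once the measurement runs past a second, tv_sec silently absorbs the whole seconds and the leftover nanoseconds alone look arbitrary (and shrink as OPERATIONS grows). The elapsed time is tv_sec + tv_nsec / 1e9. A sketch of the corrected arithmetic in Python (my_operation is a stand-in for the real work; time.clock_gettime is available on Unix since Python 3.3):

        import time

        OPERATIONS = 10_000

        def my_operation():
            pass                        # stand-in for the real CSV parsing

        start = time.clock_gettime(time.CLOCK_REALTIME)   # float: sec + nsec combined
        for _ in range(OPERATIONS):
            my_operation()
        end = time.clock_gettime(time.CLOCK_REALTIME)

        total = end - start
        print(f"total: {total:.5f} s, per op: {total / OPERATIONS:.5f} s")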

  • Fast Lightweight Image Comparisson Metric Algorithm

    - by gav
    Hi all, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image, obtain a metric to use for similarity, check against use cases, return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique, adapted from various online tutorials, for running C code on the G1, and hence this language is available. Specific constraints: Qualcomm® MSM7201A™, 528 MHz processor; 320 x 480 pixel bitmap in 32-bit ARGB; ~2 seconds of computational time for the native method to get the metric; ~2 seconds to compare the metric of the current image with the training data. This is an academic project, so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
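
    A sketch of one cheap signature within those constraints: a normalised greyscale histogram is O(pixels) to build, a few hundred bytes to store, and O(bins) to compare, so both the per-image metric and the comparison against the training data fit the budget. Plain Python is shown for clarity; on the G1 the first loop is what the native C pass would do:

        def greyscale_histogram(argb_pixels, bins=64):
            # bucket 8-bit luma values, then normalise to a distribution
            hist = [0] * bins
            for argb in argb_pixels:
                r = (argb >> 16) & 0xFF
                g = (argb >> 8) & 0xFF
                b = argb & 0xFF
                luma = (r * 299 + g * 587 + b * 114) // 1000   # 0..255
                hist[luma * bins // 256] += 1
            n = float(len(argb_pixels))
            return [count / n for count in hist]

        def histogram_intersection(h1, h2):
            # 1.0 for identical distributions, 0.0 for disjoint ones
            return sum(min(a, b) for a, b in zip(h1, h2))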

  • Optimizing MySQL for ALTER TABLE of InnoDB

    - by schuilr
    Sometime soon we will need to make schema changes to our production database. We need to minimize downtime for this effort; however, the ALTER TABLE statements are going to run for quite a while. Our largest tables have 150 million records, and the largest table file is 50G. All tables are InnoDB, and it was set up as one big data file (instead of a file-per-table). We're running MySQL 5.0.46 on an 8-core machine with 16G of memory and a RAID10 config. I have some experience with MySQL tuning, but it usually focuses on reads or writes from multiple clients. There is lots of info to be found on the Internet on this subject; however, there seems to be very little information available on best practices for (temporarily) tuning your MySQL server to speed up ALTER TABLE on InnoDB tables, or for INSERT INTO .. SELECT FROM (we will probably use this instead of ALTER TABLE to have some more opportunities to speed things up a bit). The schema change we are planning is to add an integer column to all tables and make it the primary key, in place of the current primary key. We need to keep the 'old' column as well, so overwriting the existing values is not an option. What would be the ideal settings to get this task done as quickly as possible?

  • Find out which row caused the error

    - by Felipe Fiali
    I have a big fat query that's written dynamically to integrate some data. Basically what it does is query some tables, join some other ones, treat some data, and then insert it into a final table. The problem is that there's too much data, and we can't really trust the sources, because there could be some erroneous or inconsistent data. For example, I've spent almost an hour looking for an error while developing against a customer's database, because somewhere in the middle of my big fat query there was an error converting some varchar to datetime. It turned out that they had some sales dated '2009-02-29', an out-of-range date. And yes, I know: why was that stored as varchar? Well, the source database has 3 columns for dates, 'Month', 'Day' and 'Year'. I have no idea why it's like that, but still, it is. But how would I treat that, if the source is not trustworthy? I can't swallow the exceptions; I really need the error to come up to another level with the original message, but I want to provide some more info, so that the user could at least try to solve it before calling us. So I thought about displaying to the user the row number, or some ID that would at least give him some idea of which record he'd have to correct. That's also a hard job, because there will be times when the integration will run up to 80,000 records. And in an 80,000-record integration, a single generic error message, 'The conversion of a varchar data type to a datetime data type resulted in an out-of-range datetime value', means nothing at all. So any idea would be appreciated. Oh, I'm using SQL Server 2005 with Service Pack 3.
