Search Results

Search found 24383 results on 976 pages for 'configuration testing'.

Page 118/976

  • Enabling depth testing when using CAOpenGLLayer

    - by Andrew
    If one is using a subclass of NSOpenGLView then one enables depth testing by selecting a 16/24/32 bit buffer from the attributes menu in Xcode, and then adding glEnable(GL_DEPTH_TEST); glClear(GL_DEPTH_BUFFER_BIT); to the drawRect method. However, in the application I'm creating I'm rendering OpenGL content via the drawInCGLContext method of a CAOpenGLLayer which is contained within a subclass of NSView. This means that it is no longer possible to create a depth buffer via the inspector. Does anyone know how I can achieve this in such a situation?

  • Thread testing for time

    - by DanielFH
    Hi there :) I'm making a thread for my application that's going to do an exit operation at a given time (only hours and minutes; day/month doesn't matter). Is this the right way to do it, and also the right way to test for the time? I'm testing against a 24-hour clock, by the way, not AM/PM. In another class I'm then going to call this with something like new Thread(new ExitThread()).start();

        public class ExitThread implements Runnable {

            @Override
            public void run() {
                int exitTime = 233000;
                while (true) {
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // Re-read the clock on every pass; reading it only once
                    // before the loop would compare a stale value forever.
                    Date date = new Date(System.currentTimeMillis());
                    String time = new SimpleDateFormat("HHmmss").format(date);
                    int currentTime = Integer.parseInt(time);
                    if (currentTime >= exitTime) {
                        // do exit operation here
                    }
                }
            }
        }

    Thanks. //D

  • Good tool to test WCF service (not SOAP/HTTP webservice)

    - by Kangkan
    I recently found SOAPUI and discovered that it is a great tool for testing any SOAP/HTTP service. Until now we have been writing our own driver to test our services (WCF-based, netTCP binding). After the SOAPUI experience, I am looking for a tool that can be used with the same ease and that has built-in facilities for load testing, functional testing, etc. The other thought in my mind is that services I intend to deploy with netTCP could first be tested over an HTTP binding using SOAPUI; once found suitable, the binding could be changed to the intended one. I would like to hear the views of the experts here. Thanks.

  • Why is log4j not behaving as expected?

    - by Kieveli
    I have a co-worker who is trying to get log4j to behave as follows:

        - log to stdout
        - by default, disable most output
        - show only messages from java.sql.PreparedStatement at level debug and up

    He's getting caught up in 'level' vs 'priority'. Here is his config file:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE log4j:configuration SYSTEM "D:/Java/apache-log4j-1.2.15/src/main/resources/org/apache/log4j/xml/log4j.dtd" >
        <log4j:configuration>

            <!-- Appenders -->
            <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%5p %d{ISO8601} [%t][%x] %c - %m%n" />
                </layout>
            </appender>

            <!-- Loggers for iBATIS and JDBC database -->
            <logger name="java.sql.PreparedStatement">
                <level value="debug"/>
            </logger>

            <!-- The Root Logger -->
            <root>
                <level value="error"/>
                <appender-ref ref="stdout"/>
            </root>

        </log4j:configuration>

    The output from this shows no messages at all. How does he need to change his log4j.xml config file to make it behave as he's expecting?

  • Unittest and mock

    - by user1410756
    I'm testing with unittest in Python and it's OK. Now I have introduced mock and I need to resolve a question. This is my code:

        from mock import Mock
        import unittest

        class Matematica(object):
            def __init__(self, op1, op2):
                self.op1 = op1
                self.op2 = op2

            def adder(self):
                return self.op1 + self.op2

            def subs(self):
                return abs(self.op1 - self.op2)

            def molt(self):
                return self.op1 * self.op2

            def divid(self):
                return self.op1 / self.op2

        class TestMatematica(unittest.TestCase):
            """Tests for the Matematica class"""
            def testing(self):
                """Addition"""
                mat = Matematica(10, 20)
                self.assertEqual(mat.adder(), 30)
                """Subtraction"""
                self.assertEqual(mat.subs(), 10)

        class test_mock(object):
            def __init__(self, matematica):
                self.matematica = matematica

            def execute(self):
                self.matematica.adder()
                self.matematica.adder()
                self.matematica.subs()

        if __name__ == "__main__":
            result = unittest.TextTestRunner(verbosity=2).run(TestMatematica('testing'))
            a = Matematica(10, 20)
            b = test_mock(a)
            b.execute()
            mock_foo = Mock(b.execute)  # return_value = 'rafa'
            mock_foo()
            print mock_foo.called
            print mock_foo.call_count
            print mock_foo.method_calls

    This code runs, and the result of the prints is: True, 1, []. Now I need to count how many times self.matematica.adder() and self.matematica.subs() are called. Thanks.
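
    One way to get those counts, sketched below against the question's own classes: wrap the real instance's methods in Mock objects via the wraps argument, so the arithmetic still runs while every call is recorded. This assumes it is acceptable to decorate the instance under test rather than replace it.

        from mock import Mock

        mat = Matematica(10, 20)
        # Each mock delegates to the real method but records its calls.
        mat.adder = Mock(wraps=mat.adder)
        mat.subs = Mock(wraps=mat.subs)

        runner = test_mock(mat)
        runner.execute()

        print(mat.adder.call_count)  # 2
        print(mat.subs.call_count)   # 1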

  • Testing for the existence of a field in a class

    - by Brett
    Hi, I have a quick question. I have a 2D array that stores instances of classes. The elements of the array are assigned a particular class based on a text file that is read earlier in the program. Since I do not know, without looking in the file, which class is stored at a particular element, I could end up referring to a field that doesn't exist at that index (referring to appearance when an instance of temp is stored there). I have come up with a method of testing this, but it is long-winded and requires a second matrix. Is there a function to test for the existence of a field in a class?

        class temp():
            name = "default"

        class temp1():
            appearance = "@"
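
    Python has a built-in for exactly this check: hasattr() (or getattr() with a default). A minimal sketch against the two classes above; the helper name and the fallback glyph are just illustrative:

        def appearance_or_default(obj, default=" "):
            # Use the object's appearance field if it exists,
            # otherwise fall back to a default glyph.
            if hasattr(obj, "appearance"):
                return obj.appearance
            return default
            # equivalent one-liner: return getattr(obj, "appearance", default)

        print(appearance_or_default(temp1()))  # "@"
        print(appearance_or_default(temp()))   # " "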

  • Stuck with web service configuration

    - by Allan Chua
    I'm currently stuck with the web service that I'm creating right now. When testing it locally it all works fine, but when I try to deploy it to the web server it throws me the following error:

        An error occurred while trying to make a request to URI '...my web service URI here....'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details.

    Here is my web.config:

        <?xml version="1.0"?>
        <configuration>
          <configSections>
          </configSections>
          <system.webServer>
            <modules runAllManagedModulesForAllRequests="true">
            </modules>
            <validation validateIntegratedModeConfiguration="false" />
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="2000000000" />
              </requestFiltering>
            </security>
          </system.webServer>
          <connectionStrings>
            <add name="........" providerName="System.Data.SqlClient" />
          </connectionStrings>
          <appSettings>
            <!-- Testing -->
            <add key="DataConnectionString" value="..........." />
          </appSettings>
          <system.web>
            <compilation debug="true" targetFramework="4.0">
              <buildProviders>
                <add extension=".rdlc" type="Microsoft.Reporting.RdlBuildProvider, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
              </buildProviders>
            </compilation>
            <httpRuntime executionTimeout="1200" maxRequestLength="2000000" />
          </system.web>
          <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
            <behaviors>
              <serviceBehaviors>
                <behavior name="Service1">
                  <serviceMetadata httpGetEnabled="true" />
                  <serviceDebug includeExceptionDetailInFaults="true" />
                  <dataContractSerializer maxItemsInObjectGraph="2000000000" />
                </behavior>
                <behavior name="">
                  <serviceMetadata httpGetEnabled="true" />
                  <serviceDebug includeExceptionDetailInFaults="false" />
                </behavior>
                <behavior name="nextSPOTServiceBehavior">
                  <serviceMetadata httpsGetEnabled="true"/>
                  <serviceDebug includeExceptionDetailInFaults="true" />
                  <dataContractSerializer maxItemsInObjectGraph="2000000000" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <bindings>
              <basicHttpBinding>
                <binding name="SecureBasic" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="Transport" />
                  <readerQuotas maxArrayLength="2000000" maxStringContentLength="2000000"/>
                </binding>
                <binding name="BasicHttpBinding_IDownloadManagerService" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="Transport" />
                </binding>
              </basicHttpBinding>
            </bindings>
            <services>
              <service behaviorConfiguration="nextSPOTServiceBehavior" name="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.DownloadManagerService">
                <endpoint binding="basicHttpBinding" bindingConfiguration="SecureBasic" name="basicHttpSecure" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" />
                <!--<endpoint binding="basicHttpBinding" bindingConfiguration="" name="basicHttp"
                    contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" />-->
                <!--<endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IDownloadManagerService" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" />
                <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />-->
              </service>
            </services>
          </system.serviceModel>
        </configuration>

  • Performance testing on .xap files...

    - by Radhi
    Hi all, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please point me to them, and if there are any other tools available for this, please tell me. In my project we have found that when we log into the Silverlight 4.0 application, the screen takes 5 seconds to load, so I have to check which method is taking the time. In our project there are services which call other services too, and we have used CAL, so I need to identify the bottleneck... please help.

  • Do isolation frameworks (Moq, RhinoMock, etc.) lead to test over-specification?

    - by Marius
    In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented. So what do you think is the best approach to counter this?

    1) Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable and also of specifying more than necessary to make your test pass.
    2) Favor manual, fully implemented fakes.
    3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
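
    The trade-off is easier to see in code. A hedged sketch (Python's unittest.mock, with made-up names such as FakePriceService and total) contrasting an interaction-specified stub with an output-focused fake:

        from unittest.mock import Mock, call

        def total(prices, skus):
            return sum(prices.get_price(s) for s in skus)

        # Over-specified: pins the exact call sequence, so the test breaks if
        # total() later batches or caches lookups even though its result is unchanged.
        prices = Mock()
        prices.get_price.return_value = 5
        assert total(prices, ["a", "b"]) == 10
        prices.get_price.assert_has_calls([call("a"), call("b")])

        # Output-focused: a small hand-rolled fake that honours the whole contract,
        # so only a change in observable behaviour fails the test.
        class FakePriceService:
            def get_price(self, sku):
                return {"a": 3, "b": 7}.get(sku, 0)

        assert total(FakePriceService(), ["a", "b"]) == 10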

  • Python unittest with expensive setup

    - by Staale
    My test file is basically:

        import unittest

        class Test(unittest.TestCase):
            def testOk(self):
                pass

        if __name__ == "__main__":
            expensiveSetup()
            try:
                unittest.main()
            finally:
                cleanUp()

    However, I wish to run my tests through Netbeans' testing tools, and to do that I need unit tests that don't rely on an environment set up in main. Looking at http://stackoverflow.com/questions/402483/caching-result-of-setup-using-python-unittest - it recommends using Nose. However, I don't think Netbeans supports this; I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed. How can I do the setup and cleanup once for all the tests in my TestSuite? The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple XML-RPC server. I also have 2 test classes, one testing locally and one testing all methods over XML-RPC.
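
    One option that stays inside the standard unittest framework (Python 2.7+ / unittest2), so it works regardless of which runner collects the tests, is module- and class-level fixtures. A sketch; the helper functions are placeholders for the dummy-file and XML-RPC setup described above:

        import unittest

        def setUpModule():
            # Runs once before any test in this module, no matter how the
            # tests are launched (IDE runner, unittest.main, etc.).
            global server
            server = start_dummy_xmlrpc_server()   # hypothetical helper

        def tearDownModule():
            server.shutdown()

        class TestLocal(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                cls.files = create_dummy_files()   # hypothetical helper

            @classmethod
            def tearDownClass(cls):
                remove_dummy_files(cls.files)      # hypothetical helper

            def testOk(self):
                pass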

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 was, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems a little obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.
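
    The same trade-off exists outside of the C# API: an "any elements?" check can stop after pulling a single item, whereas a length check needs the whole sequence to be materialised. A rough Python illustration of that difference (an analogy, not the .NET implementation):

        import itertools

        def numbers():
            # A lazy, potentially unbounded sequence (here: infinite).
            return itertools.count()

        # "Any"-style check: pulls at most one element and stops.
        has_elements = next(iter(numbers()), None) is not None
        print(has_elements)  # True

        # "Length"-style check would have to realise the sequence first:
        # len(list(numbers())) > 0   # never terminates on this input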

  • Multiple arrangements/asserts per unit test?

    - by lance
    A group of us (.NET developers) are talking unit testing. Not any one framework (we've hit on MSpec, NUnit, MSTest, RhinoMocks, TypeMock, etc.); we're just talking generally. We see lots of syntax that forces a distinct unit test per scenario, but we don't see an avenue for reusing one unit test with various inputs or scenarios. Also, we don't see an avenue for multiple asserts in a given test without an early assert's failure threatening the testing of later asserts (in the same test). Is there anything like that happening in .NET unit testing (state- or behavior-based) today?
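
    Parameterised test support (NUnit's [TestCase] attributes, for example) covers the reuse-with-inputs part. For a concrete picture of both ideas together, here is the shape in Python's unittest (subTest, 3.4+): one test body reused over several inputs, where a failing case does not stop the remaining cases or asserts.

        import unittest

        class ParseIntTests(unittest.TestCase):
            def test_parse(self):
                cases = [("1", 1), ("  42 ", 42), ("-7", -7)]
                for text, expected in cases:
                    # Each subTest is reported separately; a failure in one
                    # case does not abort the rest of the loop.
                    with self.subTest(text=text):
                        value = int(text)
                        self.assertEqual(value, expected)
                        self.assertIsInstance(value, int)

        if __name__ == "__main__":
            unittest.main()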

  • Grails Services / Transactions / RuntimeException / Testing

    - by Rob
    I'm testing some code in a service, with transactional set to true, which talks to a customer-supplied web service. The main part of it looks like:

        class BarcodeService {
            // ... some stuff ...
            try {
                cancelBarCodeResponse = cancelBarCode(cancelBarcodeRequest)
            } catch (myCommsException e) {
                throw new RuntimeException(e)
            }
            // ...

    where myCommsException extends Exception. I have a test which looks like:

        // As there is no connection from my machine, it should fail
        shouldFailWithCause(RuntimeException) {
            barcodeServices.cancelBarcodeDetails()
        }

    The test fails because it's catching a myCommsException rather than the RuntimeException I thought I'd converted it to. Anyone care to point out what I'm doing wrong? Also, will the fact that it's not a RuntimeException mean that any transaction-related work done before my try/catch will actually be written out rather than thrown away? Thanks

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word processing application, should the test read:

        1. Using the mouse, open the file menu
        2. Choose "Open File..." in the file menu
        3. In the open file dialog that appears, navigate to x and double-click the document called y

    or:

        1. Bring up the file open dialog
        2. Open the file y

    Now I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here: if the test steps are too specific, do we risk a) making the testing process laborious and tedious and, more importantly, b) missing something because we wrote down too specific a path to achieve a goal? Alternatively, if we make it broad, do we depend too much on the whims of the tester at the time and lose crucial testing of paths that are more common to customers/clients?

  • Testing an in-app purchase

    - by hemant
    I developed an application with in-app purchases. When the user buys the subscription it gets stored on my server. After testing it a few times I deleted the data from the server to test it again, but when I buy it the sandbox environment says "You've already purchased this. Tap OK to download it again for free." I have also used this test account on my previous application; does that mean I will have to create a new test account for this application? Also, by mistake I used this account on the App Store, and I read somewhere that doing this will make your test account invalid. Is that true? Should I create a new account for it?

  • What's a unit test? [closed]

    - by Tyler
    Possible Duplicates: What is unit testing and how do you do it? What is unit testing? I recognize that to 95% of you, this is a very WTF question. So. What's a unit test? I understand that essentially you're attempting to isolate atomic functionality but how do you test for that? When is it necessary? When is it ridiculous? Can you give an example? (Preferably in C? I mostly hear about it from Java devs on this site so maybe this is specific to Object Oriented languages? I really don't know.) I know many programmers swear by unit testing religiously. What's it all about? EDIT: Also, what's the ratio of time you typically spend writing unit tests to time spent writing new code?
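
    For a concrete picture (in Python rather than the requested C, but the shape is the same in any xUnit-style framework): a unit test exercises one small piece of code in isolation and asserts on its observable result, so a regression in that piece is caught the moment the suite runs.

        import unittest

        def celsius_to_fahrenheit(c):
            return c * 9.0 / 5.0 + 32

        class TestConversion(unittest.TestCase):
            # Each test checks one tiny, isolated behaviour of one function.
            def test_freezing_point(self):
                self.assertEqual(celsius_to_fahrenheit(0), 32)

            def test_boiling_point(self):
                self.assertEqual(celsius_to_fahrenheit(100), 212)

        if __name__ == "__main__":
            unittest.main()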

  • C#: How to unit test a method that relies on another method within the same class?

    - by michael paul
    I have a class similar to the following:

        public class MyProxy : ClientBase<IService>, IService
        {
            public MyProxy(String endpointConfiguration) :
                base(endpointConfiguration)
            {
            }

            public int DoSomething(int x)
            {
                int result = DoSomethingToX(x); // This passes unit testing
                int result2 = ((IService)this).DoWork(x);
                // do I have to extract this part into a separate method just
                // to test it even though it's only a couple of lines?
                // Do something on result2
                int result3 = result2 ...
                return result3;
            }

            int IService.DoWork(int x)
            {
                return base.Channel.DoWork(x);
            }
        }

    The problem lies in the fact that when testing I don't know how to mock the result2 item without extracting the part that gets result3 using result2 into a separate method. And, because it is unit testing, I don't want to go so deep as to test what result2 comes back as... I'd rather mock the data somehow, like being able to call the function and replace just that one call.
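
    The general technique being reached for here is a partial mock: keep the object under test real and replace only the one member it calls internally. As an illustration of the idea only (Python's unittest.mock rather than the C#/WCF stack in the question, with simplified method names):

        from unittest.mock import patch

        class Proxy:
            def do_work(self, x):
                # Stands in for the call that goes over the wire
                # (base.Channel.DoWork in the question).
                raise RuntimeError("no network in unit tests")

            def do_something(self, x):
                result2 = self.do_work(x)
                return result2 * 3

        def test_do_something_uses_do_work_result():
            proxy = Proxy()
            # Replace only do_work; do_something stays real and is what we test.
            with patch.object(proxy, "do_work", return_value=7) as fake:
                assert proxy.do_something(2) == 21
                fake.assert_called_once_with(2)

        test_do_something_uses_do_work_result()

    In C#, the analogous moves are overriding a virtual member in a test subclass or using a mocking framework's partial-mock support.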

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first off: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (I don't know, maybe just packet sniffing?) and then replay them. However, while I have the basic idea (record packets on port 80 and send them again), I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!
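
    One way to make the transactions measurable is to replay them one at a time, each on its own connection, and time the request/response round trip individually. A rough sketch of that replay half (plain sockets, assuming each captured entry is one complete HTTP request with Connection: close so the end of the response is unambiguous):

        import socket
        import time

        def replay(host, port, recorded_requests):
            """Replay captured raw HTTP requests one by one and
            return the round-trip time of each transaction."""
            timings = []
            for raw_request in recorded_requests:
                start = time.perf_counter()
                with socket.create_connection((host, port), timeout=10) as conn:
                    conn.sendall(raw_request)
                    # Read until the server closes the connection.
                    while conn.recv(4096):
                        pass
                timings.append(time.perf_counter() - start)
            return timings

        # Example capture (hypothetical): one request per entry.
        requests = [b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"]
        print(replay("example.com", 80, requests))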

  • Selenium testing with checksums (md5)

    - by Peter
    I am new to Selenium testing and am writing a bunch of tests for a web page that relies heavily on JavaScript user interaction. At first I wrote a lot of assertions of the style "if I press button A, then assert number of visible rows = x, assert the checked checkboxes are such-and-such, assert title = bar ... [20 more]" and so on. Then I switched to checksumming the HTML using MD5: "if I press button A, then assert md5(html) = 8548bccac94e35d9836f1fec0da8115c". And it made my life a whole lot easier... But is this a bad practice in any way?
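
    Mechanically it works; the usual caution is that a checksum fails on any change at all (timestamps, ads, attribute reordering) and the failure tells you nothing about what changed. One middle ground, sketched here with the Python Selenium bindings and hashlib (the selector and expected hash are placeholders), is to hash only the region under test:

        import hashlib

        from selenium.webdriver.common.by import By

        def md5_of(html):
            return hashlib.md5(html.encode("utf-8")).hexdigest()

        def assert_region_unchanged(driver, css_selector, expected_md5):
            # Hash just the widget being exercised, not the whole page,
            # so unrelated page changes don't break the assertion.
            element = driver.find_element(By.CSS_SELECTOR, css_selector)
            actual = md5_of(element.get_attribute("outerHTML"))
            assert actual == expected_md5, "region changed, md5 is now " + actual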

  • What to use to write tests? [Rails]

    - by yuval
    I asked a question about different testing frameworks yesterday; it can be found here. Now that I have a better understanding of the different frameworks, I have a very simple question: with a basic understanding of, but very limited experience writing tests with, Rails' built-in testing framework (basic assertions), would it be okay for me to jump directly to testing with RSpec, Webrat, and Cucumber? Thank you! As a side note: yes, this is an opinion-based question, but I feel that the input received is valuable enough to the community to keep this question open. Thanks.

  • Tests that are 2-3 times bigger than the testable code

    - by HeavyWave
    Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing I usually have 2-3 lines in the unit test. Which ultimately leads to tons of time being spent just typing the tests in (mock, mock and mock more). Where are the time savings? Do you ever avoid tests for code that is along the lines of being trivial? Most of my methods are less than 10 lines long and testing each one of them takes a lot of time, to the point where, as you see, I start questioning writing most of the tests in the first place. I am not advocating not unit testing, I like it. Just want to see what factors people consider before writing tests. They come at a cost (in terms of time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?

  • Perfmon: which counter identifies that threads are waiting?

    - by frankadelic
    While load testing an ASP.NET app, we find that the pages are taking 20-30 sec under heavy load. We suspect this is because the pages are waiting for database calls or web services. Is there a particular perfmon counter that can identify this sort of bottleneck on the web servers? CPU, Memory, and Disk are normal. Or must we use a tool other than perfmon to track down this bottleneck?

  • How to: mirror a staging server from a production server

    - by Zombies
    We want to mirror our current production app server (Oracle Application Server) onto our staging server. As it stands right now, various things are out of sync, and what may work in testing/QA can easily fail in production because of settings/patch/etc inconsistencies. I was thinking what would be best is to clone the entire disk daily and push it onto the staging server... Would this be the best method...? (note: these are all windows servers)

  • Generate a limited amount of random network traffic between 2 hosts

    - by Andrew S
    I'm trying to find a utility that will allow me to generate a constant flow of random network traffic at a specified rate between 2 hosts. The utility needs to run on Windows and OSX. I've tried iperf but it seems to be more oriented toward short-term testing/statistics and it really taxes the CPU even at slower rates. I want something that will generate traffic for a few weeks at say 10Mbps while I use other tools to monitor the impact of that level of traffic on the network.
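
    If nothing off the shelf fits, a throttled sender is small enough to write directly. A rough sketch (Python, plain TCP, the peer address and rate are placeholders) that pushes random bytes at roughly 10 Mbit/s to a listener on the other host (e.g. nc -l 5001 > /dev/null):

        import os
        import socket
        import time

        TARGET_BITS_PER_SEC = 10 * 1000 * 1000          # ~10 Mbps
        CHUNK = 64 * 1024                               # bytes per send
        PEER = ("192.0.2.10", 5001)                     # hypothetical receiver

        def send_random_traffic():
            seconds_per_chunk = (CHUNK * 8.0) / TARGET_BITS_PER_SEC
            with socket.create_connection(PEER) as conn:
                while True:
                    start = time.perf_counter()
                    conn.sendall(os.urandom(CHUNK))     # random payload
                    elapsed = time.perf_counter() - start
                    if elapsed < seconds_per_chunk:
                        time.sleep(seconds_per_chunk - elapsed)

        if __name__ == "__main__":
            send_random_traffic()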

  • Bind can only work for the DNS server inside zone

    - by Bob
    I got a big problem when I added a new zone to my current BIND configuration.

    ===============/etc/named.conf===============

        include "/etc/rndc.key";

        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndckey"; };
        };

        acl "trusted" {
            127.0.0.1;
            208.43.81.157;
            69.4.236.88;
        };

        options {
            directory "/var/named";
            allow-query { any; };
            recursion yes;
            allow-recursion { trusted; };
        };

        zone "." {
            type hint;
            file "root.hints";
        };

        zone "2comu.com" {
            type master;
            file "2comu.com.db";
            allow-update { none; };
        };

        zone "usa-diamond.com" {
            type master;
            file "usa-diamond.com.db";
            allow-update { none; };
        };

    ===============/var/named/2comu.com.db===============

        $TTL 86400
        @                 IN  SOA  ns1.2comu.com. root.2comu.com. (
                                   2011011101
                                   3600
                                   300
                                   3600000
                                   3600 )
                          IN  NS   ns1.2comu.com.
                          IN  NS   ns2.2comu.com.
                          IN  MX   10 email.2comu.com.
        ns1.2comu.com.    IN  A    208.43.81.157
        ns2.2comu.com.    IN  A    69.4.236.88
        www.2comu.com.    IN  A    208.43.81.157
        ftp.2comu.com.    IN  A    208.43.81.157
        email.2comu.com.  IN  A    208.43.81.157

    ===============/var/named/usa-diamond.com===============

        $TTL 86400
        @                     IN  SOA  ns1.2comu.com. root.usa-diamond.com. (
                                       2011011115
                                       3600
                                       300
                                       3600000
                                       3600 )
                              IN  NS   ns1.2comu.com.
                              IN  NS   ns2.2comu.com.
        www.usa-diamond.com.  IN  A    208.43.81.157

    All of the configuration inside the 2comu.com domain works well, but www.usa-diamond.com doesn't work at all. When I tried "dig +trace www.usa-diamond.com", I got the following output:

        ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> +trace usa-diamond.com
        ;; global options:  printcmd
        .                 517603  IN  NS  c.root-servers.net.
        .                 517603  IN  NS  d.root-servers.net.
        .                 517603  IN  NS  e.root-servers.net.
        .                 517603  IN  NS  f.root-servers.net.
        .                 517603  IN  NS  g.root-servers.net.
        .                 517603  IN  NS  h.root-servers.net.
        .                 517603  IN  NS  i.root-servers.net.
        .                 517603  IN  NS  j.root-servers.net.
        .                 517603  IN  NS  k.root-servers.net.
        .                 517603  IN  NS  l.root-servers.net.
        .                 517603  IN  NS  m.root-servers.net.
        .                 517603  IN  NS  a.root-servers.net.
        .                 517603  IN  NS  b.root-servers.net.
        ;; Received 500 bytes from 208.43.81.157#53(208.43.81.157) in 0 ms

        com.              172800  IN  NS  j.gtld-servers.net.
        com.              172800  IN  NS  d.gtld-servers.net.
        com.              172800  IN  NS  e.gtld-servers.net.
        com.              172800  IN  NS  i.gtld-servers.net.
        com.              172800  IN  NS  f.gtld-servers.net.
        com.              172800  IN  NS  m.gtld-servers.net.
        com.              172800  IN  NS  b.gtld-servers.net.
        com.              172800  IN  NS  k.gtld-servers.net.
        com.              172800  IN  NS  l.gtld-servers.net.
        com.              172800  IN  NS  c.gtld-servers.net.
        com.              172800  IN  NS  h.gtld-servers.net.
        com.              172800  IN  NS  a.gtld-servers.net.
        com.              172800  IN  NS  g.gtld-servers.net.
        ;; Received 505 bytes from 192.33.4.12#53(c.root-servers.net) in 3 ms

        usa-diamond.com.  172800  IN  NS  ns1.2comu.com.
        usa-diamond.com.  172800  IN  NS  ns2.2comu.com.
        ;; Received 107 bytes from 192.48.79.30#53(j.gtld-servers.net) in 177 ms

        ;; Received 33 bytes from 208.43.81.157#53(ns1.2comu.com) in 0 ms

    It seems I can't get any answer from ns1.2comu.com. Can anyone give some suggestions? Thanks a lot. Bob
