Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • Seam unit test can't connect to JBoss4.0.5GA

    - by user240423
    Does anybody know how to connect to JBoss 4.0.5GA from a Seam 2.2.0GA unit test? I'm using Seam 2.2.0GA with the embedded JBoss to run the unit tests, and the module needs to call an old JBoss server (EJB2, because the vendor lock-in JCA has to be deployed on the old JBoss). It's a typical Seam test case, but I can't get it to connect to JBoss 4.0.5GA using the jbossall-client.jar from JBoss 4.0.5GA. So far I have found that the embedded JBoss only works against a JBoss 5.x server. Against JBoss 4.2.3GA it reports:

      [testng] java.rmi.MarshalException: Failed to communicate. Problem during marshalling/unmarshalling; nested exception is:
      [testng] java.net.SocketException: end of file
      [testng]     at org.jboss.remoting.transport.socket.SocketClientInvoker.handleException(SocketClientInvoker.java:122)
      [testng]     at org.jboss.remoting.transport.socket.MicroSocketClientInvoker.transport(MicroSocketClientInvoker.java:679)
      [testng]     at org.jboss.remoting.MicroRemoteClientInvoker.invoke(MicroRemoteClientInvoker.java:122)
      [testng]     at org.jboss.remoting.Client.invoke(Client.java:1634)
      [testng]     at org.jboss.remoting.Client.invoke(Client.java:548)

    This is the best result I could get (the JBoss 5.1.0GA jbossall-client.jar calling a JBoss 4.2.3GA server from the embedded JBoss environment). Here is the ant script:

      <testng classpathref="build.test.classpath" outputDir="${target.test-reports.dir}" haltOnfailure="true">
        <jvmarg line="-Dsun.lang.ClassLoader.allowArraySyntax=true" />
        <jvmarg line="-Dorg.jboss.j2ee.Serialization=true" />
        <!-- <jvmarg line="-Djboss.remoting.pre_2_0_compatible=true" /> -->
        <jvmarg line="-Djboss.jndi.url=jnp://localhost:1099" />
        <xmlfileset dir="src/test/java" includes="${ant.project.name}-testsuite.xml" />
      </testng>

    Can anybody help?
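
    For reference, a minimal sketch of the kind of JNDI lookup the test has to perform against the legacy server. The host, port, and the "vendor/LegacyServiceHome" binding are placeholder assumptions, not my real configuration, and the jnp client classes still have to come from a compatible client jar:

      import java.util.Properties;
      import javax.naming.Context;
      import javax.naming.InitialContext;
      import javax.naming.NamingException;

      public class LegacyJndiLookup {
          // Builds an InitialContext against a legacy JBoss 4.x JNDI service and looks up
          // a (hypothetical) EJB2 home binding.
          public static Object lookupLegacyHome() throws NamingException {
              Properties env = new Properties();
              env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
              env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
              env.put(Context.PROVIDER_URL, "jnp://localhost:1099"); // legacy server's JNDI port
              Context ctx = new InitialContext(env);
              try {
                  return ctx.lookup("vendor/LegacyServiceHome"); // placeholder JNDI name
              } finally {
                  ctx.close();
              }
          }
      }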

    Read the article

  • Javascript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures, but I am confused about the scope chain. I know that in general you want a short scope chain so that variables can be accessed quickly. Say I have the following object:

      var my_object = (function () {
          // private variables
          var a_private = 0;
          return {
              // public variables
              a_public: 1,
              // public methods
              some_public: function () {
                  debugger;
                  alert(this.a_public);
                  alert(a_private);
              }
          };
      })();

    My understanding is that inside the some_public method I can access the private variables faster than the public ones. Is this correct? My confusion comes from the scope level of this. When the code is stopped at debugger, Firebug shows the public variable inside the this keyword, and this is not inside any scope level. How fast is accessing this? Right now I am storing any this.properties in local variables to avoid accessing them multiple times. Thanks very much!

    Read the article

  • How to fire server-side methods with jQuery

    - by Nasser Hajloo
    I have a large application and I'm going to enable shortcut keys for it. I found two jQuery plug-ins (demo plug-in 1 - demo plug-in 2) that do this for me; you can find both of them in this post on Stack Overflow. My application is already complete and I'm only adding some functionality to it, so I don't want to write the existing code again. Since a shortcut is just catching a key combination, I'm wondering how I can call the server methods that a shortcut key should fire. So how do I use either of these plug-ins to call the methods I'd written before? In other words, how do I fire server-side methods with jQuery? You can also find a good article here, by Dave Ward. Update: here is the scenario. When the user presses CTRL+Del, GridView1_OnDeleteCommand should fire, so I have this:

      protected void grdDocumentRows_DeleteCommand(object source, System.Web.UI.WebControls.DataGridCommandEventArgs e)
      {
          try
          {
              DeleteRow(grdDocumentRows.DataKeys[e.Item.ItemIndex].ToString());
              clearControls();
              cmdSaveTrans.Text = Hajloo.Portal.Common.Constants.Accounting.Documents.InsertClickText;
              btnDelete.Visible = false;
              grdDocumentRows.EditItemIndex = -1;
              BindGrid();
          }
          catch (Exception ex)
          {
              Page.AddMessage(GetLocalResourceObject("AProblemAccuredTryAgain").ToString(), MessageControl.TypeEnum.Error);
          }
      }

      private void BindGrid()
      {
          RefreshPage();
          grdDocumentRows.DataSource = ((DataSet)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.TRANSACTIONS_TABLE];
          grdDocumentRows.DataBind();
      }

      private void RefreshPage()
      {
          Creditors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_CREDITORS_SUM_FIELD];
          Debtors = (decimal)((AccDocument)Session[Hajloo.Portal.Common.Constants.Accounting.Session.AccDocument]).Tables[AccDocument.ACCDOCUMENT_TABLE].Rows[0][AccDocument.ACCDOCUMENT_DEBTORS_SUM_FIELD];

          if ((Creditors - Debtors) != 0)
              labBalance.InnerText = GetLocalResourceObject("Differentiate").ToString() + "?" + (Creditors - Debtors).ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF) + "?";
          else
              labBalance.InnerText = GetLocalResourceObject("Balance").ToString();

          lblSumDebit.Text = Debtors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF);
          lblSumCredit.Text = Creditors.ToString(Hajloo.Portal.Common.Constants.Common.Documents.CF);

          if (grdDocumentRows.EditItemIndex == -1)
              clearControls();
      }

    The other scenarios are the same. How can I enable shortcuts for this kind of code (which uses Session, NHibernate, etc.)?

    Read the article

  • How can I test that my hash function is good in terms of max-load?

    - by philcolbourn
    I have read through various papers on the 'Balls and Bins' problem, and it seems that if a hash function is working right (i.e. it is effectively a random distribution) then the following should/must be true if I hash n values into a hash table with n slots (or bins):

      - The probability that a bin is empty, for large n, is 1/e.
      - The expected number of empty bins is n/e.
      - The probability that a bin has k collisions is <= 1/k!.
      - The probability that a bin has at least k collisions is <= (e/k)**k.

    These look easy to check. But the max-load test (the maximum number of collisions with high probability) is usually stated vaguely. Most texts state that the maximum number of collisions in any bin is O( ln(n) / ln(ln(n)) ). Some say it is 3*ln(n) / ln(ln(n)). Other papers mix ln and log - usually without defining them, or state that log is log base e and then use ln elsewhere. Is ln the log to base e or base 2? Is this max-load formula right, and how big should n be to run a test? This lecture seems to cover it best, but I am no mathematician: http://pages.cs.wisc.edu/~shuchi/courses/787-F07/scribe-notes/lecture07.pdf BTW, "with high probability" seems to mean 1 - 1/n.
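
    To make the question concrete, here is a minimal sketch (in Java) of the empirical check I have in mind: it hashes n random keys into n bins and compares the empty-bin fraction and the maximum load against the formulas above. The value of n, the use of String.hashCode over random UUID strings as the function under test, and the use of the natural log in the bound are assumptions for illustration:

      import java.util.UUID;

      public class MaxLoadCheck {
          public static void main(String[] args) {
              int n = 1_000_000;                 // number of balls and of bins
              int[] bins = new int[n];
              for (int i = 0; i < n; i++) {
                  int h = UUID.randomUUID().toString().hashCode();
                  bins[Math.floorMod(h, n)]++;   // map the hash to a bin index
              }
              int empty = 0, maxLoad = 0;
              for (int load : bins) {
                  if (load == 0) empty++;
                  if (load > maxLoad) maxLoad = load;
              }
              double predictedEmptyFraction = 1.0 / Math.E;                      // ~0.368
              double maxLoadBound = 3.0 * Math.log(n) / Math.log(Math.log(n));   // 3*ln(n)/ln(ln(n))
              System.out.printf("empty fraction: %.4f (predicted %.4f)%n", (double) empty / n, predictedEmptyFraction);
              System.out.printf("max load: %d (bound: %.2f)%n", maxLoad, maxLoadBound);
          }
      }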

    Read the article

  • Problem while executing test case in VS2008 test project

    - by sukumar
    Hi all. I have the following situation: I developed a test project in Visual Studio 2008 to test my target project. I was getting the following exception when I ran a test case on my PC:

      System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)
        at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
        at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
        at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
        at System.Reflection.Assembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, StackCrawlMark& stackMark)
        at System.Reflection.Assembly.LoadFrom(String assemblyFile)
        at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.GetType(UnitTestElement unitTest, String type)
        at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.ResolveMethods()

    However, the same project runs successfully on my colleague's PC. As I understand it, System.IO.FileNotFoundException occurs when DLLs are missing, so I checked with Dependency Walker to trace the missing DLLs. Dependency Walker reported the following:

      1) MFC90D.dll
      2) msvcr90d.dll
      3) msvcp90d.dll

    I copied these DLLs to C:\windows\system32 from the Microsoft Visual Studio 9.0 directory and ran Dependency Walker again. This time Dependency Walker was able to open the test project DLL with 0 errors, yet the same exception comes up when I run the test. I am stuck at this point. Can anyone tell me why it behaves differently from PC to PC? Is there anything I am still missing? Any suggestion would be helpful. Thanks in advance, Sukumar

    Read the article

  • How to test an application for correct encoding (e.g. UTF-8)

    - by Olaf
    Encoding issues are the one topic that has bitten me most often during development. Every platform insists on its own encoding, and most likely some non-UTF-8 defaults are in the game. (I usually work on Linux, defaulting to UTF-8; my colleagues mostly work on German Windows, defaulting to ISO-8859-1 or some similar Windows codepage.) I believe that UTF-8 is a suitable standard for developing an i18nable application. However, in my experience encoding bugs are usually discovered late (even though I'm located in Germany and we have some special characters that, along with ISO-8859-1, provide some detectable differences). I believe that developers with a completely non-ASCII character set (or those that know a language that uses such a character set) get a head start in providing test data. But there must be a way to ease this for the rest of us as well. What [technique|tool|incentive] are people here using? How do you get your co-developers to care about these issues? How do you test for compliance? Are those tests conducted manually or automatically? Adding one possible answer upfront: I've recently discovered fliptitle.com (they provide an easy way to get weird characters written "uʍop ǝpısdn" *) and I'm planning on using them to provide easily verifiable UTF-8 character strings (as most of the characters used there are at some weird binary encoding position), but there surely must be more systematic tests, patterns or techniques for ensuring UTF-8 compatibility/usage. Note: Even though there's an accepted answer, I'd like to know of more techniques and patterns if there are some. Please add more answers if you have more ideas. And it has not been easy choosing only one answer for acceptance. I've chosen the regexp answer for being the least expected angle to tackle the problem, although there would be reasons to choose other answers as well. Too bad only one answer can be accepted. Thank you for your input. *) that's "upside down" written upside down, for those who cannot see those characters due to font problems
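
    One automatable pattern along these lines is a round-trip test that pushes non-ASCII fixture strings through whatever encode/decode boundary the application has (database, files, HTTP) and asserts they come back unchanged. A minimal sketch in Java/JUnit 4 - the fixture strings and the storeAndLoad() boundary are placeholders, not part of any particular framework:

      import static org.junit.Assert.assertEquals;

      import org.junit.Test;

      public class EncodingRoundTripTest {
          // Strings chosen so that a wrong charset at any point produces a visible mismatch.
          private static final String[] FIXTURES = {
              "äöüß ÄÖÜ",        // German umlauts: present in ISO-8859-1, but different bytes in UTF-8
              "uʍop ǝpısdn",     // characters outside Latin-1 entirely
              "日本語テキスト",     // multi-byte CJK text
          };

          @Test
          public void valuesSurviveTheStorageBoundary() {
              for (String original : FIXTURES) {
                  String roundTripped = storeAndLoad(original);
                  assertEquals(original, roundTripped);
              }
          }

          // Placeholder for the real boundary under test (write to the database, a file,
          // or an HTTP layer and read it back). Implemented as identity so the sketch compiles.
          private String storeAndLoad(String value) {
              return value;
          }
      }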

    Read the article

  • JPA - How to truncate tables between unit tests

    - by Theo
    I want to clean up the database after every test case without rolling back the transaction. I have tried DBUnit's DatabaseOperation.DELETE_ALL, but it does not work if a deletion violates a foreign key constraint. I know that I can disable foreign key checks, but that would also disable the checks for the tests themselves (which I want to prevent). I'm using JUnit 4, JPA 2.0 (EclipseLink), and Derby's in-memory database. Any ideas? Thanks, Theo
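
    One approach that fits these constraints is to issue DELETEs in reverse foreign-key order from an @After method, so no statement ever violates a constraint. A minimal sketch - the table names, their ordering, and the "testPU" persistence unit name are assumptions that would have to match the real schema:

      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      import org.junit.After;

      public abstract class CleanDatabaseTestCase {
          // Child tables first, parent tables last, so no DELETE hits a foreign key.
          private static final String[] TABLES_IN_DELETE_ORDER = {
              "ORDER_LINE", "ORDERS", "CUSTOMER"          // hypothetical schema
          };

          protected static final EntityManagerFactory EMF =
              Persistence.createEntityManagerFactory("testPU");  // assumed persistence unit

          @After
          public void wipeDatabase() {
              EntityManager em = EMF.createEntityManager();
              try {
                  em.getTransaction().begin();
                  for (String table : TABLES_IN_DELETE_ORDER) {
                      em.createNativeQuery("DELETE FROM " + table).executeUpdate();
                  }
                  em.getTransaction().commit();
              } finally {
                  em.close();
              }
          }
      }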

    Read the article

  • PHP unit test for controller returns no errors and no success message.

    - by Mallika Iyer
    I'm using the Zend modular directory structure, i.e.:

      application/
        modules/
          users/
            controllers/
            ...
          lessons/
          reports/
          blog/

    I have a unit test for a controller in 'blog' that goes something like the section of code below. I'm definitely doing something very wrong, or missing something, because when I run the test I get no error and no success message (the one that usually goes like "...OK (2 tests, 2 assertions)"). Instead I get all the text from layout.phtml, where I have the global site layout. This is my first attempt at writing a unit test for the Zend MVC structure, so I'm probably missing something important. Here goes:

      require_once '../../../../public/index.php';
      require_once '../../../../application/Bootstrap.php';
      require_once '../../../../application/modules/blog/controllers/BrowseController.php';
      require_once '../../../TestConfiguration.php';

      class Blog_BrowseControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
      {
          public function setUp()
          {
              $this->bootstrap = array($this, 'appBootstrap');
              Blog_BrowseController::setUp();
          }

          public function appBootstrap()
          {
              require_once dirname(__FILE__) . '/../../bootstrap.php';
          }

          public function testAction()
          {
              $this->dispatch('/');
              $this->assertController('browse');
              $this->assertAction('index');
          }

          public function tearDown()
          {
              $this->resetRequest();
              $this->resetResponse();
              Blog_BrowseController::tearDown();
          }
      }

    Read the article

  • XCode missing inline test results

    - by Vegar
    Everywhere I look there are pretty pictures of failing tests shown inline in the code editor, like in Peepcode's Objective-C for Rubyists screencast and in Apple's own technical documentation. When I build my test target, all I get is a little red icon down in the right corner stating that something went wrong. When I click on it, I get the Build Results window, where I can start to hunt for the test results. Does anyone have a clue about what's wrong?

    Read the article

  • ASP.NET MVC: Is it good to access HttpContext in a controller?

    - by Zach
    I've been working with ASP.NET (WebForms) for a while, but I'm new to ASP.NET MVC. From many articles I've read, in most cases the reason that controllers are hard to test is that they access the runtime components: HttpContext (including Request, Response, ...). Accessing HttpContext in a controller seems bad. However, I must access these components somewhere: reading input from Request, sending results back via Response, and using Session to hold a few state variables. So where is the best place to access these runtime components if we don't access them in a controller? Best regards, Zach@Shine

    Read the article

  • Should tests be self-written in TDD?

    - by martin
    We are running a project which we want to develop with test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit test for a feature? Should the unit test be written by the feature-implementing programmer? Or should the unit test be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure with mini-iterations, so it would be too complex to have the tests written by another programmer. What would you say: should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?

    Read the article

  • Practical approach to concurrency control

    - by Industrial
    Hi everyone, I read this article recently and am very interested in how to take a practical approach to concurrency control on a web server. The server will run CentOS + PHP + MySQL with Memcached. How would you set it up to work? http://saasinterrupted.com/2010/02/05/high-availability-principle-concurrency-control/ Thanks!

    Read the article

  • How well do zippers perform in practice, and when should they be used?

    - by Rob
    I think that the zipper is a beautiful idea; it elegantly provides a way to walk a list or tree and make what appear to be local updates in a functional way. Asymptotically, the costs appear to be reasonable. But traversing the data structure requires memory allocation at each iteration, whereas a normal list or tree traversal is just pointer chasing. This seems expensive (please correct me if I am wrong). Are the costs prohibitive? And under what circumstances would it be reasonable to use a zipper?
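
    For concreteness, here is a minimal list-zipper sketch in Java (the representation - a reversed left context, a focus element, and the remaining right list - is the standard one; this is an illustration of where the per-step allocation happens, not a benchmark):

      // Immutable list zipper: every move or edit allocates one small object,
      // which is exactly the cost the question is asking about.
      final class Zipper<T> {
          // Immutable cons cell; null stands for the empty list.
          static final class Cons<T> {
              final T head;
              final Cons<T> tail;
              Cons(T head, Cons<T> tail) { this.head = head; this.tail = tail; }
          }

          private final Cons<T> left;   // elements before the focus, nearest first (reversed)
          private final T focus;
          private final Cons<T> right;  // elements after the focus, in order

          Zipper(Cons<T> left, T focus, Cons<T> right) {
              this.left = left;
              this.focus = focus;
              this.right = right;
          }

          T focus() { return focus; }

          Zipper<T> set(T value) {      // "local update": one new Zipper, old one untouched
              return new Zipper<>(left, value, right);
          }

          Zipper<T> moveRight() {       // one Cons allocation per step
              if (right == null) throw new IllegalStateException("already at the end");
              return new Zipper<>(new Cons<>(focus, left), right.head, right.tail);
          }

          Zipper<T> moveLeft() {
              if (left == null) throw new IllegalStateException("already at the start");
              return new Zipper<>(left.tail, left.head, new Cons<>(focus, right));
          }
      }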

    Read the article

  • What C# mocking framework to use?

    - by Corin
    I want to start using mock objects on my next C# project. After a quick Google search I've found there are many:

      - NMock
      - EasyMock.NET
      - TypeMock Isolator (commercial / paid)
      - Rhino Mocks
      - Moq

    So my question is: which one is your favourite .NET mocking framework, and why?

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round-robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

      void method_one()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  Data* current_item = channel_cursors[i];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  log_latency(current_item->tv_sec_, current_item->tv_usec_);

                  channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

      void method_two()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              if(update_cursor->state_ == NOT_FILLED_YET) {
                  continue;
              }

              ::uint32_t update_id = update_cursor->list_id_;

              Data* current_item = channel_cursors[update_id];

              if(current_item->state_ == NOT_FILLED_YET) {
                  std::cerr << "This should never print." << std::endl; // it doesn't
                  continue;
              }

              log_latency(current_item->tv_sec_, current_item->tv_usec_);

              channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
              update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
          }
      }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

      void method_three()
      {
          std::vector<Data*> channel_cursors;
          for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
          {
              Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
              channel_cursors.push_back(current_item);
          }

          UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

          while(true)
          {
              for(std::size_t i = 0; i < channel_list.size(); ++i)
              {
                  std::size_t idx = i;

                  ACQUIRE_MEMORY_BARRIER;
                  if(update_cursor->state_ != NOT_FILLED_YET)
                  {
                      //std::cerr << "Found via update" << std::endl;
                      i--;
                      idx = update_cursor->list_id_;
                      update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                  }

                  Data* current_item = channel_cursors[idx];

                  ACQUIRE_MEMORY_BARRIER;
                  if(current_item->state_ == NOT_FILLED_YET) {
                      continue;
                  }

                  found_an_update = true;

                  log_latency(current_item->tv_sec_, current_item->tv_usec_);
                  channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
              }
          }
      }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'Found via update' part, it prints between EVERY latency log message. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • Including uncovered files in Devel::Cover reports

    - by Markus
    I have a project setup like this:

      bin/fizzbuzz-game.pl
      lib/FizzBuzz.pm
      test/TestFizzBuzz.pm
      test/TestFizzBuzz.t

    When I run coverage on this, using

      perl -MDevel::Cover=-db,/tmp/cover_db test/*.t

    ...I get the following output:

      ----------------------------------- ------ ------ ------ ------ ------ ------
      File                                  stmt   bran   cond    sub   time  total
      ----------------------------------- ------ ------ ------ ------ ------ ------
      lib/FizzBuzz.pm                      100.0  100.0    n/a  100.0    1.4  100.0
      test/TestFizzBuzz.pm                 100.0    n/a    n/a  100.0   97.9  100.0
      test/TestFizzBuzz.t                  100.0    n/a    n/a  100.0    0.7  100.0
      Total                                100.0  100.0    n/a  100.0  100.0  100.0
      ----------------------------------- ------ ------ ------ ------ ------ ------

    That is: the totally-uncovered file bin/fizzbuzz-game.pl is not included in the results. How do I fix this?

    Read the article

  • Mocking attributes - C#

    - by bob
    I use custom attributes in a project and I would like to integrate them in my unit tests. Right now I use Rhino Mocks to create my mocks, but I don't see a way to add my attributes (and their parameters) to them. Did I miss something, or is it not possible? Would another mocking framework help? Or do I have to create dummy implementations with my attributes? Example: I have an interface in a plug-in architecture (IPlugin), and there is an attribute to add meta info to a property. I then look for properties with this attribute in the plugin implementation for extra processing (storing its value, marking it as GUI read-only, ...). Now, when I create a mock, can I easily add an attribute to a property or to the object instance itself? EDIT: I found a post with the same question - link. The answer there is not 100%, and it is Java... EDIT 2: It can be done... I searched some more (on SO) and found 2 related questions (+ answers) here and here. Now, is this already implemented in one or another mocking framework?

    Read the article

  • Issues with installing PHPUnit

    - by user1045696
    So I've installed PHPUnit via PEAR (all the files are there, I've checked). However, when I try to run a test I get:

      Warning: require_once(PHPUnit/Framework.php) [function.require-once]: failed to open stream: No such file or directory in C:\WAMP\www\ExampleTests\arraytest.php on line 2

    I'm guessing this has something to do with my PHPUnit installation not updating the include_path properly, but I'm not too sure what to update it to. I'm on Windows (7), using WAMP. Cheers! EDIT: The bottom of PHP.ini contains:

      ;***** Added by go-pear
      include_path=".;C:\WAMP\bin\php\php5.3.10\pear"
      ;*****

    I also get the error:

      Fatal error: require_once() [function.require]: Failed opening required 'PHPUnit/Framework.php' (include_path='.;C:\php\pear')

    However, after looking in PHP.ini, there's no include path that points to C:\php\pear?

    Read the article
