Search Results

Search found 44056 results on 1763 pages for 'test case'.


  • Can I use this power supply + case combination without causing problems?

    - by evan
    I am putting together a computer with an Antec P180 case and a Thermaltake TR2 RX 650 W power supply. The problem is that the Antec P180 case has a separate compartment for the power supply, with an opening for the on/off switch and AC connector to one side, a wall with a small hole for routing cables on top, a wall on the bottom, and on the other side a fan which pushes air from the hard drive compartment into the power supply compartment. I think the design of the case assumes the power supply's fan is on the side next to the on/off switch, but the fan on the power supply I have is on top, which makes me worry about overheating the power supply. There is about half an inch between the top of the power supply and the wall, and the other fan should keep air flowing to push out the air that the power supply pushes upwards. Do you think this setup should work, or should I go get another power supply? Thanks!! PS: This computer will be running an Ubuntu server, so it will always be on, but the rest of the components shouldn't be generating as much heat as they would on, say, a gaming machine.

    Read the article

  • Java: If vs. Switch

    - by _ande_turner_
    I have a piece of code with a) which I replaced with b) purely for legibility ... a) if ( WORD[ INDEX ] == 'A' ) branch = BRANCH.A; /* B through to Y */ if ( WORD[ INDEX ] == 'Z' ) branch = BRANCH.Z; b) switch ( WORD[ INDEX ] ) { case 'A' : branch = BRANCH.A; break; /* B through to Y */ case 'Z' : branch = BRANCH.Z; break; } ... will the switch version cascade through all the permutations or jump to a case ? EDIT: Some of the answers below regard alternative approaches to the approach above. I have included the following to provide context for its use. The reason I asked, the Question above, was because the speed of adding words empirically improved. This isn't production code by any means, and was hacked together quickly as a PoC. The following seems to be a confirmation of failure for a thought experiment. I may need a much bigger corpus of words than the one I am currently using though. The failure arises from the fact I did not account for the null references still requiring memory. ( doh ! ) public class Dictionary { private static Dictionary ROOT; private boolean terminus; private Dictionary A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z; private static Dictionary instantiate( final Dictionary DICTIONARY ) { return ( DICTIONARY == null ) ? new Dictionary() : DICTIONARY; } private Dictionary() { this.terminus = false; this.A = this.B = this.C = this.D = this.E = this.F = this.G = this.H = this.I = this.J = this.K = this.L = this.M = this.N = this.O = this.P = this.Q = this.R = this.S = this.T = this.U = this.V = this.W = this.X = this.Y = this.Z = null; } public static void add( final String...STRINGS ) { Dictionary.ROOT = Dictionary.instantiate( Dictionary.ROOT ); for ( final String STRING : STRINGS ) Dictionary.add( STRING.toUpperCase().toCharArray(), Dictionary.ROOT , 0, STRING.length() - 1 ); } private static void add( final char[] WORD, final Dictionary BRANCH, final int INDEX, final int INDEX_LIMIT ) { Dictionary branch = null; switch ( WORD[ INDEX ] ) { case 'A' : branch = BRANCH.A = Dictionary.instantiate( BRANCH.A ); break; case 'B' : branch = BRANCH.B = Dictionary.instantiate( BRANCH.B ); break; case 'C' : branch = BRANCH.C = Dictionary.instantiate( BRANCH.C ); break; case 'D' : branch = BRANCH.D = Dictionary.instantiate( BRANCH.D ); break; case 'E' : branch = BRANCH.E = Dictionary.instantiate( BRANCH.E ); break; case 'F' : branch = BRANCH.F = Dictionary.instantiate( BRANCH.F ); break; case 'G' : branch = BRANCH.G = Dictionary.instantiate( BRANCH.G ); break; case 'H' : branch = BRANCH.H = Dictionary.instantiate( BRANCH.H ); break; case 'I' : branch = BRANCH.I = Dictionary.instantiate( BRANCH.I ); break; case 'J' : branch = BRANCH.J = Dictionary.instantiate( BRANCH.J ); break; case 'K' : branch = BRANCH.K = Dictionary.instantiate( BRANCH.K ); break; case 'L' : branch = BRANCH.L = Dictionary.instantiate( BRANCH.L ); break; case 'M' : branch = BRANCH.M = Dictionary.instantiate( BRANCH.M ); break; case 'N' : branch = BRANCH.N = Dictionary.instantiate( BRANCH.N ); break; case 'O' : branch = BRANCH.O = Dictionary.instantiate( BRANCH.O ); break; case 'P' : branch = BRANCH.P = Dictionary.instantiate( BRANCH.P ); break; case 'Q' : branch = BRANCH.Q = Dictionary.instantiate( BRANCH.Q ); break; case 'R' : branch = BRANCH.R = Dictionary.instantiate( BRANCH.R ); break; case 'S' : branch = BRANCH.S = Dictionary.instantiate( BRANCH.S ); break; case 'T' : branch = BRANCH.T = Dictionary.instantiate( BRANCH.T ); break; case 'U' : branch = BRANCH.U = 
Dictionary.instantiate( BRANCH.U ); break; case 'V' : branch = BRANCH.V = Dictionary.instantiate( BRANCH.V ); break; case 'W' : branch = BRANCH.W = Dictionary.instantiate( BRANCH.W ); break; case 'X' : branch = BRANCH.X = Dictionary.instantiate( BRANCH.X ); break; case 'Y' : branch = BRANCH.Y = Dictionary.instantiate( BRANCH.Y ); break; case 'Z' : branch = BRANCH.Z = Dictionary.instantiate( BRANCH.Z ); break; } if ( INDEX == INDEX_LIMIT ) branch.terminus = true; else Dictionary.add( WORD, branch, INDEX + 1, INDEX_LIMIT ); } public static boolean is( final String STRING ) { Dictionary.ROOT = Dictionary.instantiate( Dictionary.ROOT ); return Dictionary.is( STRING.toUpperCase().toCharArray(), Dictionary.ROOT, 0, STRING.length() - 1 ); } private static boolean is( final char[] WORD, final Dictionary BRANCH, final int INDEX, final int INDEX_LIMIT ) { Dictionary branch = null; switch ( WORD[ INDEX ] ) { case 'A' : branch = BRANCH.A; break; case 'B' : branch = BRANCH.B; break; case 'C' : branch = BRANCH.C; break; case 'D' : branch = BRANCH.D; break; case 'E' : branch = BRANCH.E; break; case 'F' : branch = BRANCH.F; break; case 'G' : branch = BRANCH.G; break; case 'H' : branch = BRANCH.H; break; case 'I' : branch = BRANCH.I; break; case 'J' : branch = BRANCH.J; break; case 'K' : branch = BRANCH.K; break; case 'L' : branch = BRANCH.L; break; case 'M' : branch = BRANCH.M; break; case 'N' : branch = BRANCH.N; break; case 'O' : branch = BRANCH.O; break; case 'P' : branch = BRANCH.P; break; case 'Q' : branch = BRANCH.Q; break; case 'R' : branch = BRANCH.R; break; case 'S' : branch = BRANCH.S; break; case 'T' : branch = BRANCH.T; break; case 'U' : branch = BRANCH.U; break; case 'V' : branch = BRANCH.V; break; case 'W' : branch = BRANCH.W; break; case 'X' : branch = BRANCH.X; break; case 'Y' : branch = BRANCH.Y; break; case 'Z' : branch = BRANCH.Z; break; } if ( branch == null ) return false; if ( INDEX == INDEX_LIMIT ) return branch.terminus; else return Dictionary.is( WORD, branch, INDEX + 1, INDEX_LIMIT ); } }
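    To the direct question: with a break at the end of every case, a Java switch does not cascade through the labels; the compiler typically emits a tableswitch or lookupswitch, so execution jumps straight to the matching case (or the default) rather than testing each label in turn. As a side note, here is a rough sketch of how the 26 per-letter fields and the switch could collapse into an array-indexed trie node; this is an illustration only, not the code above, and it assumes words contain only the letters A-Z:

        // Minimal array-indexed trie node; assumes input words contain only A-Z.
        public class LetterTrie {
            private boolean terminus;
            private final LetterTrie[] children = new LetterTrie[26];

            public void add(final String word) {
                LetterTrie node = this;
                for (final char c : word.toUpperCase().toCharArray()) {
                    final int i = c - 'A';          // direct index instead of a 26-way switch
                    if (node.children[i] == null) {
                        node.children[i] = new LetterTrie();
                    }
                    node = node.children[i];
                }
                node.terminus = true;               // mark the end of a stored word
            }

            public boolean contains(final String word) {
                LetterTrie node = this;
                for (final char c : word.toUpperCase().toCharArray()) {
                    node = node.children[c - 'A'];
                    if (node == null) {
                        return false;               // branch never created: word not present
                    }
                }
                return node.terminus;
            }
        }

    Whichever branch-selection construct is used, the memory observation in the edit still holds: every node carries 26 child references whether or not they are populated, and the array form simply makes that cost explicit.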

    Read the article

  • SELECT..CASE - Refactor T-SQL

    - by Nev_Rahd
    Hello. Can I refactor the SQL CASE statements below into a single expression for each case? SELECT CASE RDV.DOMAIN_CODE WHEN 'L' THEN CN.FAMILY_NAME ELSE NULL END AS [LEGAL_FAMILY_NAME], CASE RDV.DOMAIN_CODE WHEN 'L' THEN CN.GIVEN_NAME ELSE NULL END AS [LEGAL_GIVEN_NAME], CASE RDV.DOMAIN_CODE WHEN 'L' THEN CN.MIDDLE_NAMES ELSE NULL END AS [LEGAL_MIDDLE_NAMES], CASE RDV.DOMAIN_CODE WHEN 'L' THEN CN.NAME_TITLE ELSE NULL END AS [LEGAL_NAME_TITLE], CASE RDV.DOMAIN_CODE WHEN 'P' THEN CN.FAMILY_NAME ELSE NULL END AS [PREFERRED_FAMILY_NAME], CASE RDV.DOMAIN_CODE WHEN 'P' THEN CN.GIVEN_NAME ELSE NULL END AS [PREFERRED_GIVEN_NAME], CASE RDV.DOMAIN_CODE WHEN 'P' THEN CN.MIDDLE_NAMES ELSE NULL END AS [PREFERRED_MIDDLE_NAMES], CASE RDV.DOMAIN_CODE WHEN 'P' THEN CN.NAME_TITLE ELSE NULL END AS [PREFERRED_NAME_TITLE] FROM dbo.CLIENT_NAME CN JOIN dbo.REFERENCE_DOMAIN_VALUE RDV ON CN.NAME_TYPE_CODE = RDV.DOMAIN_CODE AND RDV.REFERENCE_DOMAIN_ID = '7966'

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: -- Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ''       SELECT  @Output = @Output + Email +Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE '%_@__%.__%'       If @Output > ''         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail @Output         End   END;   Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like Test. [U]nit, [I]ntegration : i (user input) Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2 User Interaction. Running "mocha integration\2userint.js" ... So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like it, automatically? Something that reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like Test. [U]nit, [I]ntegration : i (user input) Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2 User Interaction. Running "mocha integration\2userint.js" ... So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. I have to update the batch file every time a new file is added or changed. Is there a piece of software, a script or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of the files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js

    Read the article

  • VerifyError When Running jUnit Test on Android 1.6

    - by DKnowles
    Here's what I'm trying to run on Android 1.6: package com.healthlogger.test; public class AllTests extends TestSuite { public static Test suite() { return new TestSuiteBuilder(AllTests.class).includeAllPackagesUnderHere().build(); } } and: package com.healthlogger.test; public class RecordTest extends AndroidTestCase { /** * Ensures that the constructor will not take a null data tag. */ @Test(expected=AssertionFailedError.class) public void testNullDataTagInConstructor() { Record r = new Record(null, Calendar.getInstance(), "Data"); fail("Failed to catch null data tag."); } } The main project is HealthLogger. These are run from a separate test project (HealthLoggerTest). HealthLogger and jUnit4 are in HealthLoggerTest's build path. jUnit4 is also in HealthLogger's build path. The class "Record" is located in com.healthlogger. Commenting out the "@Test..." and "Record r..." lines allows this test to run. When they are uncommented, I get a VerifyError exception. I am severely blocked by this; why is it happening? EDIT: some info from logcat after the crash: E/AndroidRuntime( 3723): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 3723): java.lang.VerifyError: com.healthlogger.test.RecordTest E/AndroidRuntime( 3723): at java.lang.Class.getDeclaredConstructors(Native Method) E/AndroidRuntime( 3723): at java.lang.Class.getConstructors(Class.java:507) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.hasValidConstructor(TestGrouping.java:226) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:215) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:211) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.select(TestGrouping.java:170) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.selectTestClasses(TestGrouping.java:160) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.testCaseClassesInPackage(TestGrouping.java:154) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.addPackagesRecursive(TestGrouping.java:115) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestSuiteBuilder.includePackages(TestSuiteBuilder.java:103) E/AndroidRuntime( 3723): at android.test.InstrumentationTestRunner.onCreate(InstrumentationTestRunner.java:321) E/AndroidRuntime( 3723): at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3848) E/AndroidRuntime( 3723): at android.app.ActivityThread.access$2800(ActivityThread.java:116) E/AndroidRuntime( 3723): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831) E/AndroidRuntime( 3723): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime( 3723): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 3723): at android.app.ActivityThread.main(ActivityThread.java:4203) E/AndroidRuntime( 3723): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 3723): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) E/AndroidRuntime( 3723): at dalvik.system.NativeStart.main(Native Method)
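    A likely culprit is the JUnit 4 annotation: Android 1.6's InstrumentationTestRunner only understands JUnit 3-style tests, so referencing JUnit 4 types such as @Test on the device is a common cause of exactly this VerifyError. Below is a hedged sketch of the same test written JUnit 3-style; it assumes, as the original @Test(expected=...) implies, that the Record constructor signals the rejection by throwing AssertionFailedError:

        package com.healthlogger.test;

        import java.util.Calendar;

        import junit.framework.AssertionFailedError;
        import android.test.AndroidTestCase;

        import com.healthlogger.Record;

        public class RecordTest extends AndroidTestCase {

            /** Ensures that the constructor will not take a null data tag. */
            public void testNullDataTagInConstructor() {
                boolean rejected = false;
                try {
                    new Record(null, Calendar.getInstance(), "Data");
                } catch (AssertionFailedError expected) {
                    rejected = true;    // the constructor's check fired, as intended
                }
                assertTrue("Failed to catch null data tag.", rejected);
            }
        }

    Keeping the test project on the JUnit 3 API that ships with the platform avoids loading JUnit 4 classes that Dalvik then fails to verify.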

    Read the article

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite (OATS), which is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite is comprised of: 1. Oracle Load Testing for scalability, performance, and load testing 2. Oracle Functional Testing for automated functional and regression testing 3. Oracle Test Manager for test process management, test execution, and defect tracking. Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

    Read the article

  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench tool, which brings sophisticated performance testing and test visualization capabilities to the WCF world. This work has been the result of the feedback received from many of our SO-Aware and Tellago customers about how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges...(read more)

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint. Developer Webinar: How to Test and Deploy Applications Faster - April 10 Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications. How ZFS and Zones create the perfect developer sandbox. What's the best way for a developer to use DTrace. How Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show." "White Paper: What's New For Application Developers Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more. - Rick Website Newsletter Facebook Twitter

    Read the article

  • Programming test for ASP.NET C# developer job - Opinions please!

    - by Indy
    Hi all, We are hiring a .NET C# developer and I have developed a technical test for the candidates to complete. They have an hour, and it has two parts: some knowledge-based questions covering ASP.NET, C# and SQL, and a small practical test. I'd appreciate feedback on the test: is it sufficient to test the programmer's ability? What would you change, if anything? Part One. What are the events fired as part of the ASP.NET Page lifecycle? What interesting things can you do at each? How does ViewState work and why is it either useful or bad? What is a common way to create web services in ASP.NET 2.0? What is the GAC? What is boxing? What is a delegate? The C# keyword 'int' maps to which .NET type? Explain the difference between a Stored Procedure and a Trigger. What is an OUTER Join? What is @@IDENTITY? Part Two: You are provided with the Northwind Database and the attached DB relationship diagram. Please create a page which provides users with the following functionality. You don't need to be too concerned with the presentation detail of the page. Select a customer from a list, and see all the orders placed by that customer. For the same customer, find all their orders which are Beverages and where the quantity is more than 5. I was aware of setting the right balance of difficulty on this, as it is an hour's test. I was able to complete the practical test in under 30 mins using SqlDataSource and the query designer in Visual Studio. With the test questions, I am looking to see how they approach it logically and whether they use the tools available. Many thanks!

    Read the article

  • C Number to Text problem with ones and tens..

    - by Joegabb
    #include<stdio.h> #include<conio.h> main() { int ones,tens,ventoteen, myloop = 0; long num2,cents2,centeens,cents1,thousands,hundreds; double num; do{ printf("Enter a number: "); scanf("%lf",&num); if(num<=10000 || num>=0) { if (num==0) { printf("\t\tZero"); } num=(num*100); num2= (long)num; thousands=num2/100000; num2=num2%100000; hundreds=num2/10000; num2=num2%10000; if ((num2>=1100) || (num2<=1900)) { tens=0; ones=0; ventoteen=num2%1000; } else { tens=num2/1000; num2=num2%1000; ones=num2/100; num2=num2%100; } if((num2>=11) && (num2<=19)) { cents1=0; cents2=0; centeens=num2%10; } else { cents1=num2/10; num2=num2%10; cents2=num2/1; } if (thousands == 1) printf("One thousand "); else if (thousands == 2) printf("Two thousand "); else if (thousands == 3) printf("Three Thousand "); else if (thousands == 4) printf("Four thousand "); else if (thousands == 5) printf("Five Thousand "); else if (thousands == 6) printf("Six thousand "); else if (thousands == 7) printf("Seven Thousand "); else if (thousands == 8) printf("Eight thousand "); else if (thousands == 9) printf("Nine Thousand "); else {} if (hundreds == 1) printf("one hundred "); else if (hundreds == 2) printf("two hundred "); else if (hundreds == 3) printf("three hundred "); else if (hundreds == 4) printf("four hundred "); else if (hundreds == 5) printf("five hundred "); else if (hundreds == 6) printf("six hundred "); else if (hundreds == 7) printf("seven hundred "); else if (hundreds == 8) printf("eight hundred "); else if (hundreds == 9) printf("nine hundred "); else {} switch(ventoteen) { case 1: printf("eleven ");break; case 2: printf("twelve ");break; case 3: printf("thirteen ");break; case 4: printf("fourteen ");break; case 5: printf("fifteen ");break; case 6: printf("sixteen ");break; case 7: printf("seventeen ");break; case 8: printf("eighteen ");break; case 9: printf("nineteen ");break; } switch(tens) { case 1: printf("ten ");break; case 2: printf("twenty ");break; case 3: printf("thirty ");break; case 4: printf("forty ");break; case 5: printf("fifty ");break; case 6: printf("sixty ");break; case 7: printf("seventy ");break; case 8: printf("eighty ");break; case 9: printf("ninety ");break; } switch(ones) { case 1: printf("one ");break; case 2: printf("two ");break; case 3: printf("three ");break; case 4: printf("four ");break; case 5: printf("five ");break; case 6: printf("six ");break; case 7: printf("seven ");break; case 8: printf("eight ");break; case 9: printf("nine ");break; } switch(cents1) { case 1: printf("and ten centavos ");break; case 2: printf("and twenty centavos ");break; case 3: printf("and thirty centavos ");break; case 4: printf("and fourty centavos ");break; case 5: printf("and fifty centavos ");break; case 6: printf("and sixty centavos ");break; case 7: printf("and seventy centavos ");break; case 8: printf("and eighty centavos ");break; case 9: printf("and ninety centavos ");break; } switch(centeens) { case 1: printf("and eleven centavos ");break; case 2: printf("and twelve centavos ");break; case 3: printf("and thirteen centavos ");break; case 4: printf("and fourteen centavos ");break; case 5: printf("and fifteen centavos ");break; case 6: printf("and sixteen centavos ");break; case 7: printf("and seventeen centavos ");break; case 8: printf("and eighteen centavos ");break; case 9: printf("and nineteen centavos ");break; } switch(cents2) { case 1: printf("and one centavos ");break; case 2: printf("and two centavos ");break; case 3: printf("and three centavos ");break; case 4: printf("and four centavos 
");break; case 5: printf("and five centavos ");break; case 6: printf("and six centavos ");break; case 7: printf("and seven centavos ");break; case 8: printf("and eight centavos ");break; case 9: printf("and nine centavos ");break; } } getch(); }while(myloop == 0); return 0; } my code is working fine but the problem is when i input 1 - 90 nothing appears but when i input 100 the output would be fine and that is "One Hundred" and so as 1000 the output would be "One Thousand". thanks for the help..

    Read the article

  • lower-case 'c' key not working in bash

    - by gavin
    This is a bit of a strange one. I'm running Ubuntu 12.04. It's been working well but today, I ran into a hell of strange phenomenon. I can no longer type a lower-case 'c' in bash. At first I thought it was a misconfiguration for the gnome terminal but I tried both a stock xterm and directly at the console (ctrl+alt+F1) and the issue was the same. I can type an upper-case C without any difficulty and I can type lower-case 'c' in any other terminal based program (vim, bash, less, etc.). The lower 'c' also works if I jump into plain old sh. I looked at all the configuration files I know of and haven't found anything incriminating in there. I suspect it's not going to be that simple anyway because if I run bash with the '--norc' option from within sh, the problem remains. I don't know what else to check. In fact, if I wanted to cause this problem on a given machine, I have no idea how it could be done. Total mystery.

    Read the article

  • App Engine Hangout - chat with an App Engine Software Engineer in Test

    We'll be chatting with Robert Schuppenies, who is an App Engine Software Engineer in Test. He'll describe a bit about what he does, and talk about/demo some App Engine test frameworks, like the testbed module.

    Read the article

  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04: sudo lxc-execute -n test /bin/bash -l debug -o outout Got error message: lxc-execute: failed to spawn 'test' cat outout: lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs) lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs) lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc) lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs) lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs) lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs) lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test' Run command: sudo lxc-checkconfig Kernel config /proc/config.gz not found, looking in other places... Found kernel config file /boot/config-2.6.38.7-1.0emulab --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup namespace: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: enabled Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig What's the problem? Thanks a lot

    Read the article

  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems: massive code duplication; no encapsulation (most private data is accessible from outside, and some of the business data is also made into singletons, so it's not just changeable from outside but from everywhere); and no business model (business data is stored in Object[] and double[][], so no OO). There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start to attack the problem to get some coverage quickly, so I'm able to show progress to management (and in fact to start earning from the safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because these tests do not test whether something is correct.
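    One quick way to show measurable progress on a code base like this is characterization tests: pin down whatever the code does today, then refactor behind that pin. A minimal JUnit 4 sketch, assuming a hypothetical legacy class PriceCalculator that works on the raw double[][] data described above (the class itself is assumed to already exist in the code base):

        import static org.junit.Assert.assertArrayEquals;

        import org.junit.Test;

        public class PriceCalculatorCharacterizationTest {

            @Test
            public void pinsCurrentBehaviourForATypicalInput() {
                double[][] input = { { 10.0, 2.0 }, { 3.5, 4.0 } };

                double[] actual = new PriceCalculator().calculate(input);

                // Expected values were captured by running the legacy code once;
                // they document current behaviour, not correctness.
                assertArrayEquals(new double[] { 20.0, 14.0 }, actual, 1e-9);
            }
        }

    Because the expected values only record existing behaviour, tests like this are cheap to write in bulk, which is what makes the coverage numbers move quickly.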

    Read the article

  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing? Or is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as my career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks. Edit: Based on the answers below, I'm providing more details about what QA I was referring to. I'm interested in Test Automation rather than simple QA involved in record and playback of scripts. So are Test Automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a Test Automation engineer's perspective.
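    For context on what is usually meant by a mock object here, a small sketch using Mockito, one common Java mocking framework; CheckoutService and PaymentGateway are made-up names purely for illustration:

        import static org.junit.Assert.assertTrue;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;
        import static org.mockito.Mockito.when;

        import org.junit.Test;

        // Hypothetical collaborator and class under test, used only to show the pattern.
        interface PaymentGateway {
            boolean charge(String account, int cents);
        }

        class CheckoutService {
            private final PaymentGateway gateway;
            CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
            boolean checkout(String account, int cents) { return gateway.charge(account, cents); }
        }

        public class CheckoutServiceTest {

            @Test
            public void chargesTheGatewayOnCheckout() {
                PaymentGateway gateway = mock(PaymentGateway.class);    // stand-in collaborator
                when(gateway.charge("acct-1", 500)).thenReturn(true);   // programmed reply

                assertTrue(new CheckoutService(gateway).checkout("acct-1", 500));
                verify(gateway).charge("acct-1", 500);                  // interaction is verified
            }
        }

    The test substitutes a collaborator, programs its reply and verifies the interaction - that is all a mock object is, whoever ends up writing it.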

    Read the article

  • TDD: Write a separate test for object initialization or relying on other tests exercising it

    - by DXM
    This seems to be the common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't be easily altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which would put an object into a valid state. Only after it is initialized/loaded, are the members in the proper state so that other methods can be exercised. So when we start writing tests, first test is "TestLoad" and all we put in there is exercising initialization logic. Then we might add one (or few) TestLoadFailureXXX tests and those are definitely valuable. Then we start writing tests to verify other behaviors but all of them require the object to be loaded. So they all start by running exactly the same code as "TestLoad". So my question: Is TestLoad even necessary? Do you take it and let other tests simply exercise the loading? Or leave it so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems like in cases of loading, this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest" and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
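    One common compromise, sketched below in JUnit for brevity (the idea carries straight over to CppUnit): keep a single explicit initialization test so a loading failure has an obvious name in the report, and move the shared loading into the fixture's setup so the other tests do not repeat it. LegacyDocument, its load() path and its query methods are hypothetical stand-ins for the Initialize/Load style class described above:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import org.junit.Before;
        import org.junit.Test;

        public class LegacyDocumentTest {

            private LegacyDocument document;

            @Before
            public void setUp() {
                // Shared loading lives in one place instead of at the top of every test.
                document = new LegacyDocument();
                document.load("fixtures/simple-case.dat");
            }

            @Test
            public void loadsTheFixtureIntoAValidState() {
                // The one explicit "TestLoad": if loading breaks, this name points at it.
                assertTrue(document.isLoaded());
            }

            @Test
            public void computesTotalsFromTheLoadedData() {
                // Relies on setUp() having loaded the document; no duplicated load code.
                assertEquals(42, document.totalEntries());
            }
        }

    If loading breaks, the whole suite still goes red, but the one test named for initialization points straight at the cause, which is most of the value the explicit TestLoad was providing.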

    Read the article

  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, one main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety issues. Suppose you have module A with a function A_function() that I want to test. This function calls a function B_function, implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function. However, the day comes when I have to implement module B with the real version of B_function. Of course the two B_functions cannot live in the same executable, so I don't know how to have a single "launcher" to test both modules. James Grenning's way out is to replace, in module A, the call to B_function with a function pointer that can point to the mock or to the real function, according to the need. However, I work in a team, and I cannot justify this decision, which would make no sense if it were not for the tests, and no one asked me explicitly to use a test-driven approach. Maybe the only way out is to generate a different executable for each module. Any smarter solution? Thank you

    Read the article

  • ORACLE UK TECHNOLOGY “TEST FEST”

    - by mseika
    ORACLE UK TECHNOLOGY “TEST FEST” Join us at the UKOUG Conference at the ICC in Birmingham and Take your OPN Implementation Specialist Exam for Free! 3-5 December 2012, ICC Birmingham (UK) Dear Oracle Partner,** As a priority partner, we are sending you advance notice of these exclusive “Technology Test Fest” free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so please register immediately to secure your place! ** We are delighted to offer you the exclusive opportunity to register and attend the Oracle UK “Technology Test Fest” being held as Part of the UKOUG Conference at the ICC in Birmingham in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012.This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge on this day. Four sessions are being run (10.00AM and 14.00PM), with just 15 places at each session – so register now to avoid disappointment! (Exams take about 1.5 hours to complete.) REGISTER - 3 December Afternoon Session - 2:00pm REGISTER - 4 December Morning Session 10:00am REGISTER - 4 December Afternoon Session - 2:00pm REGISTER - 5 December Morning Session - 10:00am Price: FREE Address:The Drawing Room Hyatt Regency Hotel Birmingham 2 Bridge Street Birmingham BI 2JZ 3 - 5 December 2012 Which Implementation Specialist Exams are available to take?Click here to see the list of exams available for you to sit for free at the Oracle UKOUG “Technology Test Fest”. The links also include the study guide for the particular exam. Please review the Specialization Guide as well. How do I register for the Oracle UK “Technology Test Fest”? Fill out the Pearson Vue profile HERE and complete it with your OPN Company ID. NB: Instructions on how to create/update the profile can be found HERE. Register for one of the 4 sessions using the registration links at the top of this page You will need to bring your own laptop with 'Windows OS' and a form of identification to be able to take any of the exams. Need Help or Advice?For more information about the tests and Get Specialized programme, please contact: [email protected] issues with your profile or any other OPN-related problems, please contact our Oracle Partner Business Centre: [email protected] or call 08705 194 194. We look forward to welcoming you to the Oracle UK “Technology Test Fest” on the 3rd- 5thDecember 2012! Book early to avoid disappointment.

    Read the article

  • Test Driven Development Code Order

    - by Bobby Kostadinov
    I am developing my first project using test-driven development. I am using Zend Framework and PHPUnit. Currently my project is at 100% code coverage, but I am not sure I understand in what order I am supposed to write my code. Am I supposed to write my test FIRST, based on what my objects are expected to do, or write my objects and then test them? I've been working on completing a controller/model and then writing a test for it, but I am not sure this is what TDD is about. Any advice? For example, I wrote my Auth plugin and my Auth controller and tested that they work properly in my browser, and then I sat down to write the tests for them, which proved that there were some logical errors in the code that did work in the browser.

    Read the article

  • If you should only have one assertion per test; how to test multiple inputs?

    - by speg
    I'm trying to build up some test cases, and have read that you should try and limit the number of assertions per test case. So my question is, what is the best way to go about testing a function w/ multiple inputs. For example, I have a function that parses a string from the user and returns the number of minutes. The string can be in the form "5w6h2d1m", where w, h, d, m correspond to the number of weeks, hours, days, and minutes. If I wanted to follow the '1 assertion per test rule' I'd have to make multiple tests for each variation of input? That seems silly so instead I just have something like: self.assertEqual(parse_date('5m'), 5) self.assertEqual(parse_date('5h'), 300) self.assertEqual(parse_date('5d') ,7200) self.assertEqual(parse_date('1d4h20m'), 1700) In the one test case. Is there a better way?
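    One middle ground is a parameterized test: each input becomes its own generated test case with a single assertion, without a separate method per string. The snippets above are Python unittest; the same idea is sketched below with JUnit 4's Parameterized runner (pytest's parametrize or unittest's subTest are the Python equivalents), with DateParser.parseDate as a hypothetical stand-in for the function above:

        import static org.junit.Assert.assertEquals;

        import java.util.Arrays;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class ParseDateTest {

            @Parameters(name = "{0} parses to {1} minutes")
            public static Iterable<Object[]> cases() {
                return Arrays.asList(new Object[][] {
                    { "5m", 5 },
                    { "5h", 300 },
                    { "5d", 7200 },
                    { "1d4h20m", 1700 },
                });
            }

            private final String input;
            private final int expectedMinutes;

            public ParseDateTest(String input, int expectedMinutes) {
                this.input = input;
                this.expectedMinutes = expectedMinutes;
            }

            @Test
            public void parsesToMinutes() {
                // One logical assertion per generated case.
                assertEquals(expectedMinutes, DateParser.parseDate(input));
            }
        }

    A failure then reports exactly which input broke, which is the practical benefit the one-assertion guideline is really after.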

    Read the article

  • Audio is working, but the speaker test doesn't work

    - by Pacquo
    I've installed Ubuntu 12.04 LTS from a minimal cd on a netbook (Asus 1001 PXD). I've installed the ubuntu-desktop package using the --no-install-recommends option. Everything works fine, except the "sound test" for headphones or analog speakers. Clicking on the "test" buttons (front left and front right) I don't hear any sound. Despite this, the audio is working properly. I've checked the audio levels with alsamixer; I have also checked that the test sounds actually exist in /usr/share/sounds/alsa. I tried an installation of Ubuntu 12.04 LTS made with the desktop-cd, and in this case the speaker test works properly. I suppose, therefore, that the problem could depend on the lack of a package, but I have not identified which one.

    Read the article
