Search Results

Search found 7245 results on 290 pages for 'meta tests'.


  • No date/time shown before my page in Google search results

    - by Ruut
I know that by changing the meta description of my webpage, I can control the text that Google shows in the search results. However, I do not know how I can control the text shown just before the description, for example the date when the page was last updated. Which meta tag do I use to accomplish this? UPDATE: My webpage is automatically updated weekly, at irregular intervals, by a cronjob which makes changes to the MySQL database that holds the content of my webpages. So the question is what (meta) info to add to my page.


  • Temporary website redirect: 3xx or php/meta?

    - by Damien Pirsy
Hi, I run a (small) news website which also has a forum in a subfolder of the root. I'm planning to give the site a facelift and a code restructuring, so I want to put a redirect on the home page that points directly to the forum's index (www.mysite.com -> www.mysite.com/forum) while I tinker with it. And that, given the little free time I have, will take no less than a couple of months. Being a news site, I'm pretty sure this would affect its overall ranking, but I need to do it, so: which is the best way to redirect? I pondered and read here and there about the different means, but I couldn't figure out which is least harmful for SEO. Do I use a 302 redirect, send a "Location: newurl" header from PHP, or just put a meta refresh tag in the HTML page (or a JavaScript redirect, is that better)? Sorry, but I'm not really into these things, so I may have said something silly, I know... Thanks
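A minimal PHP sketch of the temporary-redirect option discussed in this question (the URL is a placeholder taken from the question's own example):

    <?php
    // Temporary redirect: PHP's header() sends 302 by default for Location,
    // but stating the code explicitly keeps the intent obvious.
    header('Location: http://www.mysite.com/forum', true, 302);
    exit;

A 302 tells crawlers the move is temporary, which matches the "couple of months" scenario; a meta refresh or JavaScript redirect conveys no such status code.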


  • Google showing meta descriptions from other pages in the SERPs

    - by ojek
Recently I added some content to my website and submitted a sitemap file to Google. Now that Google has indexed those pages, I have discovered that some of the words and sentences that lead to my website have their meta descriptions somehow mixed up in the listings: after I put a sentence into Google to check my website's ranking, I can see the title of page1 in the results, a link to page1, and a description from page2. Since my website is a forum, if Google mixes up the links of threads, it leads my users to a different kind of material than they were looking for. Is there anything I can do about it?


  • Length of Page Title, URL, Meta Description and total number of links on a page

    - by MJWadmin
We've been examining a number of different SEO tools recently. Several of these tell us that some of our page titles, URLs and meta descriptions are too long. We've also been told that some of our pages have too many links on them. I guess our first question is: is any of that feedback true? Can URLs etc. actually be too long, and if so, how much does this affect ranking? Secondly, can you have too many links on a page, and if so, how many is too many? Thanks in advance...


  • Have windows key (meta) trigger Gnome-do instead of Unity Dash (12.04)

    - by Jason O'Neil
    On my laptop the Unity Dash often launches really slowly. I might press the Windows button, and sometimes it will take up to 15 seconds to open, and the system becomes unresponsive during this time. This is especially likely if I haven't opened the launcher in a few hours. I have Gnome-Do installed, and I currently launch it with "Ctrl + Space", and it launches instantly and is very responsive. I would like to swap my shortcut keys, so Meta (the Windows key) launches Gnome Do, but I can't figure out where to change the keyboard shortcut for the dash. Any clues?


  • Do I need the meta-key for vim?

    - by Riyaah
I'm looking into learning emacs or vim. I started out with emacs but found the need for a meta key to be a hassle, especially since I have a non-English keyboard layout on my MacBook. So far I haven't seen any references to meta in vim, so my question is: can I live without meta in vim? If so, that'll settle the vim vs. emacs question for me; otherwise I'll just have to learn to live with some workaround.


  • Best Way to Handle Meta Information in a SQL Database

    - by danielhanly.com
I've got a database where I want to store user information and user_meta information. The reason behind setting it up this way was that the user_meta side may change over time, and I would like to be able to do this without disrupting the master user table. If possible, I would like some advice on how best to set up this meta data table. I can either set it up as below:

    +----+---------+---------+--------------------+
    | id | user_id | key     | value              |
    +----+---------+---------+--------------------+
    | 1  | 1       | email   | [email protected] |
    | 2  | 1       | name    | user name          |
    | 3  | 1       | address | test address       |
    ...

Or I can set it up as below:

    +----+---------+--------------------+-----------+--------------+
    | id | user_id | email              | name      | address      |
    +----+---------+--------------------+-----------+--------------+
    | 1  | 1       | [email protected] | user name | test address |

Obviously, the top version is more flexible, but the bottom version saves space and is perhaps more efficient, returning all the data as a single record. Which is the best way to go about this? Or am I going about this completely wrong, and is there another way I've not thought of?
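A minimal MySQL-flavoured sketch of the first (key/value) design, and of how to read it back in the flat shape of the second design (table and column names are made up for illustration):

    -- Key/value ("EAV") layout: new attributes need no schema change.
    CREATE TABLE user_meta (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        user_id    INT NOT NULL,
        meta_key   VARCHAR(64) NOT NULL,
        meta_value VARCHAR(255),
        UNIQUE KEY uq_user_meta (user_id, meta_key)
    );

    -- Pivot the rows back into one row per user.
    SELECT user_id,
           MAX(CASE WHEN meta_key = 'email'   THEN meta_value END) AS email,
           MAX(CASE WHEN meta_key = 'name'    THEN meta_value END) AS name,
           MAX(CASE WHEN meta_key = 'address' THEN meta_value END) AS address
    FROM user_meta
    GROUP BY user_id;

The pivot query is the price of the flexible layout: every "column" becomes a conditional aggregate, which is one reason fixed columns are usually preferred for attributes every user has.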


  • How do I run all my PHPUnit tests?

    - by JJ
I have a script called Script.php and tests for it in Tests/Script.php, but when I run phpunit Tests it does not execute any tests in my test file. How do I run all my tests with PHPUnit? PHPUnit 3.3.17, PHP 5.2.6-3ubuntu4.2, latest Ubuntu. Output:

    $ phpunit Tests
    PHPUnit 3.3.17 by Sebastian Bergmann.

    Time: 0 seconds

    OK (0 tests, 0 assertions)

And here are my script and test files:

Script.php

    <?php
    function returnsTrue() {
        return TRUE;
    }
    ?>

Tests/Script.php

    <?php
    require_once 'PHPUnit/Framework.php';
    require_once 'Script.php';

    class TestingOne extends PHPUnit_Framework_TestCase
    {
        public function testTrue()
        {
            $this->assertEquals(TRUE, returnsTrue());
        }

        public function testFalse()
        {
            $this->assertEquals(FALSE, returnsTrue());
        }
    }

    class TestingTwo extends PHPUnit_Framework_TestCase
    {
        public function testTrue()
        {
            $this->assertEquals(TRUE, returnsTrue());
        }

        public function testFalse()
        {
            $this->assertEquals(FALSE, returnsTrue());
        }
    }
    ?>
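One likely culprit, offered as an assumption rather than a confirmed answer: when PHPUnit of this vintage scans a directory, it only picks up files whose names end in Test.php, so Tests/Script.php is never loaded. A sketch of the rename:

    <?php
    // Tests/ScriptTest.php -- the *Test.php suffix is what lets
    // "phpunit Tests" discover this file when scanning the directory.
    require_once 'PHPUnit/Framework.php';
    require_once 'Script.php';

    class ScriptTest extends PHPUnit_Framework_TestCase
    {
        public function testReturnsTrue()
        {
            $this->assertTrue(returnsTrue());
        }
    }
    ?>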


  • Unit tests - The benefit from unit tests with contract changes?

    - by Stefan Hendriks
Recently I had an interesting discussion with a colleague about unit tests. We were discussing when maintaining unit tests becomes less productive, namely when your contracts change. Perhaps someone can enlighten me on how to approach this problem. Let me elaborate: let's say there is a class which does some nifty calculations. The contract says that it should calculate a number, or return -1 when it fails for some reason. I have contract tests that test that. And in all my other tests I stub this nifty calculator thingy. So now I change the contract: whenever it cannot calculate, it will throw a CannotCalculateException. My contract tests will fail, and I will fix them accordingly. But all my mocked/stubbed objects will still use the old contract rules. These tests will succeed, while they should not! The question that arises is: given this faith in unit testing, how much faith can be placed in such changes? The unit tests succeed, but bugs will occur when testing the application. The tests using this calculator will need to be fixed, which costs time, and it may even be stubbed/mocked in a lot of places... What do you think about this case? I had never thought about it thoroughly. In my opinion, these changes to unit tests would be acceptable. If I did not use unit tests, I would also see such bugs arise within the test phase (found by testers). Yet I am not confident enough to point out which will cost more time (or less). Any thoughts?
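A minimal C# sketch of the situation described above (the interface, the Consumer class and the Moq-based stub are illustrative assumptions, not from the original discussion):

    using Moq;
    using NUnit.Framework;

    // Old contract: Calculate() returns -1 on failure.
    // New contract: it throws CannotCalculateException instead.
    public interface ICalculator
    {
        int Calculate(string input);
    }

    public class Consumer
    {
        private readonly ICalculator calculator;

        public Consumer(ICalculator calculator)
        {
            this.calculator = calculator;
        }

        public bool Process(string input)
        {
            // Written against the old contract: -1 means "failed".
            return calculator.Calculate(input) != -1;
        }
    }

    [TestFixture]
    public class ConsumerTests
    {
        [Test]
        public void Process_returns_false_when_calculation_fails()
        {
            var calculator = new Mock<ICalculator>();

            // Stale stub: it still encodes the old -1 contract, so this
            // test stays green even after the real calculator starts
            // throwing -- exactly the gap the discussion above worries about.
            calculator.Setup(c => c.Calculate("bad input")).Returns(-1);

            Assert.IsFalse(new Consumer(calculator.Object).Process("bad input"));
        }
    }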


  • Any way to separate unit tests from integration tests in VS2008?

    - by AngryHacker
I have a project full of tests, unit and integration alike. The integration tests require a pretty large database to be present, so it's difficult to make them part of the build process simply because of the time it takes to re-initialize the database. Is there a way to somehow separate the unit tests from the integration tests and have the build server run just the unit tests? I see that there is an Ordered Test in VS2008, which allows you to pick and choose tests, but I can't make it execute alone, without all the others. Is there a trick that I am missing? Or perhaps I could adorn the unit tests with an attribute? What are some of the approaches people are using? P.S. I know I could use mocking in the integration tests (just to make them go faster), but then they wouldn't be true integration tests.


  • would it be bad to put <span> tags within the <head>, for grouping meta data in schema.org format?

    - by hdavis84
Alright, I'm currently practicing schema.org microdata and trying to find the best route for every site I build. I have found that I can piggyback itemprops on Open Graph meta tags, and I would like to piggyback more itemprops on them. However, schema.org requires you to change itemtypes to define all aspects of a "thing". Say I'm defining a LocalBusiness. Open Graph has street address, locality, and region properties I'd like to piggyback on. I'd have to do something like:

    <html lang="en" itemscope itemtype="http://schema.org/LocalBusiness">
    <head>
    ...
    <meta itemprop="name" content="Business Name" />
    <meta property="og:url" itemprop="url" content="http://example.com" />
    <meta property="og:image" itemprop="image" content="http://example.com/logo.png" />
    <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
        <meta property="og:street-address" itemprop="streetAddress" content="1234 Amazing Rd." />
        <meta property="og:locality" itemprop="addressLocality" content="Greenfield" />
        <meta property="og:region" itemprop="addressRegion" content="IN" />
    </span>
    </head>

Although there's more that could be added in, this is enough of an example to show what I'm trying to achieve. I've searched the web to see whether it is an issue to use spans in the head or not, because I don't want invalid markup. I know I can mark up the address information in the body of the pages, but the route above would be more efficient. Does anyone have an answer for this?


  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. As part of this process, our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the meta data is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). For this new site I plan to use 301 redirects for all the old URLs pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings have an effect?
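A minimal Apache sketch of the 301 mapping described above, using the about_us example from the question (assuming an .htaccess-style setup alongside the CMS's existing mod_rewrite rules):

    # Simple one-to-one mapping (mod_alias):
    Redirect permanent /about_us.html /about-us/

    # Or the mod_rewrite equivalent, next to the CMS's own rules:
    RewriteEngine On
    RewriteRule ^about_us\.html$ /about-us/ [R=301,L]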


  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
Sean Rice, Manager, Applications User Experience

A User-centered Research and Design Process

The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high-fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

What Is a CIF Usability Test?

CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations, led by the National Institute of Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

Why Do We Perform CIF Testing?

The overarching purpose of the CIF for usability test reports is to promote the incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

CIF Testing for Fusion Applications

Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole.
Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

Figure 1. A participant completes tasks during a usability test in Oracle's Usability Labs

Fusion Supplier Portal CIF Test

The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:

  • Manage payments – view payments
  • Manage invoices – view invoice status and create invoices
  • Manage account information – create new contact, review bank account information
  • Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
  • Manage purchase orders (PO) – view history of PO, request change to PO, find orders
  • Manage negotiations – respond to request for a quote, check the status of a negotiation response

These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

Figure 2. Fusion Supplier Portal

Next Studies

We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.


  • Adding unit tests to a legacy, plain C project

    - by Groo
The title says it all. My company is reusing a legacy firmware project for a microcontroller device, written completely in plain C. There are parts which are obviously wrong and need changing, and coming from a C#/TDD background I don't like the idea of randomly refactoring stuff with no tests to assure us that functionality remains unchanged. Also, I've seen hard-to-find bugs introduced on many occasions through the slightest changes (something I believe would be caught if regression testing were used). A lot of care needs to be taken to avoid these mistakes: it's hard to track a bunch of globals around the code. To summarize: How do you add unit tests to existing tightly coupled code before refactoring? What tools do you recommend? (Less important, but still nice to know.) I am not directly involved in writing this code (my responsibility is an app which will interact with the device in various ways), but it would be bad if good programming principles were left behind when there was a chance they could be used.
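A minimal sketch of a dependency-free harness that can be dropped into such a firmware project before committing to a framework like Unity or CMock (the clamp function is a made-up stand-in for code under test):

    #include <stdio.h>

    /* Hypothetical function under test. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    static int failures = 0;

    /* Minimal check macro: reports file/line and the failing expression. */
    #define CHECK(expr) \
        do { \
            if (!(expr)) { \
                printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #expr); \
                failures++; \
            } \
        } while (0)

    int main(void)
    {
        CHECK(clamp(5, 0, 10) == 5);
        CHECK(clamp(-1, 0, 10) == 0);
        CHECK(clamp(99, 0, 10) == 10);

        if (failures)
            printf("%d check(s) failed\n", failures);
        else
            printf("all checks passed\n");
        return failures != 0;
    }

Running such tests on the host compiler, with hardware registers stubbed out, is the usual first step before the globals get untangled.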


  • RSpec + Selenium tests for .NET on Windows

    - by John
I'm a Rails developer doing TDD on a Mac with RSpec, Capybara and Selenium WebDriver. Now I have been asked by my company to use this approach for a .NET-on-Windows environment. What is the best way of doing this? I could just install Ruby and use RSpec, Capybara and Selenium WebDriver for integration testing. But what about unit tests? I also looked at NSpec, but I'm not sure if I can combine that with Capybara or Selenium for integration tests. What would be a good approach here?
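A minimal sketch of pointing Capybara at an externally hosted .NET app, treating it as a black box over HTTP (the URL is a placeholder; this assumes the Ruby-side integration approach mentioned above):

    # spec_helper.rb -- drive the deployed .NET app as a black box
    require 'capybara/rspec'
    require 'selenium-webdriver'

    Capybara.run_server     = false                    # app is hosted by IIS, not Rack
    Capybara.app_host       = 'http://localhost:8080'  # placeholder URL for the .NET app
    Capybara.default_driver = :selenium

Capybara only covers the browser-level integration layer; unit tests for the C# code itself would still live in a .NET framework such as NUnit or MSTest.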


  • Run Tests in Folder

    - by Tomas Mysik
Hi all, today we would like to show you another minor improvement we have prepared for NetBeans 7.2; this time, let's talk a little bit about testing. This minor improvement will be especially useful for users who have a lot of unit tests (that means all of us, right? ;) - just right-click on any folder underneath the Test Files node and you will notice: The result is as expected - all the tests from the given folder are run: That's all for today. As always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent PHPUnit).


  • Visual Studio Load Tests Virtual Users Simulation

    - by Eldar
Hello, I'm currently working on writing a load testing application that takes advantage of Load Test in Visual Studio 2010. The load test will simulate 20 users on the same machine, and I need some data to be shared in-memory between all simulated users. I was surprised I couldn't find documentation answering the following question: What separates each virtual user's running context from the others? Does each virtual user run the tests in its own process? Maybe in its own app domain? Or just on its own thread? I need to know because if each user is running tests in its own process then the in-memory cache isn't shared and is created once per user instead of one time for all of them, which is bad for me.


  • From 20,663 issues to 1 issue&ndash;style-copping C5.Tests

    - by TATWORTH
Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx

I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/, but I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of:

  • My 64-bit development PC running Windows 8 Update with 8Gb RAM
  • Visual Studio 2013 Ultimate with SP2
  • ReSharper
  • GhostDoc Pro

My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. Being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the name of the C5 collection item being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was:

1) Placing of many classes within one physical file.
2) Namespaces within namespaces that did not follow the project structure.
3) As already mentioned, duplication of class names across namespaces.
4) A copyright notice that sprawled but had to be preserved.
5) Project sub-folders all lower case instead of initial-letter capitalised.

The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, but without letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to StyleCop each file. ReSharper provides a very useful feature for doing this, which also fixes missing "this." prefixes and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.


  • Run unittest in a Class

    - by chrissygormley
Hello, I have a test suite that performs smoke tests. I have all my scripts stored in various classes, but when I try to run the test suite I can't seem to get it working if it is in a class. The code is below (a class to call the tests):

    from alltests import SmokeTests

    class CallTests(SmokeTests):
        def integration(self):
            self.suite()

    if __name__ == '__main__':
        run = CallTests()
        run.integration()

And the test suite:

    class SmokeTests():
        def suite(self):
            # Stores all the modules to be tested
            modules_to_test = ('external_sanity', 'internal_sanity')
            alltests = unittest.TestSuite()
            for module in map(__import__, modules_to_test):
                alltests.addTest(unittest.findTestCases(module))
            return alltests

    if __name__ == '__main__':
        unittest.main(defaultTest='suite')

So I can see how to call a normal function, but I'm finding it difficult calling in the suite. In one of the tests the suite is set up like so:

    class InternalSanityTestSuite(unittest.TestSuite):
        # Tests to be tested by test suite
        def makeInternalSanityTestSuite():
            suite = unittest.TestSuite()
            suite.addTest(TestInternalSanity("BasicInternalSanity"))
            suite.addTest(TestInternalSanity("VerifyInternalSanityTestFail"))
            return suite

    def suite():
        return unittest.makeSuite(TestInternalSanity)

If I have def suite() inside the class SmokeTests, the script executes but the tests don't run; if I remove the class, the tests run. I run this as a script and pass variables into the tests, so I do not want to have to run the tests by os.system('python tests.py'). I was hoping to call the tests through the class I have, like any other function, since the script calling it is object oriented. If anyone can get the code to run using CallTests I would appreciate it a lot. Thanks for any help in advance.
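A minimal sketch of one likely fix, offered as an assumption from the excerpt alone: building a TestSuite does nothing by itself, so integration() needs to hand the suite to a runner (names reuse the question's own):

    import unittest

    class SmokeTests(object):
        def suite(self):
            modules_to_test = ('external_sanity', 'internal_sanity')
            alltests = unittest.TestSuite()
            for module in map(__import__, modules_to_test):
                alltests.addTest(unittest.findTestCases(module))
            return alltests

    class CallTests(SmokeTests):
        def integration(self):
            # The missing piece: actually execute the suite that was built.
            unittest.TextTestRunner(verbosity=2).run(self.suite())

    if __name__ == '__main__':
        CallTests().integration()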


  • How exactly do MbUnit's [Parallelizable] and DegreeOfParallelism work?

    - by BenA
I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs sufficiently from my expectations that I suspect I'm missing something! I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces. All of the tests are completely independent of one another, so I'd like all of them to be eligible for parallel execution. To that end, I put the following in the AssemblyInfo.cs:

    [assembly: DegreeOfParallelism(8)]
    [assembly: Parallelizable(TestScope.All)]

My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute and nothing else. None of them are data-driven. However, what I actually see is at most 5-6 threads being used, meaning that my test runs are taking longer than they should. Am I missing something? Do I need to do anything else to ensure that all 8 threads are being used by the runner? N.B. The behaviour is the same irrespective of which runner I use. The GUI, command line and TD.Net runners all behave the same as described above, again leading me to think I've missed something. EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2 build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it does also seem to reference v3.2 of the framework despite that not yet being available. EDIT 2: To further clarify, the structure of my assembly is as follows:

    assembly
    - namespace
      - fixture
        - tests (each carrying only the [Test] attribute)
      - fixture
        - tests (each carrying only the [Test] attribute)
    - namespace
      - fixture
        - tests (each carrying only the [Test] attribute)
      - fixture
        - tests (each carrying only the [Test] attribute)
    - namespace
      - fixture
        - tests (each carrying only the [Test] attribute)
      - fixture
        - tests (each carrying only the [Test] attribute)


  • System Variables, Stored Procedures or Functions for Meta Data

    - by BuckWoody
Whenever you want to know something about SQL Server’s configuration, whether that’s the Instance itself or a database, you have a few options. If you want to know “dynamic” data, such as how much memory or CPU is consumed or what a particular query is doing, you should be using the Dynamic Management Views (DMVs) that you can read about here: http://msdn.microsoft.com/en-us/library/ms188754.aspx  But if you’re looking for how much memory is installed on the server, the version of the Instance, the drive letters of the backups and so on, you have other choices. The first of these are system variables. You access these with a SELECT statement, and they are useful when you need a discrete value for use, say in another query or to put into a table. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms173823.aspx You also have a few stored procedures you can use. These often bring back a lot more data, pre-formatted for the screen. You access these with the EXECUTE syntax. It is a bit more difficult to take the data they return and get a single value or place the results in another table, but it is possible. You can read more about those here: http://msdn.microsoft.com/en-us/library/ms187961.aspx Yet another option is to use a system function, which you access with a SELECT statement, which also brings back a discrete value that you can use in a test or to place in another table. You can read about those here: http://msdn.microsoft.com/en-us/library/ms187812.aspx  By the way, many of these constructs simply query from tables in the master or msdb databases for the Instance or the system tables in a user database. You can get much of the information there as well, and there are even system views in each database to show you the meta-data dealing with structure – more on that here: http://msdn.microsoft.com/en-us/library/ms186778.aspx  Some of these choices are the only way to get at a certain piece of data. But others overlap – you can use one or the other, they both come back with the same data. So, like many Microsoft products, you have multiple ways to do the same thing. And that’s OK – just research what each is used for and how it’s intended to be used, and you’ll be able to select (pun intended) the right choice.
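A few T-SQL examples of the options described above (all are standard SQL Server constructs; the particular properties and views chosen here are illustrative):

    -- System variable: a discrete value, usable inside another query
    SELECT @@VERSION AS InstanceVersion;

    -- System functions: also return discrete values
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('Edition')        AS Edition;

    -- System stored procedure: a lot more data, pre-formatted for the screen
    EXECUTE sp_helpdb;

    -- Catalog view: meta-data straight from the system tables
    SELECT name, recovery_model_desc FROM sys.databases;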


  • Why No android.content.SyncAdapter meta-data registering sync-adapter?

    - by mobibob
I am following the SampleSyncAdapter and upon startup, it appears that my SyncAdapter is not configured correctly. It reports an error trying to load its meta-data. How can I isolate the problem? You can see the other accounts in the system that register correctly. Logcat:

    12-21 17:10:50.667 W/PackageManager( 121): Unable to load service info ResolveInfo{4605dcd0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
    12-21 17:10:50.667 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
    12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
    12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
    12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
    12-21 17:10:50.667 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
    12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587)
    12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92)
    12-21 17:10:50.667 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123)
    12-21 17:10:50.667 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
    12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.skype.contacts.sync, packageName=com.skype.raider
    12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.twitter.android.auth.login, packageName=com.twitter.android
    12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.example.android.samplesync, packageName=com.example.android.samplesync
    12-21 17:10:50.747 W/PackageManager( 121): Unable to load service info ResolveInfo{460504b0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
    12-21 17:10:50.747 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
    12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
    12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
    12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
    12-21 17:10:50.747 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
    12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587)
    12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92)
    12-21 17:10:50.747 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123)
    12-21 17:10:50.747 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
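For comparison, this is the shape of the service declaration the SampleSyncAdapter uses in AndroidManifest.xml (the class and resource names below are placeholders standing in for the questioner's own; the android.content.SyncAdapter meta-data element is exactly what the PackageManager warning says it cannot find):

    <service
        android:name="com.myapp.syncadapter.MySyncAdapter"
        android:exported="true">
        <intent-filter>
            <action android:name="android.content.SyncAdapter" />
        </intent-filter>
        <!-- The element the warning complains about: it must point at an
             XML resource (e.g. res/xml/syncadapter.xml) that describes
             the adapter with a <sync-adapter> element. -->
        <meta-data
            android:name="android.content.SyncAdapter"
            android:resource="@xml/syncadapter" />
    </service>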


  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
I always thought that the NUnit test runners and the test assemblies needed to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). OK, so all I need to do is to reference both NUnit versions, the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions, and everything worked fine except that it was ugly. After playing around a little to make it simpler, I found that I did not need to reference both NUnit.Framework assemblies at all. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed by attributes what to do in a declarative way, there is really no need to tie the runners to a specific version. At its core, NUnit has this little method hidden away to find matching TestFixtures and Tests:

    public bool CanBuildFrom(Type type)
    {
        if (!(!type.IsAbstract || type.IsSealed))
        {
            return false;
        }
        return Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true) ||
               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true) ||
               Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true);
    }

That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes, and the runner executes my intent without binding me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely by checking not against the concrete type but simply against the caught exception's type name as a string.

What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all the consequences. Type equality is lost between versions, so none of your casts will work. That means you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work, but as NUnit shows, it can be easy. Simplicity is therefore not just a nice-to-have but requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above "easy" will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

Versioning Models

1. Bug Fixing and New Isolated Features

When you only need to fix bugs, there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as it was but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly - each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new, unrelated features. If you have a big API surface you should strive hard to do the same, or you will break your customers' code with every release.

2. New Breaking Features

There are times when really new things need to be added to an existing product. The .NET Framework 4.0 changed the CLR in many ways, which caused subtly different behavior even though the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature, void Func() -> bool Func()), but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when it is installed; they will keep using the "old" one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. The key to success was total isolation. You will have two GCs, two JIT compilers, and two finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price of isolation is an increased memory footprint, because everything is loaded and running twice.

3. New Non-Breaking Features

It really depends where you break things. NUnit has evolved, and many different Assert, Expect, ... methods have been added. These changes are all localized in the NUnit.Framework assembly, which can easily be extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable, it is possible to write test executors which can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

Versioning software is hard, and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradicting goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
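A minimal C# sketch of the double-reference alias trick mentioned near the top of this post (the alias name, file names and version numbers are illustrative assumptions):

    // Compile with both references, giving the old one an alias:
    //   csc /r:NUnitOld=nunit.framework-2.4.dll /r:nunit.framework.dll Tests.cs
    extern alias NUnitOld;

    using NUnit.Framework;                                           // new version
    using OldTest        = NUnitOld::NUnit.Framework.TestAttribute;
    using OldTestFixture = NUnitOld::NUnit.Framework.TestFixtureAttribute;

    [TestFixture]    // new NUnit
    [OldTestFixture] // old NUnit, reached via the alias
    public class DualVersionTests
    {
        [Test]
        [OldTest]
        public void Works_under_either_runner()
        {
            Assert.AreEqual(4, 2 + 2);
        }
    }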


  • How can I decide what to test manually, and what to trust to automated tests?

    - by bhazzard
We have a ton of developers and only a few QA folks. The developers have been getting more involved in QA throughout the development process by writing automated tests, but our QA practices are mostly manual. What I'd love is if our development practices were BDD and TDD and we grew a robust test suite. The question is: while building such a test suite, how can we decide what we can trust to the tests, and what we should continue testing manually?


  • Rescue overdue offshore projects and convince management to use automated tests

    - by oazabir
I have published two articles on CodeProject recently. One is a story where an offshore project was two months overdue, my friend who ran it was paying the team from his own pocket, and he was drowning in an ever-increasing number of change requests; it tells how we brainstormed together to come out of that situation: Tips and Tricks to rescue overdue projects. The next one is about convincing management to go for automated tests and give developers extra time per sprint, at the cost of reduced productivity for a couple of sprints. It's hard to negotiate this with even dev leads, let alone managers. Whenever you tell them there are going to be fewer features/bug fixes delivered for the next 3 or 4 sprints because we want to automate the tests and reduce manual QA effort, everyone gets furious and kicks you out of the meeting. Especially in a startup, where every sprint is jam-packed with new features and priority bug fixes to satisfy various stakeholders, including the VCs, it's very hard to communicate the benefits of automated tests across the board. Let me tell you the story of one of my startups, where I had the pleasure to argue over this and came out victorious: How to convince developers and management to use automated test instead of manual test. If you like these, please vote for me!

