Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 247/1421 | < Previous Page | 243 244 245 246 247 248 249 250 251 252 253 254  | Next Page >

  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on?

    I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but: when developing for Android it's not terribly useful to know whether the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units, as there's some overlap; see Range of screens supported in the Android documentation. Taking into account the distribution of Platform Versions and Screen Sizes and Densities, below is my current list, based on information from the Wikipedia article Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    **Phones**

    | Model | Android Version | Screen Size | Density |
    | --- | --- | --- | --- |
    | HTC Wildfire | 2.1/2.2 | Normal | mdpi |
    | HTC Tattoo | 1.6 | Normal | mdpi |
    | HTC Hero | 2.1 | Normal | mdpi |
    | HTC Legend | 2.1 | Normal | mdpi |
    | Sony Ericsson Xperia X8 | 1.6/2.1 | Normal | mdpi |
    | Motorola Droid | 2.0-2.2 | Normal | hdpi |
    | Samsung Galaxy S II | 2.3 | Normal | hdpi |
    | Samsung Galaxy Nexus | 4.0 | Normal | xhdpi |
    | Samsung Galaxy S III | 4.0 | Normal | xhdpi |

    **Tablets**

    | Model | Android Version | Screen Size | Density |
    | --- | --- | --- | --- |
    | Samsung Galaxy Tab 7" | 2.2 | Large | hdpi |
    | Samsung Galaxy Tab 10" | 3.0 | X-Large | mdpi |
    | Asus Transformer Prime | 4.0 | X-Large | mdpi |
    | Motorola Xoom | 3.1/4.0 | X-Large | mdpi |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. Which Android devices should I test against? and What hardware devices do you test your Android apps on?, but they don't seem very canonical. Maybe this should be marked community wiki?

    Read the article

  • Visual Studio Folder Structure

    - by nick
    I am not sure how this works. I am using Visual Studio 2008 and I created a Class Library (say the name is Test). I also selected the option to create a folder for the solution. Following is the directory structure I get:

        Test
            Test
                bin
                    Debug
                obj
                    Debug
                Properties
                    AssemblyInfo.cs
                Test.cs
                Test.csproj
            Test.sln
            Test.suo

    This is the default and I have no problems running my code this way. My query is that I see other solutions (class libraries) created in Subversion by others before, which have a different structure, like this:

        Test
            .svn
            lib
                <<Reference 1>>
                <<Reference 2>>
                ....
                <<Reference N>>
            src
                bin
                    Debug
                obj
                    Debug
                Properties
                    AssemblyInfo.cs
                Test.cs
                Test.csproj
                Test.sln
                Test.suo

    How do I create this structure? All the references to other projects are maintained in the lib folder and the source code is maintained in the src folder. This is not what happens for me. When I open the solution in Visual Studio, I cannot see any such folder like lib or src; it shows the same way as mine. Kindly help, and forgive me for being so elaborate. Thanks

    Read the article

  • Why can't I create a templated subclass of System::Collections::Generic::IEnumerable<T>?

    - by fiirhok
    I want to create a generic IEnumerable implementation, to make it easier to wrap some native C++ classes. When I try to create the implementation using a template parameter as the parameter to IEnumerable, I get an error. Here's a simple version of what I came up with that demonstrates my problem:

        ref class A {};

        template<class B>
        ref class Test : public System::Collections::Generic::IEnumerable<B^> // error C3225...
        {};

        void test() { Test<A> ^a = gcnew Test<A>(); }

    On the indicated line, I get this error:

        error C3225: generic type argument for 'T' cannot be 'B ^', it must be a value type or a handle to a reference type

    If I use a different parent class, I don't see the problem:

        template<class P> ref class Parent {};

        ref class A {};

        template<class B>
        ref class Test : public Parent<B^> // no problem here
        {};

        void test() { Test<A> ^a = gcnew Test<A>(); }

    I can work around it by adding another template parameter to the implementation type:

        ref class A {};

        template<class B, class Enumerable>
        ref class Test : public Enumerable {};

        void test()
        {
            using namespace System::Collections::Generic;
            Test<A, IEnumerable<A^>> ^a = gcnew Test<A, IEnumerable<A^>>();
        }

    But this seems messy to me. Also, I'd just like to understand what's going on here - why doesn't the first way work?

    Read the article

  • Visual Studio 2010 Service Pack 1 Released

    - by krislankford
    The VS 2010 SP1 release was simultaneous with the release of TFS 2010 SP1 and includes support for the Project Server Integration Feature Pack and updates to .NET Framework 4.0. The complete Visual Studio SP1 list, including Test and Lab Manager: http://support.microsoft.com/kb/983509

    The release addresses some of the most requested features from customers of Visual Studio 2010, such as:

    - better Help support
    - IntelliTrace support for 64-bit and SharePoint
    - Silverlight 4 Tools in the box
    - unit testing support on .NET 3.5
    - a new performance wizard for Silverlight

    Another major addition is the announcement of Unlimited Load Testing for Visual Studio 2010 Ultimate with MSDN Subscribers! The benefits of the Visual Studio 2010 Load Test Feature Pack, and useful links:

    - Improved Overall Software Quality through Early Lifecycle Performance Testing: lets you stress test your application early and throughout its development lifecycle with realistically modeled simulated load. By integrating performance validations early into your applications, you can ensure that your solution copes with real-world demands and behaves in a predictable manner, effectively increasing overall software quality.
    - Higher Productivity and Reduced TCO with the Ability to Scale without Incremental Costs: development teams no longer have to purchase Visual Studio Load Test Virtual User Pack 2010.
    - Download the Visual Studio 2010 Load Test Feature Pack Deployment Guide
    - Get started with stress and performance testing with Visual Studio 2010 Ultimate:
      - Quality Solutions Best Practice: Enabling Performance and Stress Testing throughout the Application Lifecycle
      - Hands-On-Lab: Introduction to Load Testing with ASP.NET Profile in Visual Studio 2010
      - How-Do-I videos: Use ASP.NET Profiler in Load Tests; Use Network Emulation in Load Tests
      - VHD/VPC walkthrough: Getting Started with Load and Performance Testing
      - Best Practice guidance: Visual Studio Performance Testing Quick Reference Guide

    Read the article

  • Part 2: The Customization Lifecycle

    - by volker.eckardt(at)oracle.com
    To better understand the challenges of working with customizations, please allow me to explain my understanding of the customization lifecycle. The starting point is the functional GAP list. Any GAP can lead to a customization (but does not have to). The decision is driven by priority, gain, costs, future functionality, accepted workarounds etc. Let's assume the customization has been accepted as such - including estimation. (Otherwise this blog would not have any value.) Now the customization lifecycle starts and could look like this:

    - Functional specification
    - Technical specification
    - Technical development
    - Functional setup
    - Module Test
    - System Test
    - Integration Test (if required)
    - Acceptance Test
    - Production mode
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 2 x Upgrade
    - 2 x Upgrade Test
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 1 x Upgrade
    - 1 x Upgrade Test
    - Usage
    - Review for Retirement
    - Accepted Retirement
    - De-installation

    What I would like to highlight here is that any material and documentation you create upfront or during the first phases will usually be used multiple times, partially or completely, and will be enhanced, reviewed and retested. The better the quality is right from the beginning, the better we can perform the next steps.

    What I see very often is the wish to remove a customization: our customers are upgrading and they would like to get at least some of the customizations replaced with standard functionality. To support this process best, the customization documentation should contain at least the following key information:

    - What is/are the business process(es) where this customization is used or linked to?
    - Who was involved in the different customization phases?
    - What are the objects comprising the customization?
    - What is the setup necessary for the customization?
    - What setup comes with the customization, and what has to be done via other tools or manually?
    - What are the test steps and test results (in all test areas)?
    - What are linked customizations?
    - What is the customization complexity?
    - How is this customization classified?
    - Which technologies were used?
    - How many days were needed to create/test/upgrade the customization?
    - Etc.

    If all this is available, a replacement/retirement can be done much more efficiently and precisely, and an estimation or upgrade itself can be executed with much better support. In the following blog entries I will explain in more detail why we suggest tracking such information, by whom this task should be done, and how.

    Volker Eckardt

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    The usual open-source mocking frameworks (e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrast, Microsoft's Moles framework can 'mock' virtually anything, in that it uses runtime instrumentation to inject callbacks into the MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc. Some useful collected resources (links to the website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles

    **A Gallio extension for Moles**

    Originally, Moles is a part of Microsoft's Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I wrote the few required lines myself - the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like this:

        [Test, Moled]
        public void SomeTest()
        {
            ...

    **What you can do with it**

    Moles can be very helpful if you need to 'mock' something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to 'mock' the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example:

        [assembly: MoledType(typeof(System.IO.File))]

    Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and instructions on how to create the required moles assemblies), please refer to the reference material above.

    **Detouring the .NET framework**

    Imagine that you want to test a method similar to the one below, which internally calls some framework method:

        public void ReadFileContent(string fileName)
        {
            this.FileContent = System.IO.File.ReadAllText(fileName);
        }

    Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so:

        [Test, Moled]
        [Description("This 'mocks' the System.IO.File class with a custom delegate.")]
        public void ReadFileContentWithMoles()
        {
            // arrange ('mock' the FileSystem with a delegate)
            System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");

            // act
            var testTarget = new TestTarget.TestTarget();
            testTarget.ReadFileContent(FileName);

            // assert
            Assert.AreEqual(FileContent, testTarget.FileContent);
        }

    **Detouring static methods and/or classes**

    A static method like the one below...

        public static string StaticMethod(int x, int y)
        {
            return string.Format("{0}{1}", x, y);
        }

    ...can be 'mocked' with the following:

        [Test, Moled]
        public void StaticMethodWithMoles()
        {
            MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");

            var result = StaticClass.StaticMethod(1, 2);

            Assert.AreEqual("uups", result);
        }

    **Detouring constructors**

    You can do this delegate thing even with a class' constructor. The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c'tor...

        public class ClassWithCtor
        {
            public int Value { get; private set; }

            public ClassWithCtor(int someValue)
            {
                this.Value = someValue;
            }
        }

    ...you would do the following:

        [Test, Moled]
        public void ConstructorTestWithMoles()
        {
            MClassWithCtor.ConstructorInt32 =
                   ((@class, @value) => new MClassWithCtor(@class) {ValueGet = () => 99});

            var classWithCtor = new ClassWithCtor(3);

            Assert.AreEqual(99, classWithCtor.Value);
        }

    **Detouring abstract base classes**

    You can also use this approach to 'mock' the abstract base class of a class that you call in your test. Assume that you have something like this:

        public abstract class AbstractBaseClass
        {
            public virtual string SaySomething()
            {
                return "Hello from base.";
            }
        }

        public class ChildClass : AbstractBaseClass
        {
            public override string SaySomething()
            {
                return string.Format(
                    "Hello from child. Base says: '{0}'",
                    base.SaySomething());
            }
        }

    Then you would set up the child's underlying base class like this:

        [Test, Moled]
        public void AbstractBaseClassTestWithMoles()
        {
            ChildClass child = new ChildClass();
            new MAbstractBaseClass(child)
                {
                    SaySomething = () => "Leave me alone!"
                }
                .InstanceBehavior = MoleBehaviors.Fallthrough;

            var hello = child.SaySomething();

            Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello);
        }

    Setting the mole's behavior to a value of MoleBehaviors.Fallthrough causes the 'original' method to be called if a respective delegate is not provided explicitly - here it causes the ChildClass' override of the SaySomething() method to be called. There are some more possible scenarios where the Moles framework could be of much help (e.g. it's also possible to detour interface implementations like IEnumerable<T> and such). One other possibility that comes to my mind (because I'm currently dealing with it) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database, which otherwise would not be possible.

    **Usage**

    Since Moles relies on runtime instrumentation, mole types must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this:

        moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs>

    So, the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for a test (it is also included in the sample download - moled_cmd.xml). This is just a quick-and-dirty 'solution'. Maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there's sufficient demand for it.

    As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place). A typical Gallio/Moles command line (as generated by the mentioned R#-template) looks like this:

        "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles"

    Note: When using the command line with Echo (Gallio's console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won't use the instrumentation callbacks!

    **License issues**

    As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool for that, which comes with license costs (although these 'costs' are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a 'real' software project requires an MSDN Subscription (from VS2010 Professional on).

    **The demo solution**

    The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the Moled attribute described here, the above-mentioned R#-template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing...

    Read the article

  • The number of soft 404 errors is increasing because of redirects to the home page

    - by Stevie G
    I have an increase in soft 404 errors. Using Apache, in my .htaccess file I have:

        Redirect 301 /test.html? /page/pop/test
        Redirect 301 /about.html? /about

    I have also tried:

        Redirect 301 http://www.example.co.za/test.html? http://www.example.co.za/services/test

    However, whenever I go to:

        http://www.example.co.za/test.html
        http://www.example.co.za/about.html

    it just redirects to the home page. I also have this in .htaccess:

        RewriteRule ^.*$ index.php [NC,L]
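    One thing worth checking (a sketch, not a verified fix for this exact setup): the catch-all RewriteRule sends every request to index.php, and mixing mod_alias (Redirect) with mod_rewrite can lead to surprising processing order, so the front controller may answer with the home page and produce the soft 404. Doing the 301s with mod_rewrite, placed before the catch-all, keeps everything in one module:

    ```apache
    # Sketch only: the target paths are taken from the question, the rest is assumed.
    RewriteEngine On

    # External 301 redirects, evaluated before the front controller
    RewriteRule ^test\.html$ /page/pop/test [R=301,L]
    RewriteRule ^about\.html$ /about [R=301,L]

    # Front controller for everything that is not an existing file
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^.*$ index.php [NC,L]
    ```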

    Read the article

  • Alternatives for 'egrep -o "success|error|fail" <filename> | sort | uniq -c'

    - by Wolfy
    I sometimes need to check some logs, and I do this with this command:

        egrep -o "success|error|fail" <filename> | sort | uniq -c

    Sample input:

        test error on line 10
        test connect success
        test insert success
        test started at 00:00
        test delete fail

    Sample output:

        1 error
        1 fail
        2 success

    I would like to know if someone knows a way to do this with a shorter command. Before you ask why I would like to do this with a different command... No special reason, I'm just curious :)
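    For what it's worth, one alternative (not necessarily shorter, just a different shape) is to let awk do the counting instead of sort | uniq -c; a sketch, assuming GNU grep for the -o extraction:

    ```sh
    # extract the keywords, then count occurrences in a single awk pass
    grep -oE 'success|error|fail' <filename> | awk '{ n[$0]++ } END { for (w in n) print n[w], w }'
    ```

    Note that the output order is whatever awk's hash iteration gives, not sorted like the uniq -c version.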

    Read the article

  • Google Analytics - include filter not working

    - by gerl
    I just added an include filter this morning in my domain (test.org). I have:

        Custom Filter > Include > Request URI
        ^/test-a/46212$|^/test-a/46212|^/test-a/46315

    Now when I go to Content > Site Content > All Pages, I see stats for pages that I didn't include in my filter. For example, I see /somethingelse. I only want to see stats for /test-a/46212 and whatever else is in my filter. Please let me know what I'm doing wrong.
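    Two things that might be worth double-checking (general observations, not a diagnosis of this exact profile): as far as I remember the documentation, view filters are not retroactive and can take a while to start applying, so hits recorded before the filter was created will still show up for that date range; and the first alternative in the pattern (^/test-a/46212$) is already covered by the unanchored ^/test-a/46212. If the intent is just "keep these two sections", a single pattern along these lines may be easier to reason about:

    ```text
    ^/test-a/(46212|46315)
    ```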

    Read the article

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on updating the hard drive space and RAM at a later date, depending on how I move forward.

    Server 1: 500GB Hard Drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (Quad Core) @ 2.50GHz
    Server 2: 500GB Hard Drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (Quad Core) @ 2.50GHz

    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft; Counter-Strike (1.6, Source, Global Offensive); Battlefield; Team Fortress.

    I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on virtual machines. The issue is, the discussions I read do not really clearly state whether they are speaking about a virtual server running inside an OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running it inside of an OS.

    Read the article

  • Poor performance on IIS7, only on Windows Server 2008, fine on Windows 7?

    - by user32005
    Hi, I'm new to IIS7 but have experience with other versions. I've been working on an application that works great in a dev environment (as always), but when I push it to a Windows Server 2008/IIS7 box, performance takes a noticeable hit. The dev environment is Windows 7/IIS7. The configuration in IIS is the same on the dev box as on the server. I've tried all sorts of things to try and find a reason for this, but I can't come to any conclusion. I've ruled out database problems on the live box, as all data is cached after the first request; I've confirmed this to be true and made sure there is no additional database traffic. I've ruled out network issues with a combination of monitoring requests with Fiddler and local debugging on the server. Whenever the code runs on the server there seems to be a performance issue. The server: Intel Core 2 Quad CPU Q6600 @ 2.40GHz with 2GB RAM. I know this is not fantastic, but I was expecting it to at least perform as well as my dev environment (which is running much more on a lower spec). CPU usage peaks at under 60%, and memory usage is less than half of what is available. I've enabled failed request tracing, and most of the time is spent in a custom HttpModule; this module works to handle every request. I can't get any more detail as to what within the module may be causing the problem. Any ideas? I've been pulling my hair out for days now. Thanks
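    Since failed request tracing only points at the module as a whole, one crude way to narrow it down (a sketch with placeholder names - MyModule and DoWork stand in for the real module and its per-request work) is to time the module's own code and compare the numbers between the dev box and the server:

    ```csharp
    using System.Diagnostics;
    using System.Web;

    public class MyModule : IHttpModule
    {
        public void Init(HttpApplication context)
        {
            context.BeginRequest += (sender, e) =>
            {
                var app = (HttpApplication)sender;
                var sw = Stopwatch.StartNew();

                DoWork(app);   // the module's existing per-request logic

                sw.Stop();
                Trace.WriteLine("MyModule: " + app.Request.RawUrl + " took " + sw.ElapsedMilliseconds + " ms");
            };
        }

        public void Dispose() { }

        private void DoWork(HttpApplication app)
        {
            // existing logic goes here
        }
    }
    ```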

    Read the article

  • Windows Small Business Server 2003. SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily. It is set up with a wizard in the Monitoring and Performance part of the "Server Management" console. It often fails with a "The page cannot be displayed" error. The event log shows:

        Event Type: Error
        Event Source: ServerStatusReports
        Event Category: None
        Event ID: 1
        Date: 1/16/2011
        Time: 6:03:14 AM
        User: N/A
        Computer: ALPHA
        Description: Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
        Stack Trace:
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
        at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
        [plus lots more stack trace]

    This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run; if I haven't run it for a while it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly. The time that the Server Cleanup Wizard takes seems ridiculous. Thanks

    Read the article

  • Should I use nginx exclusively, or have it as a proxy to Tomcat (performance related)?

    - by Kevin
    I've planned to create a website that'll be pretty heavy on dynamic content, and want to know what would be the wisest choice for part of my web stack. Right now I'm trying to decide whether I should develop upon nginx, using PHP to deliver the dynamic content, or use nginx as a proxy to Tomcat and use servlets to deliver the dynamic content. I have a good amount of experience with Java, JSP, and servlets, so that's a plus right off the bat. Also, since it is a compiled language, it will execute faster than PHP (it is implied here that Java is around 37x faster than PHP) and will create the web pages faster. I have no experience with PHP; however, I'm under the impression that it is easy to pick up. It's slower than Java, but since the client will only be communicating with nginx, I'm thinking that serving the dynamically created web pages to the client will be faster this way. Considering these things, I'd like to know:

    - Are my assumptions correct?
    - Where does the bottleneck occur: creating pages or serving them back to the client?
    - Will proxying Tomcat with nginx give me any of nginx's performance benefits if I'm going to be using Tomcat to generate the dynamic content (keeping in mind my site is going to be heavy in this aspect)?

    I don't mind learning PHP if, in the end, it's going to give me the best performance. I just want to know what would be the best choice from that standpoint.
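    For reference, the proxy setup being discussed is not much configuration; a minimal sketch (server name, paths and the Tomcat port are assumptions) in which nginx serves static assets itself and hands everything else to Tomcat:

    ```nginx
    server {
        listen 80;
        server_name example.com;

        # static assets served directly by nginx
        location /static/ {
            root /var/www/example;
            expires 7d;
        }

        # dynamic requests proxied to the servlet container
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```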

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here, as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you'll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB - Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle's MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning - Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no "silver bullet." But understanding the configuration setting's impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others - the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations - Calvin Sun, Oracle
    Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database - that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning - Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP - Nizameddin Ordulu, Facebook, and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$ 300 over the on-site fee - Register Now!

    Read the article

  • OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance?

    - by Varun
    Question: OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance? Answer: OBI Caching only speeds up what has been seen before. An In-memory data structure generated by the summary advisor is optimized to provide maximum value by accounting for the expected broad usage and drilldowns. It is possible to adapt the in-memory data to seasonality by running the summary advisor on specific workloads. Moreover, the in-memory data is created in an analytic database providing maximum performance for the large amount of memory available.

    Read the article

  • Running each JUnit test in a separate JVM in Eclipse?

    - by HenryR
    I have a project with nearly 500 individual tests in around 200 test classes. Some of these tests don't do a great job of tearing down their own state after they're finished, and in Eclipse this results in some tests failing. The tests all pass when running the test suite from the command line via Ant. Can I enable 'test isolation' somehow in Eclipse? I don't mind if it takes longer to run. Long term, I will clean up the misbehaving tests, but in the short term I'd like to get the tests working.
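    There is no per-test JVM fork option built into Eclipse's JUnit launcher as far as I know, but for comparison, the isolation you are getting on the command line is most likely Ant's fork settings; a sketch of the relevant bit of a build file (target names, paths and report directories are assumed):

    ```xml
    <!-- fork="yes" + forkmode="perTest" gives each test class its own JVM -->
    <junit fork="yes" forkmode="perTest" printsummary="on" haltonfailure="no">
      <classpath refid="test.classpath"/>
      <formatter type="plain"/>
      <batchtest todir="build/test-reports">
        <fileset dir="build/test-classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
    ```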

    Read the article

  • Can I override task :environment in test_helper.rb to test rake tasks?

    - by Michael Barton
    I have a series of rake tasks in a Rakefile which I'd like to test as part of my specs etc. Each task is defined in the form:

        task :do_something => :environment do
          # Do something with the database here
        end

    where the :environment task sets up an ActiveRecord/DataMapper database connection and classes. I'm not using this as part of Rails, but I have a series of tests which I like to run as part of BDD. This snippet illustrates how I'm trying to test the rake tasks:

        def setup
          @rake = Rake::Application.new
          Rake.application = @rake
          load File.dirname(__FILE__) + '/../../tasks/do_something.rake'
        end

        should "import data" do
          @rake["do_something"].invoke
          assert something_in_the_database
        end

    So, my request for help: is it possible to override the :environment task in my test_helper.rb file so that my rake testing interacts with my test database, rather than production? I've tried redefining the task in the helper file, but this doesn't work. Any help would be great, as I've been stuck on this for the past week.
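    One approach that is sometimes suggested (a sketch, under the assumption that the .rake file has already been loaded so the task exists, and that ActiveRecord is the ORM in play - adapter and database path are made up) is to clear the existing :environment task and redefine it to point at the test database:

    ```ruby
    # in test_helper.rb, or in the test's setup after the rake file is loaded
    require 'rake'
    require 'active_record'   # assuming the ActiveRecord variant of :environment

    # drop the original actions/prerequisites of :environment, then redefine it
    Rake::Task[:environment].clear if Rake::Task.task_defined?(:environment)

    Rake::Task.define_task(:environment) do
      # connect to the test database instead of production
      ActiveRecord::Base.establish_connection(
        :adapter  => 'sqlite3',
        :database => 'db/test.sqlite3'
      )
    end
    ```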

    Read the article

  • grails gsp test evaluates to false, but block is still rendered. Why?

    - by ?????
    I'm baffled by the Grails test operator. This expression:

        <g:if test="${!(preferences.displayOption.equals('ANA') || preferences.displayOption.equals('FLOP'))} ">
            ${!(preferences.displayOption.equals('ANA') || preferences.displayOption.equals('FLOP'))}
        </g:if>

    prints false. How can that be? I'm printing the exact same condition I'm testing for! Even though I'm certain the test condition evaluates to false - because it prints false on the very next line - the statements inside the g:if are being rendered. Any ideas as to what's going on?

    Read the article

  • From a Perl test file, how do I check the contents of a file?

    - by justintime
    I want to test a script I have written in Perl, and specifically check what output it writes to a file. I wrote the script some time ago and don't want to modify it to the extent of turning it into a module, but I would like to regression test it before adding some small functional changes. So far I have:

        use Test::Command tests => 10;

        exit_is_num($cmd, 0);
        ....

    But the command produces some files, and I want to check those files are the same as I expect (either equal, or matching some regexp). Any suggestions?
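    A sketch of one way to do the file check alongside Test::Command, using plain Test::More (the file name and expected contents here are made up); there is also a Test::File::Contents module on CPAN that wraps this kind of check, if adding a dependency is acceptable:

    ```perl
    use strict;
    use warnings;
    use Test::More;

    my $file = 'output.txt';    # whatever file the script is expected to write

    open my $fh, '<', $file or BAIL_OUT("cannot open $file: $!");
    my $content = do { local $/; <$fh> };   # slurp the whole file
    close $fh;

    is($content, "expected content\n", 'file contents match exactly');
    like($content, qr/^header:/m,      'file contents match a pattern');

    done_testing();
    ```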

    Read the article

  • Any open source back test engine using Bloomberg tick-by-tick data?

    - by Ian Xu
    I have a back test on index futures to do. I've finished the test on 1-minute OHLC data and the result is OK. Next I want to use our tick-by-tick data downloaded from Bloomberg. I have browsed the internet and found that several trading platforms are available for this kind of task, but Bloomberg is not in their lists of data source providers, so I think these are not suitable for my case. I'm wondering whether there is any open-source engine that I could embed to finish the test?

    Read the article

  • How do I test ActionFilterAttributes that work with ModelState?

    - by Tomas Lycken
    As suggested by (among others) Kazi Manzur Rashid in this blog post, I am using ActionFilterAttributes to transfer model state from one request to another when redirecting. However, I find myself unable to write a unit test that tests the behavior of these attributes. As an example, this is what I want the test for the ImportModelStateAttribute to do:

    - Set up the filterContext so that TempData[myKey] contains some fake "exported" ModelState (that is, a ModelStateDictionary I create myself and add one error to).
    - Make ModelState contain one model error.
    - Call OnActionExecuting.
    - Verify the two dictionaries are merged, and ModelState now contains both errors.

    I'm at a loss already on the second step.
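    For what it's worth, a sketch of how such a test could be arranged - ActionExecutingContext can be newed up directly; FakeController, the TempData key and the attribute's behavior are assumptions based on the pattern from the blog post, and the test attribute is NUnit-style:

    ```csharp
    using System.Web.Mvc;
    using NUnit.Framework;

    [TestFixture]
    public class ImportModelStateAttributeTests
    {
        private class FakeController : Controller { }

        [Test]
        public void OnActionExecuting_MergesImportedModelStateIntoModelState()
        {
            // arrange: one "live" error on the controller, one "exported" error in TempData
            var controller = new FakeController();
            controller.ModelState.AddModelError("Age", "existing error");

            var exported = new ModelStateDictionary();
            exported.AddModelError("Name", "imported error");
            controller.TempData["myKey"] = exported;          // key the attribute is assumed to read

            var filterContext = new ActionExecutingContext { Controller = controller };

            // act
            new ImportModelStateAttribute().OnActionExecuting(filterContext);

            // assert: both errors should now be present
            Assert.AreEqual(2, controller.ModelState.Count);
        }
    }
    ```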

    Read the article

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing book when naming test methods - MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for tests that I write for a 'controller' or 'coordinator' class, there isn't necessarily a method which I want to test. For these tests, I generate multiple conditions which make up one scenario and then I verify the expectation. For example, I may set some properties on different instances, generate an event and then verify that my expectations of the controller/coordinator are being met. Now, my controller handles events using a private event handler. Here my scenario is that:

    - I set some properties, say three: condition1, condition2 and condition3
    - my scenario also includes an event being raised
    - I don't have a method name, as my event handler is private.

    How do I name such a test method?

    Read the article

  • Is a class that is hard to unit test badly designed?

    - by Extrakun
    I am now doing unit testing on an application which was written over the year, before I started to do unit testing diligently. I realized that the classes I wrote are hard to unit test, for the following reasons:

    - They rely on loading data from a database. This means I have to set up a row in the table just to run the unit test (and I am not testing database capabilities).
    - They require a lot of other external classes just to get the class I am testing into its initial state.

    On the whole, there doesn't seem to be anything wrong with the design, except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated test cases along with each of the classes, ensuring that I don't heap extra dependencies or coupling onto a class for it to work, the classes might be better designed. Does this reasoning hold water? What are your experiences?
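    To make the "too tightly coupled" point concrete, here is a minimal sketch (all names invented, not taken from the application in question) of the usual remedy: the class under test depends on a narrow interface instead of the database, so a unit test can hand it a stub and never needs a real row in a table:

    ```java
    // The collaborator is hidden behind a small interface...
    interface CustomerSource {
        boolean isVip(long customerId);
    }

    // ...and injected through the constructor instead of being looked up internally.
    class DiscountCalculator {
        private final CustomerSource customers;

        DiscountCalculator(CustomerSource customers) {
            this.customers = customers;
        }

        double discountFor(long customerId) {
            return customers.isVip(customerId) ? 0.2 : 0.0;
        }
    }

    class DiscountCalculatorTest {
        void vipCustomersGetTwentyPercent() {
            CustomerSource stub = id -> true;                 // no database involved
            DiscountCalculator calc = new DiscountCalculator(stub);
            assert calc.discountFor(42L) == 0.2;
        }
    }
    ```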

    Read the article

  • JPA DAO integration test not throwing exception when duplicate object saved?

    - by HDave
    I am in the process of unit testing a DAO built with Spring/JPA and Hibernate as the provider. Prior to running the test, DBUnit inserted a User record with username "poweruser" -- username is the primary key in the users table. Here is the integration test method:

        @Test
        @ExpectedException(EntityExistsException.class)
        public void save_UserTestDataSaveUserWithPreExistingId_EntityExistsException() {
            User newUser = new UserImpl("poweruser");
            newUser.setEmail("[email protected]");
            newUser.setFirstName("New");
            newUser.setLastName("User");
            newUser.setPassword("secret");
            dao.persist(newUser);
        }

    I have verified that the record is in the database at the start of this method. Not sure if this is relevant, but if I do a dao.flush() at the end of this method I get the following exception:

        javax.persistence.PersistenceException: org.hibernate.exception.ConstraintViolationException: Could not execute JDBC batch update
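    A possible explanation (offered as a sketch, not a confirmed diagnosis): with JPA, persist() typically only schedules the INSERT, and the duplicate key is only detected when the persistence context is flushed - which is exactly what the dao.flush() experiment shows, at which point Hibernate reports a PersistenceException wrapping a ConstraintViolationException rather than an EntityExistsException. One way to make the test deterministic is to flush inside the test and expect the exception that actually surfaces; JUnit 4 style shown, with UserDao as an assumed type name and the remaining setters elided:

    ```java
    import javax.persistence.PersistenceException;
    import org.junit.Test;

    public class UserDaoIntegrationTest {

        // dao wired up by the Spring test context, as in the original test
        private UserDao dao;

        @Test(expected = PersistenceException.class)
        public void save_UserWithPreExistingId_FailsOnFlush() {
            User newUser = new UserImpl("poweruser");   // primary key already inserted by DBUnit
            // ...set the remaining fields as in the original test...
            dao.persist(newUser);
            dao.flush();   // forces the INSERT, so the constraint violation surfaces here
        }
    }
    ```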

    Read the article

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing book when naming test methods - MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for tests that I write for a 'controller' or 'coordinator' class, there isn't necessarily a method which I want to test. For these tests, I generate multiple conditions which make up one scenario and then I verify the expectation. For example, I may set some properties on different instances, generate an event and then verify that my expectation of the controller/coordinator is being met. Now, my controller handles events using a private event handler. Here my scenario is that:

    - I set some properties, say three: condition1, condition2 and condition3
    - my scenario also includes an event being raised
    - I don't have a method name, as my event handler is private.

    How do I name such a test method?

    Read the article
