Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 62/1421 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • How to remove the graphical user interface?

    - by Praveen Kumar
    Ok, my question is this: I want to run a heavy application on a virtual machine (VirtualBox) with just 2 GB of RAM (the Windows 7 32-bit host has 4 GB, 3.5 GB effective). Initially I thought of installing Ubuntu Server 12.04.1, which doesn't come with a GUI, so I assumed it would perform better, but I only have Ubuntu 12.04 Desktop. My question is: is it possible to remove the GUI parts of Ubuntu 12.04 Desktop (not Server), keeping only the core OS, after installing it in a virtual machine? Or is there any other way to improve the performance of the OS? If you need more information, I am ready to provide it. I don't need the GUI at all; even a small terminal window is fine for me, since I can access files through FTP.

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles. I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me:

    http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the Call tree has loaded, you do not get the red warnings.

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase, I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.

    Read the article

  • SQL SERVER - Improve Performance by Reducing IO - Creating Covered Index

    This blog post is in response to T-SQL Tuesday #004: IO by Mike Walsh. The subject of this month is IO. Here is my quick blog post on how a covered index can improve performance by reducing IO. Let us kick off this post with a disclaimer about indexes: indexing is a very complex subject and [...]

    Read the article

  • What are the benefits of archiving?

    - by HappyDeveloper
    I always see sites that keep only fresh content on the home page or subsections, while the rest of the content is kept in a separate section called the 'archive'. Recently I have also heard that NoSQL DBs like MongoDB are good for archiving (which makes me think this is related to performance). So why do sites archive their content? What's the benefit over, say, a simple paginator through which you could reach all the content? Is archiving done for performance? Or SEO? Or just user experience?

    Read the article

  • Compiling vs using pre-built binaries performance?

    - by Nick Rosencrantz
    Will performance be better (quicker) if I manually compile the source of a software component on the actual machine it will be used on, compared to source compiled on another platform, perhaps for many different architectures? I got some good results compiling source that I downloaded, and I wonder whether this was due to compiling it rather than downloading a pre-compiled binary, which is often the case with software updates.

    Read the article

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I may need to test a service running as a specific user. In the past I had created a number of "one off" programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters based on a configuration file. The program gets the following information from the configuration file:

    - WCC connection information (host, port)
    - User to use to run the service
    - Service to run
    - Any parameters for the service

    The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:

        ridc.host=localhost
        ridc.port=4444
        ridc.user=sysadmin
        ridc.idcservice=GET_SEARCH_RESULTS
        idcservice.QueryText=dDocType <matches> `Document`
        idcservice.SortField=dDocName
        idcservice.SortDesc=ASC

    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command line argument, the configuration file name. The configuration file name is optional and defaults to config.properties. If you have any suggestions for improvements, let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.
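
    The download link above has the full program; as a rough sketch of what its core might look like, the following uses the standard RIDC client API (IdcClientManager, DataBinder, ServiceResponse) and assumes the property-name conventions from the sample configuration file above. Error handling and the enhancements discussed are omitted:

        import java.io.FileInputStream;
        import java.util.Properties;

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class GenericRidcTest {
            public static void main(String[] args) throws Exception {
                // The configuration file name is optional; default to config.properties.
                String configFile = args.length > 0 ? args[0] : "config.properties";
                Properties props = new Properties();
                props.load(new FileInputStream(configFile));

                // Connect to the WCC server over the IDC socket protocol.
                IdcClientManager manager = new IdcClientManager();
                IdcClient client = manager.createClient(
                        "idc://" + props.getProperty("ridc.host") + ":" + props.getProperty("ridc.port"));
                IdcContext context = new IdcContext(props.getProperty("ridc.user"));

                // Build the service request: the service name plus any idcservice.* parameters.
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", props.getProperty("ridc.idcservice"));
                for (String key : props.stringPropertyNames()) {
                    if (key.startsWith("idcservice.")) {
                        binder.putLocal(key.substring("idcservice.".length()), props.getProperty(key));
                    }
                }

                // Send the request and print the raw response.
                ServiceResponse response = client.sendRequest(context, binder);
                System.out.println(response.getResponseAsString());
            }
        }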

    Read the article

  • CDN for site with target market in Australia

    - by Jae Choi
    I was told that http://www.edgecast.com/ is a very good CDN provider for the Australian market. I have a cloud server based in Sydney, Australia, but I was wondering whether it's even worth getting a CDN, as my target market is Australia-based as well. Would I see any performance gain if I used the above CDN service, or would a CDN be more for sites that target international visitors? I have Apache installed on our server, but I would like to install Nginx. Would I see a bigger performance gain from that change than from a CDN, or should I go for both, since they are both beneficial?

    Read the article

  • Does 3D modeling software choice during asset creation affect performance at runtime?

    - by user134143
    Does the software used to create 3D assets (for game development specifically) have an impact on the efficiency of the program? In other words, is it possible to reduce the operating footprint of an application merely by using alternative development software during the production of 3D assets? If you use two different applications to create a 3-dimensional image of a box, can one of them result in better performance even if aspects of the image are identical? I am trying to get the information I need without causing unnecessary debate over specific software choices.

    Read the article

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

        @Test
        public void MoveY_MoveZero_DoesNotMove() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(0.0);
            Assert.assertEquals(50.0, p.Y, 0.0);
        }

    This test then causes me to create the class Point:

        public class Point {
            double X;
            double Y;

            public Point(double x, double y) {
                X = x;
                Y = y;
            }

            public void MoveY(double yDisplace) {
                throw new UnsupportedOperationException();
            }
        }

    Ok. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test whether it changes the value. So I write a test that calls p.MoveY(10.0) and checks that p.Y is equal to 60.0. It fails, so then I change the function to look like so:

        public void MoveY(double yDisplace) {
            Y += yDisplace;
        }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that, if I wrote it correctly, it doesn't fail at first. That means I didn't follow the principle of "Red, Green, Refactor." Of course, this is a first-world problem of TDD, but getting a failure at first is helpful in that it shows your test can fail. Otherwise this seemingly innocent test, which is passing for the wrong reasons, could fail later because it was written wrong. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap who inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it could really work, and the bug could be in the test. I don't think that would happen in this particular case because the code sample is so simple, but in a large, complicated system that might not be the case. It seems crazy to say that I want my tests to fail, but that is an important step in TDD, for good reasons.
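
    One pragmatic way to get the "your test can fail" evidence in this situation is to prove it by temporarily mutating the implementation. A minimal sketch continuing the Point example above (the sabotage is deliberate and reverted immediately; it is illustrative, not part of the final code):

        @Test
        public void MoveY_MoveNegative_MovesDown() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(-10.0);
            Assert.assertEquals(40.0, p.Y, 0.0);
        }

        // In Point, temporarily break MoveY, run the suite, and watch this test
        // go red before restoring the real implementation:
        public void MoveY(double yDisplace) {
            // Y += yDisplace;            // real implementation
            Y += Math.abs(yDisplace);     // deliberate mutation: negative input now misbehaves
        }

    Seeing red with the mutation in place, then green again after reverting it, demonstrates the same thing the initial failure in "Red, Green, Refactor" normally demonstrates.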

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who has since left the company, without leaving a trace of documentation. Here were my initial steps in approaching this problem:

    Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it could not be finished prior to release, it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written by QA prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request given my responsibility for this requirement, I insisted, and I have fallen out of favor with some of the folks, leaving me baffled.

    Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, and trying to be safe rather than sorry. Is this approach wrong, or did I approach it incorrectly?

    P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a two-hour meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • SQL Server v.Next (Denali) : Changes to performance counters

    - by AaronBertrand
    In a previous post about changed system objects in Denali, I talked about the changes to memory-related DMVs due to underlying changes in the memory manager. The SQLOS team has posted a great introduction to these changes, and they plan to post more details in future posts. In the meantime, and due to a question yesterday from Tom LaRock: ...I thought I would tell you about some performance counters that have changed between SQL Server 2008 R2 and Denali - most of which involve...(read more)

    Read the article

  • Load Balance and Parallel Performance

    Load balancing an application's workload among threads is critical to performance. However, achieving perfect load balance is non-trivial; it depends on the parallelism within the application, the workload, the number of threads, the load-balancing policy, and the threading implementation.
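
    As an illustration of one common load-balancing policy: splitting work into many small tasks fed through a shared queue lets threads that finish early pick up more work instead of idling. A minimal sketch in Java (the task count and simulated uneven workload are invented for illustration):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class DynamicLoadBalance {
            public static void main(String[] args) throws InterruptedException {
                int nThreads = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(nThreads);

                // Many small tasks in one shared queue: a thread that draws cheap
                // tasks simply pulls more, so uneven costs balance out dynamically.
                for (int chunk = 0; chunk < 1024; chunk++) {
                    final int c = chunk;
                    pool.submit(() -> process(c));
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }

            static void process(int chunk) {
                // Simulated uneven work: some chunks cost far more than others,
                // which is exactly what defeats a static equal-split partitioning.
                long steps = (chunk % 7 == 0) ? 5_000_000L : 50_000L;
                long acc = 0;
                for (long i = 0; i < steps; i++) acc += i;
                if (acc == -1) System.out.println("unreachable; keeps the loop from being optimized away");
            }
        }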

    Read the article

  • Improving the performance of JDeveloper11g (part 2) and JVMs in general

    - by asantaga
    Just received an email from one of our JVM developers, who read my blog entry on performance tuning JDeveloper11g and has confirmed that all of the above parameters are fully supported :-) He has also provided descriptions of the parameters, so we can learn what magic is actually being applied:

    - -XX:+AggressiveOpts -- enables the latest and greatest JVM optimizations. It will likely help most Java applications. It's fully supported. The downside is that because it has the latest and greatest optimizations, there is some small probability that it may not offer as good an experience. As the features enabled with this command line option "mature", they are made the default in a future JDK release. So, you can think of this command line option as the place where the newest optimizations get introduced; some time later they are moved out from under AggressiveOpts to become default behavior.

    - -XX:+OptimizeStringConcat -- only works with the -server JVM. It may be enabled by default in a future JDK 7 update release. This option delays the construction of a StringBuilder/StringBuffer and attempts to avoid re-sizing the underlying char[] by detecting the size of the char[] to allocate based on what's being appended to the StringBuilder/StringBuffer.

    - -XX:+UseStringCache -- I would not suggest using this unless you knew that JDeveloper allocated the same string over and over again, and that this string is one of the first 100,000 allocated strings. In short, I'd recommend against using it. In fact, Java 7 (currently) does not include this feature.

    - -XX:+UseCompressedOops -- applicable to 64-bit JVMs. If you're using a 64-bit JVM, I'd suggest you use it. It's auto-enabled in JDK 7 64-bit JVMs, and later JDK 6 64-bit JVMs enable it by default too.

    - -XX:+UseGCOverheadLimit -- this option is already enabled by default.

    One other command line option to consider is -XX:+TieredCompilation, for JDK 6 Update 25 or later, or JDK 7. This gives you the startup of a -client JVM and the peak performance of a -server JVM. Awesome-ness! Finally, Charlie also pointed out to me a "new" book he's just published where he goes into the details of JVM tuning, a must for all Fusion Middleware tuning exercises. Thanks Charlie!

    Read the article

  • Compilable modern alternatives to C/C++

    - by Jeremy French
    I am considering writing a new software product. Performance will be critical, so I am wary of using an interpreted language or one that uses an emulation layer (read: Java). Which leads me to thinking of using C (or C++); however, these are both rather long in the tooth, and I haven't used either for a long time. I figure that in the last 20 years someone should have created something that is reasonably popular, is nice to code in, and is compiled. What more modern alternatives are there to C for writing high-performance compiled code? Edit, in response to comments: if C++ is a different beast than it was 15 years ago, I would consider it; I guess I had assumed it had some inherent problems. Parallelisation would be important, but probably not across multiple machines.

    Read the article

  • Nike Achieves Scalability and Performance with Oracle Coherence & Exadata

    - by Michelle Kimihira
    Today, we are featuring a customer interview with Nicole Otto, Senior Director, Consumer Digital Tech at Nike, who talks about how Nike achieved scalability and performance with Oracle Coherence and Oracle Exadata.

    Additional Information:
    - Product Information on Oracle.com: Oracle Fusion Middleware
    - Follow us on Twitter and Facebook
    - Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • How to Structure Bonuses for Software Developers?

    - by campbelt
    I am a software developer and have been asked to define a bonus structure for myself by recommending the metrics that will determine my bonus. I think that once I have defined this bonus structure, there is a decent chance it will end up being applied to other members of my department. So, for personal and professional reasons, I want to try to get this right :) I am wondering: what do you think are fair and accurate measurements of a software developer's performance? And, if you are a developer or a manager of developers, what metrics does your company use to measure developer performance?

    Read the article

  • Best practice on Linux servers and CPU/power throttling?

    - by Valentin
    I am running a couple of Debian 6 (2.6.32) and 7 (3.2) Linux servers, and all of them have energy-saving settings enabled in their BIOS. Furthermore, Linux shows that the CPUs are throttled when the servers are idling. I wonder if this could cause any harm - could there be, e.g., performance impacts because Linux does not handle the throttling correctly? Is there a best practice for Linux servers and power/CPU throttling? Do you switch your energy profiles to "performance", or do you leave both the BIOS and the OS at their default settings? The reason I am asking is that I encountered several performance issues on physical Dell servers although all values (CPU/load, memory, I/O, network, etc.) seemed normal. After changing the BIOS power settings to "performance" in those specific cases, I was able to get rid of the performance issues.

    Read the article

  • Optimized algorithm for line-sphere intersection in GLSL

    - by fernacolo
    Well, hello then! I need to find the intersection between a line and a sphere in GLSL. Right now my solution is based on Paul Bourke's page and was ported to GLSL this way:

        // The line passes through p1 and p2:
        vec3 p1 = (...);
        vec3 p2 = (...);

        // Sphere center is p3, radius is r:
        vec3 p3 = (...);
        float r = ...;

        float x1 = p1.x; float y1 = p1.y; float z1 = p1.z;
        float x2 = p2.x; float y2 = p2.y; float z2 = p2.z;
        float x3 = p3.x; float y3 = p3.y; float z3 = p3.z;

        float dx = x2 - x1;
        float dy = y2 - y1;
        float dz = z2 - z1;

        float a = dx*dx + dy*dy + dz*dz;
        float b = 2.0 * (dx * (x1 - x3) + dy * (y1 - y3) + dz * (z1 - z3));
        float c = x3*x3 + y3*y3 + z3*z3 + x1*x1 + y1*y1 + z1*z1
                  - 2.0 * (x3*x1 + y3*y1 + z3*z1) - r*r;

        float test = b*b - 4.0*a*c;
        if (test >= 0.0) {
            // Hit (according to Treebeard, "a fine hit").
            float u = (-b - sqrt(test)) / (2.0 * a);
            vec3 hitp = p1 + u * (p2 - p1);
            // Now use hitp.
        }

    It works perfectly! But it seems slow... I'm new at GLSL. You can answer this question in two ways: tell me there is no better solution, showing some proof or strong evidence, or tell me about GLSL features (vector APIs, primitive operations) that make the above algorithm faster, showing some example. Thanks a lot!
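
    Since the question asks for an example: the scalar arithmetic above collapses naturally onto GLSL's built-in vector operations, which GPUs generally handle better than component-by-component float math. A hedged sketch of the same quadratic, algebraically identical to the original (whether it actually measures faster depends on the driver and hardware):

        vec3 d = p2 - p1;       // line direction (unnormalized)
        vec3 m = p1 - p3;       // sphere center to line origin
        float a = dot(d, d);
        float b = 2.0 * dot(d, m);
        float c = dot(m, m) - r * r;
        float disc = b * b - 4.0 * a * c;
        if (disc >= 0.0) {
            float u = (-b - sqrt(disc)) / (2.0 * a);
            vec3 hitp = p1 + u * d;
            // Now use hitp, as before.
        }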

    Read the article

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I frequently visit here and SO, I've noticed a trend of writing unit tests whenever possible, and, as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL is returning the correct data, for example? Then, if the data isn't under your control (e.g., that of a 3rd-party system), how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky; particularly finding data to test against...
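
    One common way to make such SQL testable without reams of hand-written dummy data is to run each query against a small, purpose-built fixture in an in-memory database. A hedged sketch using JUnit 4 and the H2 in-memory database (the table, rows, and query are invented for illustration; a real fixture would mirror the production schema):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        import org.junit.After;
        import org.junit.Assert;
        import org.junit.Before;
        import org.junit.Test;

        public class OrderQueryTest {
            private Connection conn;

            @Before
            public void setUp() throws Exception {
                // A fresh in-memory database per test; it vanishes when the connection closes.
                conn = DriverManager.getConnection("jdbc:h2:mem:");
                try (Statement s = conn.createStatement()) {
                    s.execute("CREATE TABLE orders (id INT PRIMARY KEY, amount DECIMAL(10,2), status VARCHAR(16))");
                    s.execute("INSERT INTO orders VALUES (1, 250.00, 'OPEN'), (2, 99.99, 'CLOSED'), (3, 180.00, 'OPEN')");
                }
            }

            @After
            public void tearDown() throws Exception {
                conn.close();
            }

            @Test
            public void openOrdersOverOneHundred() throws Exception {
                // The SQL under test; the fixture above was chosen to exercise both filters.
                String sql = "SELECT id FROM orders WHERE status = 'OPEN' AND amount > 100 ORDER BY id";
                try (Statement s = conn.createStatement(); ResultSet rs = s.executeQuery(sql)) {
                    Assert.assertTrue(rs.next());
                    Assert.assertEquals(1, rs.getInt("id"));
                    Assert.assertTrue(rs.next());
                    Assert.assertEquals(3, rs.getInt("id"));
                    Assert.assertFalse(rs.next());
                }
            }
        }

    The same pattern extends to the view idea in the question: create the views over fixture tables and assert against them, rather than hunting for suitable production data.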

    Read the article

  • How should I account for the GC when building games with Unity?

    - by Eonil
    As far as I know, Unity3D for iOS is based on the Mono runtime, and Mono has only a generational mark & sweep GC. This GC system can't avoid collection pauses, which stop the game. Instance pooling can reduce this, but not completely, because we can't control the instantiation that happens inside the CLR's base class library. Those hidden small and frequent allocations will eventually cause non-deterministic GC pauses. Forcing a complete GC periodically would degrade performance greatly (can Mono force a complete GC, actually?). So, how can I avoid this GC time when using Unity3D, without a huge performance degradation?
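
    For reference, the instance pooling the question mentions is language-agnostic: pre-allocate objects once and recycle them, so steady-state gameplay allocates nothing for the GC to collect. A minimal sketch in Java (Unity scripting itself is C#/Mono; the Pool and Bullet names here are invented for illustration):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.function.Supplier;

        // Reuse instances instead of allocating per frame, so the generational
        // GC sees little new garbage during gameplay.
        public class Pool<T> {
            private final Deque<T> free = new ArrayDeque<>();
            private final Supplier<T> factory;

            public Pool(Supplier<T> factory, int preallocate) {
                this.factory = factory;
                for (int i = 0; i < preallocate; i++) {
                    free.push(factory.get());   // pay allocation cost up front, not mid-game
                }
            }

            public T acquire() {
                // Grow only if the pool runs dry; ideally preallocate enough to avoid this.
                return free.isEmpty() ? factory.get() : free.pop();
            }

            public void release(T obj) {
                free.push(obj);   // caller must reset the object's state before reuse
            }
        }

    Usage might look like Pool<Bullet> bullets = new Pool<>(Bullet::new, 256), with acquire() on spawn and release() on despawn. As the question notes, this only helps with your own allocations, not those hidden inside the runtime's base libraries.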

    Read the article

  • Recording Available: Oracle ETPM Performance Forum: "Scalability", Wednesday March 21st, at 1pm EST - 4:30pm EST

    - by Rick Finley
    Attached is the recording URL for last month's Oracle ETPM Performance Forum meeting, held Wednesday, March 21st, from 1pm EST to 2:30pm EST. The topic was "Scalability", focusing on an overview of important scalability concepts, scalability testing and troubleshooting, and the ETPM scalability characteristics we have seen in our benchmark testing.

    Meeting Recording Playback URL: https://oracletalk.webex.com/oracletalk/ldr.php?AT=pb&SP=MC&rID=67420077&rKey=73798b44e06240dd

    Read the article

  • Improving 2D Range Query Performance in SQL Server

    When using the BETWEEN operator on multiple columns, you are likely using a 2D range query. Such queries perform very poorly in SQL Server. This article examines rewriting these queries for improved performance.

    Read the article

  • Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    982896 ... Microsoft Advisory Services Engagement Scenario - BizTalk Server Performance Issues

    Read the article

  • Upgrading to Oracle Enterprise Performance Management Version 11.1.2

    Oracle Enterprise Performance Management Version 11.1.2 offers many great new features for customers upgrading from previous versions of their Hyperion applications. This webcast discusses the benefits of these new features, plus the best way to go about planning and implementing the upgrade. AMOSCA, an Oracle Platinum Partner, has already completed 15 Oracle EPM upgrades to Version 11.1.x with its customers. Noel Gorvett, Managing Director of AMOSCA, shares their experiences of these upgrades to help customers currently considering an upgrade make the best decisions when planning and implementing it.

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >