Search Results

Search found 267 results on 11 pages for 'benchmarking'.

Page 7/11 | < Previous Page | 3 4 5 6 7 8 9 10 11  | Next Page >

  • The JRockit Book is Now in Print!

    - by Marcus Hirt
    Yes. I know. It's been in print for some days already, but I haven't found time to write about it until now. The book is a good guide to JVMs in general, and to JRockit in particular. If you've ever wondered how the innards of the Java Virtual Machine work, or how to use JRockit Mission Control to hunt down problems in your Java applications, this book is for you. The book is written for intermediate to advanced Java developers. These are the chapters: Getting Started; Adaptive Code Generation; Adaptive Memory Management; Threads and Synchronization; Benchmarking and Tuning; JRockit Mission Control; The Management Console; The Runtime Analyzer; The Flight Recorder; The Memory Leak Detector; JRCMD; Using the JRockit Management APIs; JRockit Virtual Edition; Appendix A: Bibliography; Appendix B: Glossary; Index. The book is 588 pages long. For more information about the book, see the book page at Packt.

    Read the article

  • IFMR Conference – Global Procurement & Supply Chain Management for the Oil & Gas Industry

    - by Pam Petropoulos
    Dates: June 9 - 11, 2014. Location: JW Marriott, Houston, TX. This 2nd Global Procurement and Supply Chain Management Conference for the Oil & Gas Industry will cover key market challenges including: supplier / operator relationships; benchmarking strategic procurement and category management; capacity overload vs. demand; new frontiers / new procurement strategies; and sustainability in procurement and supply chain. With a one-track focus, this is a highly intensive, content-driven event that includes case studies, presentations and panel discussions over two full days. Plan to attend the Oracle presentation on day one, and the Oracle panel discussion on day two. Oil & Gas experts will be available in the Oracle booth to answer questions. Click here to learn more and register.

    Read the article

  • Smart Grid Gateway and New Meter Data Management released

    - by Anthony Shorten
    Two products have just been released and are available from edelivery.oracle.com: Smart Grid Gateway 2.0.0, a new product to integrate with Smart Grid networks, and Meter Data Management 2.0.1, a new version of the Meter Data Management product. These products are the first to use the brand new version of the Oracle Utilities Application Framework (V4.1). The new framework builds on FW2.2 and FW4.0.2 to add exciting new features (this is just a subset): support for Database Vault; enhancements to Business Object maintenance; a Batch Statistics portal for benchmarking; custom template user exit support; file permissions now consistent with other Oracle products; use of Universal Connection Pool for all database pool access; and the ability to manage the batch data cache. Over the next few weeks I will be publishing articles and updates to existing whitepapers to highlight all the new features.

    Read the article

  • Switching between Discrete and Integrated GPUs

    - by void-pointer
    Hello everyone, I develop CUDA applications on my Alienware M17x portable back-breaker, which has two discrete GTX 285M GPUs and one integrated GeForce 9400M GPU. I can currently switch between them using NVIDIA's software, but I would like the ability to do so within my applications for purposes of benchmarking and general convenience. Apparently this requires the "NDA version" of NVIDIA's Driver API, which I don't know how to obtain. Would using this API be the only way to accomplish what I seek, and if so, how would I obtain it? A solution using Windows APIs would also be acceptable, though less preferable than one that leverages a cross-platform API. I have created a similar thread concerning the matter on NVIDIA's forum, which is down at the time of this writing. Thanks for reading my question; it is much appreciated!

    Read the article

  • GNU make: should -j equal the number of CPU cores in a system?

    - by Johan
    Hi! What is your experience with the make -j flag? There seems to be some controversy over whether the number of jobs should equal the number of cores, or whether you can maximize the build by adding one extra job that can be queued up while the others "work". The question is whether it is better to use -j4 or -j5. And have you seen (or done) any benchmarking that supports one or the other? Thanks, Johan
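
    As a minimal sketch of the kind of measurement the question asks about, the following times a clean build at a few -j values; it assumes a Makefile-based project with a working "clean" target, and the project path and job counts are placeholders.

        import subprocess
        import time

        PROJECT_DIR = "/path/to/project"   # hypothetical project containing a Makefile

        def timed_build(jobs: int) -> float:
            """Run `make clean` followed by `make -j<jobs>` and return the wall-clock build time."""
            subprocess.run(["make", "clean"], cwd=PROJECT_DIR, check=True,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            start = time.perf_counter()
            subprocess.run(["make", f"-j{jobs}"], cwd=PROJECT_DIR, check=True,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return time.perf_counter() - start

        if __name__ == "__main__":
            for jobs in (4, 5):                       # e.g. number of cores vs. cores + 1
                times = [timed_build(jobs) for _ in range(3)]
                print(f"-j{jobs}: best of 3 = {min(times):.1f}s")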

    Read the article

  • Efficient paging with large tables in sql 2008

    - by Kumar
    For tables with 1,000,000 rows and possibly many, many more! I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
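
    For reference, a minimal sketch of the ROW_NUMBER() paging pattern the question refers to, driven from Python via pyodbc; the connection string, table and column names are hypothetical, and this only shows the shape of the query, not anything about its performance.

        import pyodbc

        # Hypothetical connection string and table/column names.
        CONN_STR = "DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes"

        PAGE_SQL = """
        SELECT *
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY t.Id) AS rn, t.*
            FROM dbo.BigTable AS t
        ) AS numbered
        WHERE rn BETWEEN ? AND ?
        """

        def fetch_page(page: int, page_size: int = 50):
            """Fetch one page of rows; deep pages are where the cost of ROW_NUMBER() shows up."""
            first = (page - 1) * page_size + 1
            last = page * page_size
            with pyodbc.connect(CONN_STR) as conn:
                cursor = conn.cursor()
                cursor.execute(PAGE_SQL, first, last)
                return cursor.fetchall()

        if __name__ == "__main__":
            print(len(fetch_page(page=2000)))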

    Read the article

  • Terminology: opposite of "zero copy"?

    - by Mark Harrison
    We're benchmarking some code that we've converted to use sendfile(), the linux zero-copy system call. What's the term for the traditional read()/write() loop that sendfile() replaces? I.e., in our report I want to say "zerocopy is X millisecs, and ??? is Y millisecs." What word/phrase should I use?

    Read the article

  • Running on Windows CE 6 'and' Windows XP

    - by Psychic
    Is it possible to create a small program that will run, without recompiling and without emulators, on both Windows CE 6 AND Windows XP SP3? To my knowledge, this isn't possible: source code needs to be recompiled for the target platform. However, a hardware manufacturer for embedded boards is claiming otherwise. The application isn't anything complex, just a simple benchmarking tool analysing floating-point operations, CPU ticks etc., and displaying the results on a plain GUI.

    Read the article

  • Crowded website simulation on localhost for a PHP/MySQL project

    - by Mac Taylor
    Hey guys, I searched for a while for benchmarking software that can simulate a crowded website with more than 1000 users online, to find leaks in my PHP/MySQL script. I ran my script for a huge community and it wasn't successful enough; RAM usage was very high. Now I need a way to simulate that much usage so I can benchmark my script and optimize it. I am using a XAMPP local server and my project is written in PHP and MySQL. Thanks in advance.
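
    A minimal sketch of a local load generator of the kind being asked for, using only the Python standard library; the URL, user count and per-user request count are placeholders, and one thread per simulated user is only a rough approximation of real visitors.

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost/index.php"   # hypothetical page served by the XAMPP stack
        USERS = 200                          # simulated concurrent users
        REQUESTS_PER_USER = 50

        def one_user(_):
            ok, errors, latencies = 0, 0, []
            for _ in range(REQUESTS_PER_USER):
                start = time.perf_counter()
                try:
                    with urllib.request.urlopen(URL, timeout=10) as resp:
                        resp.read()
                    ok += 1
                    latencies.append(time.perf_counter() - start)
                except Exception:
                    errors += 1
            return ok, errors, latencies

        if __name__ == "__main__":
            with ThreadPoolExecutor(max_workers=USERS) as pool:
                results = list(pool.map(one_user, range(USERS)))
            ok = sum(r[0] for r in results)
            errors = sum(r[1] for r in results)
            lats = [l for r in results for l in r[2]]
            print(f"ok={ok} errors={errors} avg latency={sum(lats) / max(len(lats), 1):.3f}s")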

    Read the article

  • CherryPy and concurrency

    - by RadiantHex
    Hi folks, I'm using CherryPy to serve a Python application through WSGI. I tried benchmarking it, but it seems as if CherryPy can only handle exactly 10 req/sec, no matter what I do. I built a simple app with a 3-second pause, in order to accurately determine what is going on, and I can confirm that the 10 req/sec has nothing to do with the resources used by the Python script. Any ideas?
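
    A minimal sketch of the kind of test app described (a WSGI handler with a 3-second pause) grafted onto CherryPy, with the worker thread pool size made explicit; the port and pool size are placeholders. CherryPy's default pool of 10 worker threads limits how many requests can be handled at once, so it is one setting worth ruling out when throughput seems pinned at a round number.

        import time
        import cherrypy

        def slow_app(environ, start_response):
            """WSGI app with a deliberate 3-second pause, as in the test described above."""
            time.sleep(3)
            body = b"done\n"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        if __name__ == "__main__":
            cherrypy.config.update({
                "server.socket_host": "0.0.0.0",
                "server.socket_port": 8080,
                "server.thread_pool": 50,   # the default is 10 worker threads
            })
            cherrypy.tree.graft(slow_app, "/")
            cherrypy.engine.start()
            cherrypy.engine.block()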

    Read the article

  • Best Embedded SQL DB for write performance?

    - by max.minimus
    Has anybody done any benchmarking/evaluation of the popular open-source embedded SQL DBs for performance, particularly write performance? I have some 1:1 comparisons for SQLite, Firebird Embedded, Derby and HSQLDB (are there others I am missing?) but no across-the-board comparisons... Also, I'd be interested in the overall developer experience with any of these (for a Java app).
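
    As one example of the shape such a 1:1 write benchmark can take, here is a sketch against SQLite using Python's built-in sqlite3 module; the question targets a Java app, so treat this only as an illustration of the measurement (row count, schema and payload size are arbitrary), not as a result for any of the engines listed.

        import sqlite3
        import time

        ROWS = 5_000

        def insert_rows(conn: sqlite3.Connection, one_tx: bool) -> float:
            """Time ROWS inserts, either inside one transaction or committing per row."""
            conn.execute("DROP TABLE IF EXISTS bench")
            conn.execute("CREATE TABLE bench (id INTEGER PRIMARY KEY, payload TEXT)")
            start = time.perf_counter()
            if one_tx:
                with conn:   # one transaction around all inserts
                    conn.executemany("INSERT INTO bench (payload) VALUES (?)",
                                     (("x" * 100,) for _ in range(ROWS)))
            else:
                for _ in range(ROWS):   # commit per row: every insert pays for a sync
                    conn.execute("INSERT INTO bench (payload) VALUES (?)", ("x" * 100,))
                    conn.commit()
            return time.perf_counter() - start

        if __name__ == "__main__":
            conn = sqlite3.connect("bench.db")
            print(f"one big transaction: {insert_rows(conn, True):.2f}s")
            print(f"commit per row:      {insert_rows(conn, False):.2f}s")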

    Read the article

  • Organizing github repository for java 6 and 7

    - by Edmon
    I want to create a GitHub repository that offers benchmarking code for the concurrency features available only in JDK 1.7 (Fork/Join) as well as for the older ones found in JDK 1.6. Offering both options is important for what I need. Does anyone have a recommendation for how I should structure the repository? I was planning on having a repo called … and, under it: jdk17 (build, src, mycode) ... jdk16 (build, src, mycode). Please suggest any alternatives, possibly the use of Maven or other more practical approaches, if any.

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2 Duo Vista 32-bit notebook with 2GB RAM and a SATA HDD to an i5-2520M Win7 64-bit notebook with 4GB RAM and a 128GB SSD (Corsair P3 128). My initial experience was what I expected: fast boot, quick loading of all the applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), overall a great experience. Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP), because I wanted to compare the two. This all still worked great. Then I started to pull my projects into these stacks, and here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (and that was true for both the VirtualBox and the WAMP stack). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:

    All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here. Disk-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.

    Apache benchmarking with Apache Benchmark for a small static HTML file (10 concurrent threads, 500 iterations). Old notebook: min 47ms, median 111ms, max 156ms. New WAMP stack: min 71ms, median 135ms, max 296ms. New LAMP stack (in VirtualBox): min 6ms, median 46ms, max 175ms. Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.

    Apache performance measurement for non-cached PHP content. The PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads and 500 iterations were used for the benchmark. Old notebook: min 0ms, median 39ms, max 218ms. New WAMP stack: min 20ms, median 61ms, max 186ms. New LAMP stack (in VirtualBox): min 124ms, median 704ms, max 2463ms. What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.

    PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using an INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. The databases were identical. 10 concurrent threads and 100 iterations were used for the benchmark. Old notebook: min 1201ms, median 1734ms, max 3728ms. New WAMP stack: min 367ms, median 675ms, max 1893ms. New LAMP stack (in VirtualBox): min 1410ms, median 3659ms, max 5045ms. And the same test with concurrency set to 1 (instead of 10): Old notebook: min 1201ms, median 1261ms, max 1357ms. New WAMP stack: min 399ms, median 483ms, max 539ms. New LAMP stack (in VirtualBox): min 285ms, median 348ms, max 444ms. Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with the second test's results, though I have no idea why the VirtualBox environment performed so badly with higher concurrency.

    Finally, I performed a test of including many PHP files. The application that I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing else, it just includes about 100 files. Concurrency set to 1, 100 iterations. Old notebook: min 140ms, median 168ms, max 406ms. New WAMP stack: min 434ms, median 488ms, max 604ms. New LAMP stack (in VirtualBox): min 413ms, median 1040ms, max 1921ms.

    Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could so heavily outperform both new configurations. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each ajax call, for example). To sum it up, here I am with a brand new high-performance notebook that loads the same page in 20 seconds that my old notebook loads in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I experience these poor performance values? What are my options to remedy this situation?
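
    For anyone repeating these measurements, a small sketch that drives the same kind of Apache Benchmark run (10 concurrent, 500 requests) from Python and pulls the min/mean/median/max connection totals out of ab's output; the URL is a placeholder and ab must be on the PATH.

        import re
        import subprocess

        URL = "http://localhost/test.html"   # hypothetical page under test

        def run_ab(url: str, concurrency: int = 10, requests: int = 500) -> dict:
            """Run ab and return the min/mean/median/max of the 'Total:' connection times."""
            out = subprocess.run(
                ["ab", "-c", str(concurrency), "-n", str(requests), url],
                capture_output=True, text=True, check=True,
            ).stdout
            m = re.search(r"Total:\s+(\d+)\s+(\d+)\s+[\d.]+\s+(\d+)\s+(\d+)", out)
            if not m:
                raise RuntimeError("could not find the 'Total:' connection-times line in ab output")
            return dict(zip(("min", "mean", "median", "max"), (int(v) for v in m.groups())))

        if __name__ == "__main__":
            print(run_ab(URL))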

    Read the article

  • Understanding this error: apr_socket_recv: Connection reset by peer (104)

    - by matthewsteiner
    So, if I do some benchmarking with Apache Benchmark (ab) and use large numbers of requests, then sometimes in the middle of a test I get this error. I don't even know what it means. So how can I fix it? Or is it just something that will happen if the server gets too many hits anyway? The problem is, if I run 10,000 hits, it'll all run perfectly. If I run it again, it'll get to 4000 and hit the error: apr_socket_recv: Connection reset by peer (104). A little about my setup: I have nginx serving static requests and proxying dynamic ones to Apache. The file in question is served from cache by nginx, so I guess it probably has to do with how nginx is handling the requests? Ideas?

    Read the article

  • Merit and demerits for various Linux fiberchannel multipath options

    - by wzzrd
    On our Linux servers, we currently use HP's qla2xxx drivers, because they have multipathing (active/passive) built in. There are, however, various other options, like Red Hat's device-mapper-multipath with the stock qla2xxx drivers (multibus and failover) and things like SecurePath and PowerPath (both of which can do trunking, iirc). Can someone tell me what the merits and demerits of the various options are (if I can ask such a question), besides the obvious fact that the {Secure,Power}Path options cost vast amounts of money? I'm mainly interested in the freely available options, like HP's qla2xxx vs. Red Hat's multipathd and possible other open source solutions, but I would like to hear good reasons to go for the commercial solutions too. UPDATE: I'll be benchmarking various options over the coming few days (the average of 10 runs of iozone for each option, the options being native qla2xxx failover, native qla2xxx multibus, and HP qla2xxx failover). I'll post a summary of the results here for those interested.

    Read the article

  • autobench in ubuntu 8.10

    - by mamathahl
    Hi, I'm using Ubuntu 8.10. I want to do benchmarking using autobench. I could install httperf with the command sudo apt-get install httperf, and I thought I should be able to install autobench the same way using apt-get, but the package was not found. Can anybody please suggest what I should do to make this autobench command work for me in Ubuntu? Any help in this regard will be appreciated. Thanks in advance.

    Read the article

  • How to limit disk performance?

    - by DrakeES
    I am load-testing a web application and studying the impact of some config tweaks (related to disk I/O) on the overall app performance, i.e. the number of users that can be handled simultaneously. But the problem is that I hit 100% CPU before I can see any effect of the disk-related config settings. I am therefore wondering if there is a way I could deliberately limit disk performance so that it becomes the bottleneck and the tweaks I am trying to play with actually start impacting performance. Should I just make the hard disk busy with something else? What would serve this purpose best? More details (probably irrelevant, but anyway): PHP/Magento/Apache, studying the impact of apc.stat. Setting it to 0 makes APC not check PHP scripts for modification, which should increase performance where disk is the bottleneck. Using JMeter for benchmarking.
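
    A minimal sketch of the "make the hard disk busy with something else" idea raised above: a loop of small synchronous writes that keeps the drive saturated with flushes while the JMeter run is in progress. The scratch path and block size are placeholders, and on a fast disk this alone may still not be enough to move the bottleneck away from the CPU.

        import os
        import time

        SCRATCH = "/tmp/disk_noise.bin"    # hypothetical scratch file on the disk under test

        def hammer_disk(seconds: float, block: bytes = os.urandom(64 * 1024)) -> int:
            """Write and fsync small blocks in a tight loop to keep the disk busy."""
            writes = 0
            deadline = time.monotonic() + seconds
            fd = os.open(SCRATCH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
            try:
                while time.monotonic() < deadline:
                    os.write(fd, block)
                    os.fsync(fd)           # force each write to hit the disk, not just the cache
                    writes += 1
            finally:
                os.close(fd)
                os.remove(SCRATCH)
            return writes

        if __name__ == "__main__":
            n = hammer_disk(seconds=30)
            print(f"{n} fsynced writes in 30s")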

    Read the article

  • What kind of server hardware is roughly necessary to serve website to 10k users?

    - by jcmoney
    I've been looking at VPSs, and the specs they offer for entry-level setups seem somewhat surprising to me. I'm new to this topic, but many VPSs offer less than 512MB of memory while my laptop has 4GB, so I am curious: what does it actually take in terms of hardware to serve, say, 10k users (say 5k daily active users)? I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that's not doing anything unusual like serving lots of media. So essentially a very basic Facebook minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.
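
    A rough back-of-envelope, just to turn the user counts above into a request rate that can be benchmarked against; every number here (page views per active user, peak window, peak-to-average ratio) is an assumption, not data from the question.

        # Hypothetical traffic model: translate daily active users into requests per second.
        daily_active_users = 5_000
        page_views_per_user = 20          # assumed
        requests_per_page_view = 5        # assumed: dynamic hits left after static offloading
        peak_hours = 8                    # assumed: most traffic falls in an 8-hour window
        peak_factor = 3                   # assumed: peak rate vs. average rate

        requests_per_day = daily_active_users * page_views_per_user * requests_per_page_view
        average_rps = requests_per_day / (peak_hours * 3600)
        peak_rps = average_rps * peak_factor
        print(f"~{requests_per_day:,} requests/day, ~{average_rps:.0f} rps average, ~{peak_rps:.0f} rps peak")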

    Read the article

  • Benchmarks relevant for a Visual Studio .Net development workstation

    - by user30715
    I am developing a system with Windows 7 64-bit, Visual Studio and SharePoint on a virtual workstation on some kind of VMware server. The system is painfully slow, with VS lagging behind when entering code, IntelliSense lagging, and opening and saving files taking ages compared to a normal budget laptop. As far as I can see, the virtual machine has OK specs and does not seem to be swapping etc., and the IT dept also says that they can't see anything wrong when they're monitoring the system. As long as the problem is not well documented, the IT dept and management do not want to throw money (= upgraded laptops) at us, so I need to show some sort of benchmark. It has been many years since I did any system benchmarking, and I don't know the current benchmark software, so my question is: which benchmark will be most relevant for Visual Studio performance? Not just for compiling fast, but also for reflecting the "responsiveness" of the system. Cheers, user30715

    Read the article

  • a load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario, using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache Benchmark tool to send lots of requests to the servers (once directly against one of the web servers, and once through the load balancer) using the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm going wrong on something?

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37  19.5     34     319
        Processing:    80 6273 1673.7   6907    8987
        Waiting:       47 3436 2085.2   3345    8856
        Total:        115 6310 1675.8   6940    9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
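
    A quick sanity check on how the headline numbers relate to each other, using only figures from the output above:

        # All figures taken from the ab report above.
        complete_requests = 1000
        time_taken_s = 37.030
        concurrency = 200

        requests_per_second = complete_requests / time_taken_s
        time_per_request_ms = concurrency * time_taken_s / complete_requests * 1000

        print(f"{requests_per_second:.2f} req/s")       # ~27.01, matches the report
        print(f"{time_per_request_ms:.0f} ms/request")  # ~7406 ms (mean, per concurrent client)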

    Read the article

  • Apache CPU usage stays at 100% even when there are no requests

    - by Leirith
    Hi, I've been running the Apache HTTP server benchmarking tool (ab) against my new Apache server to test performance. I noticed that with a command like the following: ab -n 100000 -c 1000 http://www.mysite.com/ the CPU is used 100% by the apache2 processes during the testing. When the test concludes, usually with the following error just before the last requests are made: apr_poll: The timeout specified has expired (70007) Total of 99960 requests completed, the CPU usage remains at 100%, and it's all being consumed by Apache. I am using the worker MPM and running PHP with mod_fcgid. Any advice as to why this happens, or what can be done to stop it, would be appreciated.

    Read the article

  • How to monitor nginx proxy cache?

    - by Isaac
    I would like to see which objects get cached by my nginx reverse proxy (with Apache as a backend). So far I could not find a way; the only info I found is that it's not implemented yet. The reason is that I would like to tweak my configuration for best performance without putting too much stress on the server, as the backend is a production system. I know benchmarking would be better, but it's not an option right now. So I thought an alternative measure would be to monitor the cache. Is that possible, and if yes, how? (Apart from patching nginx with the patch mentioned in the link above.)

    Read the article

  • Disk controller speed responsible for slow write speeds?

    - by vizvayu
    I have a question. I'm using ESXi 4.0U1 on an IBM x3200 M2 with an integrated LSI 1064e RAID controller, without any kind of cache. I have 3 250GB hot-swap SATA HDs configured in RAID1E (IME). ESXi works fine and read speeds are quite OK, but write speeds are incredibly slow, never more than 8MB/s, and this is the best-case scenario: benchmarking with iozone streaming writes, using a VMware Paravirtual controller, with only this VM active and no swapping of any kind (total VM memory reserved). I already wrote to IBM, but I don't have any kind of paid support so they didn't even answer, so I'm just wondering... does anybody have any experience with a similar setup? I just want to be sure this is hardware related and can't be fixed with some kind of config option, because I'm thinking of buying a new RAID controller (the Adaptec 2405 looks nice). Thanks again!

    Read the article

  • Will installing an Ultra ATA cable backwards affect performance?

    - by GMMan
    I've recently purchased a hard drive upgrade for my Xbox: a 320GB WD Caviar Blue (WD3200AAJB) and a StarTech.com Ultra ATA/66/100/133 cable (IDE66). Yes, I'm crazy. When it came to installing the cable, it was too short (my fault), and there wasn't enough space between the master and slave ends to reach both the DVD drive and the hard drive. The only thing I could do was install the cable backwards, twisting it quite a bit to make it fit. The upgrade works, but reading the manual for the hard drive I replaced (a 10GB Seagate U Series 5), apparently there is a specific way you have to connect the cable. I don't have that option, so the question comes down to: will my drive performance be at Ultra ATA levels, or is it still performing at original ATA speeds? Is there any way I can test this (benchmarking software for Xbox)?

    Read the article
