Search Results

Search found 35507 results on 1421 pages for 'performance test'.

Page 115/1421 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes them, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
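
    For context: ext3 directories without the dir_index feature are searched linearly, and ext3 never shrinks a directory's on-disk size after files are deleted, so a directory that has ballooned stays slow. A minimal sketch of checking for and enabling hashed directory indexes (the device name /dev/sdX is a placeholder, and the rebuild step requires the filesystem to be unmounted):

        # check whether the ext3 volume already has hashed directory indexes (dir_index)
        tune2fs -l /dev/sdX | grep -i 'features'
        # enable dir_index, then rebuild existing directories offline
        tune2fs -O dir_index /dev/sdX
        e2fsck -fD /dev/sdX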

    Read the article

  • Intel VTune Performance Analyser 9.1 not working on Win 7 64

    - by ian
    Got the 32-bit and 64-bit versions of the Intel VTune Performance Analyser. I installed the 64-bit version (I think the installer was the "EMT" one), and when I go to create a new project, upon clicking the button to select an executable to profile, no file dialog popup shows. I got an old laptop and installed the 32-bit version onto Windows XP and it works fine. Regarding the 64-bit version, I did try changing the compatibility mode to XP SP3, but it still didn't work. Does anyone know how to fix this?

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load on my application (20 concurrent users, running light activity). The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well (screenshot omitted). And yet, after I leave my test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for long (all the new threads that you don't see in the first screenshot were created while I was writing this question), and I've seen many more threads being created. Any idea why these threads are being created?
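
    Not from the original question, just a hedged diagnostic sketch: periodic thread dumps usually reveal what the growing ExecuteThreads are waiting on (stuck JDBC calls, slow back-ends, and so on). The pgrep pattern assumes the server JVM has weblogic.Server on its command line:

        # take a thread dump of the WebLogic JVM (jstack ships with the JDK)
        PID=$(pgrep -f weblogic.Server)
        jstack "$PID" > "threads-$(date +%H%M%S).txt"
        # count ExecuteThreads in successive dumps to confirm the growth
        grep -c ExecuteThread threads-*.txt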

    Read the article

  • Recycle application pool, warm-up scripts - performance tuning in a SharePoint WCM site

    - by joel14141
    I am trying to tune a public-facing WCM site we have in SharePoint, and I have the following doubts. By default, application pools are set to recycle themselves at 2 AM each night, and because of that we need warm-up scripts. But as I was googling this topic I found mixed reactions: some MVPs say it is not advisable to recycle the application pool daily, and some say otherwise, so I am confused. If I am not recycling the application pool, then I don't have to use warm-up scripts. But since my site is public-facing and has visitors from around the globe, is it advisable to recycle it daily? The recycle will affect the site's performance, and even if I run warm-up scripts afterwards I don't think it would be as good as it should be. Any advice on that?
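
    Purely as an illustration (SharePoint warm-up scripts are more commonly written in PowerShell, and the URLs here are placeholders), a warm-up can be as simple as requesting the key pages right after the recycle, so the first real visitor does not pay the start-up cost:

        #!/bin/sh
        # hypothetical warm-up: hit each important page once after the 2 AM recycle
        for url in http://www.example.com/ http://www.example.com/Pages/Default.aspx; do
            curl -s -o /dev/null -w "%{http_code} %{time_total}s $url\n" "$url"
        done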

    Read the article

  • Maven Cobertura: unit tests fail but the build succeeds

    - by Pavel Drobushevich
    Hi all, I've configured Cobertura code coverage in my pom:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>cobertura-maven-plugin</artifactId>
          <version>2.4</version>
          <configuration>
            <instrumentation>
              <excludes>
                <exclude>**/*Exception.class</exclude>
              </excludes>
            </instrumentation>
            <formats>
              <format>xml</format>
              <format>html</format>
            </formats>
          </configuration>
        </plugin>

    and I start the tests with the following command: mvn clean cobertura:cobertura. But if one of the unit tests fails, Cobertura only logs this information and doesn't mark the build as failed:

        Tests run: 287, Failures: 1, Errors: 1, Skipped: 0
        Flushing results...
        Flushing results done
        Cobertura: Loaded information on 139 classes.
        Cobertura: Saved information on 139 classes.
        [ERROR] There are test failures.
        .................................
        [INFO] BUILD SUCCESS

    How do I configure the build to be marked as failed when a unit test fails? Thanks in advance.
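
    A hedged workaround rather than a Cobertura setting: run the normal test phase first, so Surefire fails the build (and the command exits non-zero) on a test failure, and only generate the coverage report when the tests pass:

        # the report step runs only if `mvn clean test` succeeds
        mvn clean test && mvn cobertura:cobertura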

    Read the article

  • Using Google Webmaster & Analytics, what data to look at to improve website performance?

    - by Rob
    Using data from Google Analytics and Webmaster Tools, what data should I be looking at to improve my website's performance? I want to improve the SEO, usability and just general performance of my website. EDIT: It's a portfolio website that we've done the initial SEO for; we have also optimised all images etc. and made the site as fast as possible. What kinds of things should I be looking out for in the Analytics and Webmaster data to improve performance for both the SEO and each individual page?

    Read the article

  • xinetd vs iptables for port forwarding performance

    - by jamie.mccrindle
    I have a requirement to run a Java-based web server on port 80. The options are: a web proxy (Apache, nginx, etc.), xinetd, iptables, or setuid. The baseline would be running the app using setuid, but I'd prefer not to for security reasons. Apache is too slow, and nginx doesn't support keep-alives, so new connections are made for every proxied request. xinetd is easy to set up but creates a new process for every request, which I've seen cause problems in a high-performance environment. The last option is port forwarding with iptables, but I have no experience of how fast it is. Of course, the ideal solution would be to do this on a dedicated hardware firewall / load balancer, but that's not an option at present.
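
    For reference, a minimal sketch of the iptables approach (the back-end port 8080 is a placeholder for whatever unprivileged port the Java server actually listens on):

        # redirect inbound port 80 to the unprivileged port the Java server binds to
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # optional: do the same for connections that originate on the host itself
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080

    Because the redirect happens in the kernel's NAT table, there is no extra per-connection process, which is the overhead the question worries about with xinetd.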

    Read the article

  • Where can I find project repositories with continuous testing?

    - by Jenny Smith
    I am interested in studying some test logs from different projects, in order to build and test an application for school. I need to analyze the parts of the code which are tested, the bugs which appeared in those parts and eventually how they were resolved. But for this I need some repositories from different (open source) projects. Can someone please help me with ideas or links or any kind of test logs which might be useful? I really need some resources, so any help is appreciated.

    Read the article

  • Increase the compression performance of VPN

    - by Martin
    I am currently switching from a system with HPN-SSH tunnels and compression enabled to something VPN-based. I have tried tinc and n2n so far; Hamachi requires a library I do not have. In my primitive benchmarks I am not satisfied with the achievable bandwidth compared to the SSH tunnels. In tinc the low LZO setting performed best, but compression is only available in UDP mode. Ideally I would like to have a TCP-based VPN with multi-threaded compression. Can you suggest some ideas for how to increase the performance? Would it be possible to somehow put a compression filter in front of the tun interface? Or are there any VPN implementations that might be better suited to my needs (fast compression, TCP-based, switch mode, does not have to be super-secure)? I would consider tunnelling Ethernet over SSH, but according to some articles it is not advisable.
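
    As a point of comparison only, the Ethernet-over-SSH option mentioned above can be sketched roughly as follows (the host name, interface numbers and addresses are placeholders; it keeps SSH's compression but inherits the TCP-over-TCP behaviour those articles warn about, and needs root plus PermitTunnel enabled on the server):

        # create tap0 on both ends over a compressed SSH connection (layer-2 / switch mode)
        ssh -C -o Tunnel=ethernet -w 0:0 root@vpn.example.com
        # then configure or bridge the tap interfaces on each side, e.g. on the client:
        ip link set tap0 up && ip addr add 10.0.0.1/24 dev tap0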

    Read the article

  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/foo= and #bar/bar= instance methods on Config:

        module Configurator
          class Config
            def initialize()
              @options = {}
            end

            def self.attr_option(*args)
              args.each do |a|
                if not self.method_defined?(a)
                  define_method "#{a}" do
                    @options[:"#{a}"] ||= {}
                  end
                  define_method "#{a}=" do |v|
                    @options[:"#{a}"] = v
                  end
                else
                  throw Exception.new("already have attr_option for #{a}")
                end
              end
            end
          end
        end

    So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined in Config. So a subsequent test will fail when it shouldn't, because foo is already defined:

        it "should support a specified option" do
          c = Configurator::Config
          c.attr_option :foo
          # ...
        end

        it "should support multiple options" do
          c = Configurator::Config
          c.attr_option :foo, :bar, :baz   # Error! :foo already defined by a previous test.
          # ...
        end

    Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time has increased significantly: it takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of the available storage, so I removed unnecessary files (got to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance, i.e. faster booting (my system has a limited time to boot up)? It would be great if this optimization could be done remotely, since I have no physical access to the boards.

    Read the article

  • recommendations for a lightweight linux distribution for a test server

    - by Jack
    I'm planning on setting up a test server to experiment with some application servers (Tomcat/JBoss/...) and with some portals. The machine I've set aside for this is lightweight CPU/GPU-wise (Atom D510, 4 GB RAM, 500 GiB HDD, onboard GPU), but it should suffice for most things; I'm more interested in the stability of JBoss/Tomcat for my purposes than in performance. However, I'm having a bit of trouble picking an appropriate distribution in terms of size, performance, setup time and security, since it seems I can't sneeze without another distribution popping up. I've been thinking about going for Fedora, since I've read that that distribution has been optimized for Atom, but I'm not really familiar with it. My experience with Linux has mostly been limited to Ubuntu and some tinkering with Puppy Linux. I'm not afraid to get my hands dirty on the command line. I'm not planning on starting a discussion per se; I'm mostly after the pros/cons that people have encountered with some distributions.

    Read the article

  • ASA Slow IPSec Performance

    - by Brent
    I have an IPsec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the VPN. CPU on the device is 13%, memory is at 408 MB, and there are 2 active VPN sessions, so the load on the device is particularly low. A Wireshark capture of a file transfer between the two hosts over the VPN (screenshot omitted) shows a large number of header checksum failures, which is alarming, but I am not sure what to check now. iperf is showing around 4-5 Mbit/s with differing TCP window sizes. Show run on the ASA: http://pastebin.com/uKM4Jh76 Show cry accelerator stats: http://pastebin.com/xQahnqK3

    Read the article

  • server performance metrics report and practicality

    - by Anjesh
    I need to prepare a web server (Apache/PHP) performance report containing important metrics like CPU usage, disk I/O and memory usage on a per-user basis. A couple of domains are hosted on the same server, and they run as separate users using FastCGI. The reason is that sometimes some hosted applications use a lot of CPU, making the server slow for the other applications (running as separate users). I am planning to develop scripts for this, as I can't seem to find any simple utilities for the purpose. The script will take snapshots of the per-user metrics at defined intervals, say every 15 minutes, and record them; any abnormalities will be reported via email. How practical is that? It would also be interesting to know what else needs to be recorded.
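
    A minimal sketch of the kind of snapshot script described, assuming a cron entry every 15 minutes (the log path is a placeholder, and it only captures instantaneous %CPU and resident memory per user, not disk I/O):

        #!/bin/bash
        # aggregate %CPU and resident set size per user and append one line per user
        ts=$(date '+%F %T')
        ps -eo user:20,pcpu,rss --no-headers \
          | awk -v ts="$ts" '{cpu[$1] += $2; rss[$1] += $3}
              END {for (u in cpu) printf "%s %s cpu=%.1f%% rss=%dkB\n", ts, u, cpu[u], rss[u]}' \
          >> /var/log/per-user-usage.log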

    Read the article

  • Which JavaScript graphics library has the best performance?

    - by DNS
    I'm doing some research for a JavaScript project where the performance of drawing simple primitives (i.e. lines) is by far the top priority. The answers to this question provide a great list of JS graphics libraries. While I realize that the choice of browser has a greater impact than the library, I'd like to know whether there are any differences between them, before choosing one. Has anyone done a performance comparison between any of these?

    Read the article

  • How do you demonstrate performance in paired-programming environments?

    - by NT3RP
    Performance reviews have come up recently at my work, and I was put in an interesting position. Our team does a lot of pair programming, which has a tendency of averaging out the skill differences between team members (especially considering we rotate pairs). Generally, when doing performance reviews, you look back at the work you've done, and demonstrate what you've accomplished, and how you've exceeded expectations to try to negotiate a raise or other benefits. How do you demonstrate (or even measure) individual performance in an environment like this?

    Read the article

  • Slow query with unexpected scan

    - by zerkms
    Hello, I have this query:

        SELECT *
        FROM SAMPLE SAMPLE
        INNER JOIN TEST TEST ON SAMPLE.SAMPLE_NUMBER = TEST.SAMPLE_NUMBER
        INNER JOIN RESULT RESULT ON TEST.TEST_NUMBER = RESULT.TEST_NUMBER
        WHERE SAMPLED_DATE BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'

    The biggest table here is RESULT, which contains 11.1M records; the other two tables have about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change '2010-03-17 09:00' to '2010-03-17 10:00', I get about 40 records, it executes in 300 ms, and the plan shows a clustered index seek. If I replace * in the SELECT clause with RESULT.TEST_NUMBER (covered by the index), then everything becomes fast in the first case too. This points to disk I/O issues, but it doesn't explain the change of plan. So, any ideas?

    Read the article

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger Wi-Fi signal in Windows than in Ubuntu. When I boot Ubuntu, it connects to the network with a strong signal and then loses the signal very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.

    Read the article

  • How to benchmark kernel (-Os vs -O2)

    - by NightwishFan
    It seems logical to me that, on a 64-bit kernel, compiling to optimize for size might help overall (my distro of choice uses -O2). It has the benefits of more registers and memory, and perhaps less cache contention, than normally optimized code. I have a kernel compiled like this and it seems excellent. However, my question is: how can I prove this? I like using Phoronix for "real world" benchmarks, so I would prefer test cases like that. What should I pick to test? Does anyone else have any alternatives? Thank you very much in advance.
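
    One hedged way to do this with the Phoronix Test Suite (the profile names below are just examples of build- and CPU-heavy tests; any mix that resembles the real workload would do) is to boot each kernel in turn, record which one is running, and run the same suite under both:

        # repeat once under the -O2 kernel and once under the -Os kernel
        uname -r                                   # note the kernel being benchmarked
        phoronix-test-suite benchmark pts/build-linux-kernel pts/compress-7zip
        # results are saved under ~/.phoronix-test-suite/test-results/ for comparison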

    Read the article

  • (C++) What's the difference between these overloaded operator functions?

    - by cv3000
    What is the difference between these two ways of overloading the != operator below? Which is considered better?

        class Test {
        private:
            int iTest;
        public:
            bool operator==(const Test &test) const;
            bool operator!=(const Test &test) const;
        };

        bool Test::operator==(const Test &test) const { return iTest == test.iTest; }

        // overload function 1
        bool Test::operator!=(const Test &test) const { return !operator==(test); }

        // overload function 2
        bool Test::operator!=(const Test &test) const { return iTest != test.iTest; }

    I've just recently seen function 1's syntax for calling a sibling operator function, and I wonder if writing it that way provides any benefits.

    Read the article

  • Load testing nginx inside AWS

    - by andy
    I'm trying to load test nginx running on AWS. I need to try to optimise it to handle 1 Gbit/s of inbound traffic. Currently I've got it to peak at 85 Mbit/s by running nginx on an m1.large with 4 other machines hitting it using ab with -i (HEAD requests), -k (keep-alives), -r (ignore failed requests), -n 500000 and -c 20000. I'm struggling to generate more than 85 Mbit/s of traffic from 4 machines, yet when I scp a large file I get nearly 0.25 Gbit/s of traffic going over the network. Are there any tools or approaches I could use to load test nginx that might generate more load? I'm only interested in inbound traffic, so perhaps a DoS tool could help if it throws away responses? I'm hitting a very small (40-byte) static asset, and have peaked at handling 50k concurrent connections and 25k reqs/s when using just a single load generator machine.
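
    Not mentioned in the question, just one commonly suggested alternative to ab: wrk is multi-threaded, keeps connections open, and usually pushes more traffic from a single generator (the URL below is a placeholder for the small static asset):

        # 8 threads, 10,000 open connections, 60 seconds, with latency statistics
        wrk -t8 -c10000 -d60s --latency http://target.example.com/tiny.gif
        # run one instance per load-generator machine and add up the reported throughput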

    Read the article

  • Rails Functional Test Failing Due to Association

    - by Koby
    I have an Account model that holds some basic account info (account name, website, etc.). I then have a User model with the following in app/models/user.rb:

        belongs_to :account

    I also have the following in my routes.rb:

        map.resources :account, :has_many => [:users, :othermodel]

    The problem I'm facing is that the following test is failing:

        test "should create user" do
          assert_difference('User.count') do
            post :create, :user => { }   # this is the line it's actually failing on
          end
          assert_redirected_to user_path(assigns(:user))   # it doesn't get here yet
        end

    The error it gives is "Can't find Account without ID", so I kind of understand WHY it's failing: it doesn't have the account object (or account_id, as it were) to know which account to create the user under. I have tried variations of the following but I am completely lost:

        post :create, :user => { accounts(:one) }            # I have the 'fixtures :accounts' syntax at the top of the test class
        post :create, [accounts(:one), :user] => { }
        post :create, :user => { accounts(:one), #other params for :user }

    Like I said, just about every variation I could think of. I can't find much documentation on doing this, and this might be why people have moved to factories for test data, but I want to understand what comes standard in Rails before moving on to other things. Can anyone help me get this working?

    Read the article

  • Performance & Security Factors of Symbolic Links

    - by Stoosh
    I am thinking about rolling out a very stripped-down version of release management for some PHP apps I have running. Essentially, the plan is to store each release in /home/release/1.x etc. (exported from a tag in SVN), then do a symlink to /live_folder and change the document root in the Apache config. I don't have a problem setting all this up (I've actually got it working at the moment); however, I'm a developer with just basic knowledge of the server-admin side of things. Is there anything I need to be aware of from a security or performance perspective when using this method of release management? Thanks
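
    For reference, a minimal sketch of the symlink swap described above (paths are placeholders; the mv -T step makes the switch atomic, so in-flight requests never see a half-updated path):

        # export the new release, then repoint the live symlink at it
        ln -s /home/release/1.2 /home/release/live.new
        mv -T /home/release/live.new /home/release/live   # atomically replaces the old symlink
        # Apache's DocumentRoot keeps pointing at /home/release/live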

    Read the article
