Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • SQLIO help decipher output

    - by SQL Learner
    When load testing on a SQL Server box, I am using the following (the test file is 25 GB):
        sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt
        sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt
    Can anyone help me decipher the output? I do not understand the min and average latency. What do these numbers mean? IOs/sec: 10968.80 MBs/sec: 685.55 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 5 Max_Latency(ms): 21
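
    A quick sanity check on how those counters relate (685.55 / 10968.80 works out to exactly 0.0625 MB, i.e. 64 KB per I/O, so these figures presumably come from the -b64 run):

        10968.80 IOs/sec x 64 KB per I/O = 685.55 MB/sec

    Min_Latency, Avg_Latency and Max_Latency are simply the fastest, mean and slowest completion times, in milliseconds, of the individual I/Os issued during that run.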

    Read the article

  • Restart single uWSGI application (when it's in emperor mode)

    - by Oli
    I'm running uWSGI in emperor mode to host a bunch of Django sites based on their individual configs. These are supposed to reload when uWSGI detects a change in the config file, and this largely works when I just touch the relevant uwsgi.ini file. But occasionally I'll mess something up in the Django site and the server won't load. Yeah, yeah, I should be testing better, but that's not really the point. When this happens, uWSGI seems to mark the site as dead and stops trying to run it (which seems to make sense). Even after I fix the underlying issue, no amount of touching will get that site's uWSGI process up and running. I have to reload the whole uWSGI server (knocking dozens of sites out at once for a few seconds). Is there a way to force uWSGI to just reload one of its sites?
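
    One thing that might be worth trying (a sketch with hypothetical paths, relying on the documented emperor behaviour of tearing a vassal down when its config file disappears and spawning it again when the file reappears):

        mv /etc/uwsgi/vassals/brokensite.ini /tmp/       # emperor stops tracking the dead vassal
        sleep 2                                          # give it a moment to reap the process
        mv /tmp/brokensite.ini /etc/uwsgi/vassals/       # emperor spawns the site again from scratch

    That respawns just the one site without touching the emperor or the other vassals.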

    Read the article

  • SonicWALL NetExtender - Client Install?

    - by JArmani
    We are about to push out a new VPN solution for our organization. One of the beautiful things we saw in SonicWALL's SSL-VPN was the thin, browser-based solution of NetExtender. Does anybody have experience with this? My specific concern is that, at least in Windows 7 during testing, it prompts for admin credentials to install the ActiveX NetExtender plugin, which is standard for installing anything in a Windows domain environment. But doesn't this mean I actually have to go in and install the client on all domain laptops that will be using the VPN in the field? They wouldn't actually be able to simply visit the site and run the client, as advertised? By the way, we're using the SonicWALL NSA 3500 device. We do have ManageEngine's Desktop Central, which can push out software installations, but it usually has to be in the form of a .MSI package. Is there any solution to this, besides hitting up all my organization's computers?

    Read the article

  • How to set the network profile of Windows 7 via group policy?

    - by Ricket
    We are deploying client computers, and in testing we noticed that the first time the user logs into the computer, it asks them whether the location is a home, work, or public location. We are worried that some users in our workplace might misread it (or not read it at all) and click Public, likely denying us access to the computer and messing up security settings and such. Can we set our network to be a "Work Network" location via group policy or some other mechanism of our Windows domain, so that the user is not prompted when connected to our network? Also, these are laptops, so we don't want every network they connect to to be set as a work network. We have several access points (wired and three wireless) which our users often switch between; I'm not yet sure whether it re-prompts with each access point, but I have the feeling it will, and I would like all of these to be set to the Work profile type.

    Read the article

  • How to install an LDAP proxy

    - by Jean-Claude
    I have to install an LDAP proxy on a compute cluster frontend. The idea is to keep the compute nodes from making too many requests to the campus LDAP server. How can I install this so that it works with the school's LDAP? The frontend OS is RHEL 6.2. I found that I have to install the LDAP server and configure it as a proxy, but all I can find are examples of /etc/openldap/slapd.conf configuration, and after testing different configurations I got no results. Furthermore, according to the RHEL 6 Deployment Guide, this config file is obsolete: OpenLDAP no longer reads its configuration from the /etc/openldap/slapd.conf file. Instead, it uses a configuration database located in the /etc/openldap/slapd.d/ directory. Any help is welcome. Thank you

    Read the article

  • How to get the spec of a machine on Linux?

    - by machinePurchaser
    I am interested in getting the spec of a machine, because I am thinking of getting a similar server. What I am mostly interested in knowing is the number of cores / CPUs / etc., the amount of memory, the speed of the CPUs, the CPU cache size, and any other detail that is important for performance. My question is two-fold: Which parameters should I be interested in other than the ones I specified above? Is there an easy way to read them off the machine in Linux? cat /proc/cpuinfo reveals a lot about the CPUs, for example... What about memory (I would rather not rely on top), etc.?
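
    A minimal sketch of commands that should pull most of this from a stock Linux box (lscpu and dmidecode may need the util-linux and dmidecode packages installed, and dmidecode needs root):

        lscpu                          # sockets, cores, threads, clock speed and cache sizes
        cat /proc/cpuinfo              # per-core details, model name and CPU flags
        grep MemTotal /proc/meminfo    # total RAM without relying on top
        sudo dmidecode -t memory       # DIMM sizes, speeds and free slots
        sudo dmidecode -t processor    # CPU model and socket details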

    Read the article

  • Mitigating the 'firesheep' attack at the network layer?

    - by pobk
    What are the sysadmins' thoughts on mitigating the 'firesheep' attack for servers they manage? Firesheep is a new Firefox extension that allows anyone who installs it to sidejack any session it can discover. It does its discovery by sniffing packets on the network and looking for session cookies from known sites. It is relatively easy to write plugins for the extension to listen for cookies from additional sites. From a systems/network perspective, we've discussed the possibility of encrypting the whole site, but this introduces additional load on servers and interferes with site indexing, assets and general performance. One option we've investigated is to use our firewalls to do SSL offload, but as I mentioned earlier, this would require the whole site to be encrypted. What are the general thoughts on protecting against this attack vector? I've asked a similar question on Stack Overflow; however, it would be interesting to see what the systems engineers think.

    Read the article

  • Fresh install of Xenserver 6.2 , cannot load tools in Guest Win7

    - by Erik
    I've just started testing the XenServer 6.2 offering. It's awesome so far. I've even loaded all the patches and hotfixes, and started a Windows 7 guest image. I want to install the tools, but whenever I click the install tools option I'm taken to my VM console and nothing loads. It's a brand new guest, and most of the advice out there is for those with previous versions of the tools loaded. Any ideas how to fix this?

    Read the article

  • Connection shortcut doesn't work at startup - Win 7

    - by kikio
    Hello. I want to connect automatically to a network with a dial-up connection at Windows startup. So after I created a new connection, I created a shortcut to it and placed it in the "Startup" folder (in the Start menu). But after restarting my system, Windows 7 came up without starting to connect to my network (that shortcut in Startup didn't work!). To test, I placed a shortcut to Mozilla Firefox in the Startup folder, and it did start when Windows booted. My Windows is Windows 7 Ultimate. What can I do? Please help me!
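
    One workaround that might be worth testing (a sketch; the connection name and credentials are placeholders): instead of the connection shortcut, put a small batch file in the Startup folder that dials the connection with rasdial.

        rem dialup.bat - drop this into the Startup folder
        rasdial "My Dialup Connection" myusername mypassword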

    Read the article

  • MSSQL, ASP.NET, IIS. SQL Server perfmon log question

    - by Datapimp23
    Hi, I'm testing a web application that runs on a hypervisor. The database server and the web server are separate VMs that run on the same hypervisor. We did some tests and the functions perform OK. I would like you to look at a screenshot of a perfmon log of the SQL 2005 server at the busiest moment. The web server perfmon log looks fine, and it's obvious that we have enough resources to present the page in a timely fashion. http://d.imagehost.org/view/0919/heavyload http://d.imagehost.org/0253/heavyloadz.jpg (zoomed out) The striped blue line maxing out is the Processor Queue Length (scale 100.0). The green line at around value 30 is Available MBytes (scale 0.01). The rest of the counters are visible on the screenshot. The SQL Server machine has no CPU limitations on the hypervisor resources and has 5 vCPUs and 5 GB RAM. Can someone help me interpret this log? Thanks
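
    It may also help to capture the raw numbers rather than reading them off a screenshot; a sketch of a command-line counter log (assuming the standard English counter names):

        typeperf "\System\Processor Queue Length" "\Memory\Available MBytes" "\Processor(_Total)\% Processor Time" -si 5 -sc 120 -f CSV -o sqlbox.csv

    As a commonly quoted rule of thumb, a Processor Queue Length that stays above roughly 2 per logical CPU (so around 10 for 5 vCPUs) while % Processor Time is also high points at CPU pressure rather than memory.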

    Read the article

  • Sudden slow read & write speed on all IO

    - by user23392
    I have a custom-built rig that has 2 storage drives. For the OS: Western Digital 1.0TB HARD DR 64MB. For other stuff: Corsair Performance 3 128GB (SSD) [expected read speed: 400 MB/s]. The system was incredibly fast for a couple of months, then one day I was playing a game and it started to get buggy (some sounds and objects disappearing). I stopped the game and the system seemed to be unstable, so I had to shut it down. The next morning I couldn't start it up; it was saying something about a corrupt device. I formatted both disks and installed a fresh copy of Windows. All I can say is that since that day the system has never been like before: it takes 10 minutes to boot up (the icons and desktop slowly appear), but once it's done the slowness isn't as noticeable. Here's my benchmark on the HDD (read speed - write speed): And the SSD: Anyone know what the issue could be?

    Read the article

  • How to find malicious IPs?

    - by alfish
    Cacti shows irregular and pretty steady high bandwidth to my server (40x the normal), so I guess the server is under some sort of DDoS attack. The incoming bandwidth has not paralyzed my server, but it is of course consuming bandwidth and affecting performance, so I am keen to figure out the possible culprit IPs, add them to my deny list or otherwise counter them. When I run: netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what the best way is to monitor the requests that each IP makes, in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
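
    Since the heaviest connection counts turn out to be the CDN, it may be more useful to count requests per client IP in the web server's access log; a sketch, assuming a combined-format log at a hypothetical path (if the CDN terminates the connections, the real client will be in a header such as X-Forwarded-For rather than in column 1):

        # top 20 client IPs by request count
        awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

        # cap concurrent connections per source IP at the firewall (the threshold is only an example)
        iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j DROP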

    Read the article

  • Client PC not booting when certain TFT plugged in - TFT or graphics card failure?

    - by Chake
    Here comes something quite strange: on a client machine (Dell Vostro 420) we experienced problems when booting. When turned on, the machine beeps normally but doesn't display anything and doesn't boot. After some testing I found out that this only happens if one (of the two) monitors (Iiyama ProLite E2472HDD) is plugged in while booting. If the other monitor (TFT 2) is plugged in, everything is fine. Here is a small illustration; TFT 1 is the bad guy:
        TFT 1 | TFT 2 | failure
          x   |   x   |    x
          x   |       |    x
              |   x   |
    After the BIOS phase I can safely plug in TFT 1 and everything works just fine. The question is what can be done to avoid this behavior: Change the monitor? (Iiyama ProLite E2472HDD) Change the graphics card? (GeForce 9800 GT) Other suggestions?

    Read the article

  • Evaluate a vendor laptop before deployment to user?

    - by NetWarrior
    I get numerous requests from executives and users for new, smaller laptops for travel purposes. Most of my evaluation is based on whether or not a laptop can run certain applications, mainly Lotus Notes, Office, and video. Most of the laptops include the Windows 7 OS and are fully loaded with RAM, a high-end processor and an integrated graphics card. My boss wants me to document the usefulness and performance of the laptop. I'm just a little confused about how to set up a document that can be used by members of the IT department for future evaluations.

    Read the article

  • TCP 30 small packets per second flood connection with server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio for the software) over TCP. My connection to the server is already really poor - 120 ms ping on average. I found that when I start to send packets of 2 bytes (without the TCP header) at 30 packets/s, the ping grows to 170-200 on average. I think that is really bad, and that my poor connection and poor cloud provider are the reason for this high ping without any real load. What do you think? (I tested my software - it can handle about 50k small packets/s, so the software is not the problem.) I measure my ping through the Flash client - I send a packet with a timestamp and it is immediately sent back from the server to the client.
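
    For scale, a back-of-the-envelope check (not a diagnosis) shows that raw bandwidth cannot be what pushes the ping up:

        30 packets/s x (2 B payload + ~40 B TCP/IP headers) = roughly 1.3 KB/s

    With packets that small, per-packet effects - for example Nagle's algorithm interacting with delayed ACKs on either end - are a more likely suspect than the link capacity itself.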

    Read the article

  • When load balancing, must all copies of static web page be exactly the same?

    - by Gilles Blanchette
    I am used to getting answers for everything on the web, but not this time... Yesterday I enabled Amazon's weighted DNS functionality to load balance 7 websites between two different IP addresses (split 50%-50%). Both servers run IIS 8.5, and the sites run well on both sides. Today I found out that Google Webmaster Tools is reporting errors fetching the robots.txt file, with close to 50% of the access attempts failing. The robots.txt file is fine and accessible (even via Google's URL testing page) on both servers. Let's say the current version of the static web pages is on the first computer and an updated version of the same web pages is on the second computer. Can that be the problem? When load balancing, can static web pages be slightly different from one host server to the other? Thank you for your help
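
    A quick way to check whether the two origins really serve identical content would be to hit each IP directly with the site's host name and compare checksums (a sketch; the domain and IP addresses are placeholders):

        curl -s --resolve www.example.com:80:203.0.113.10 http://www.example.com/robots.txt | md5sum
        curl -s --resolve www.example.com:80:203.0.113.20 http://www.example.com/robots.txt | md5sum

    If the hashes differ, or one request stalls or errors, the failing origin should show up straight away.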

    Read the article

  • Missing 16:10 resolutions with Nvidia drivers (Can't add resolutions)

    - by Wuinny
    Hello, I have a laptop with an Nvidia 9650M GT and used the drivers that Windows 7 came with. They work fine, but Metro 2033 tells me that I have to upgrade my drivers to play the game, so I did. But since I did a clean install of the new Nvidia drivers, I only have 1440x900 or 4:3 resolutions. I usually played at 1280x800 or 1184x740 (for performance reasons). With the "old" drivers I was able to create a custom resolution (1184x740) in the Nvidia Control Panel, but now when I try, it tells me that "my monitor cannot support this resolution". When I insist, it works, but as soon as I shut down my computer I have to recreate it. Does anyone have a fix? Thank you

    Read the article

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending a previous one [1]. I have a compressed file and stream it into a Python program, e.g. bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt parse.py can read from stdin continuously and print to stdout. My EC2 instance has 16 cores, but the top command shows a load average of only 3 to 4. In ps, I am seeing a lot of stuff like: sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null'; I know I can use -a in.txt to improve performance, but in my case I am streaming from bz2 (I cannot extract it since I don't have enough disk space). How can I improve the efficiency in my case? [1] Gnu parallel not utilizing all the CPU
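
    One knob that might be worth experimenting with (a sketch): with --pipe, GNU parallel chops stdin into blocks before handing them to the workers, and a larger --block size means fewer, bigger chunks and less per-chunk overhead.

        bzcat data.bz2 | parallel --no-notice -j16 --pipe --block 10M python parse.py > result.txt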

    Read the article

  • How do I log file system read/writes by filename in Linux?

    - by Casey
    I'm looking for a simple method that will log file system operations. It should display the name of the file being accessed or modified. I'm familiar with powertop, and it appears this works to an extent, in as much as it shows the user files that were written to. Are there any other utilities that support this feature? Some of my findings:
    powertop: best for write-access logging, but more focused on CPU activity
    iotop: shows real-time disk access by process, but not the file name
    lsof: shows the open files per process, but not real-time file access
    iostat: shows the real-time I/O performance of disks/arrays, but does not indicate the file or process
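
    Two more tools that might fit, sketched below (fatrace and inotify-tools are separate packages on most distributions, and fatrace needs root):

        sudo fatrace                                     # system-wide: prints process name, operation and file name as events happen
        inotifywait -m -r -e access,modify,create /srv   # watch one directory tree and log per-file events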

    Read the article

  • How is it possible for SSD drives to have such good latency?

    - by tigrou
    The first time I read some information about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory generally has low bandwidth and high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do some data striping (like RAID 0) across several chips (done by the controller). What I don't understand is how SSD drives have such low latency when they are using NAND chips (or at least much lower than what a typical single NAND chip would manage). EDIT: I think I under-estimated NAND chip capabilities. USB drives, while powered by NAND, are mostly limited by the USB protocol (which has pretty high latency) and the USB controller. That explains their poor performance in some cases.
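
    For a rough sense of scale (ballpark figures, not from any particular datasheet):

        HDD random read : roughly 5-15 ms   (head seek plus rotational delay)
        NAND page read  : roughly 25-100 us (no mechanical movement)

    So even a single NAND chip answers reads around two orders of magnitude faster than a disk; the chip-level parallelism mainly buys bandwidth and hides the much slower program/erase operations on writes.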

    Read the article

  • Minimize writes to SSD disks with Windows 7

    - by mark
    Most people use their SSD as their primary system installation disk with Windows 7. Windows 7 already has a lot of optimizations for SSDs, both in terms of performance and lifetime. Minimizing writes increases the lifetime of SSDs, so post each suggestion as an answer and let others vote on them. Update: I'm not sure anymore that minimizing writes is a good thing [tm]; hard facts showing that SSDs will degrade within a noticeable time are missing, and it seems this can create a bit of FUD about the functionality of the SSD. In other words: I question the usefulness of my wiki question.

    Read the article

  • Virtualbox, slow upload speed using nat

    - by user1622094
    I'm running VirtualBox on an Ubuntu 12.04 server (host) and I'm running Windows 7 as the guest OS. I'm using the (virtual) Intel PRO/1000 MT network card. I get good download performance using both NAT and bridged network settings, but upload speed is really slow using NAT. I have tried this on two different servers, one brand new and one several years old; both gave the same result. If you can explain this behavior or have ideas for further tests I can perform, please let me know.

    Read the article

  • Is running multiple databases on login going to make my Mac really slow?

    - by Walrus the Cat
    Some time ago, I installed Postgres and the launch agent that causes it to run when I log in. Just now, I did the same thing for Mongo, and I was just about to do it for Couch. I don't remember if I ever did it for MySQL, but I probably did. Mongo and Couch are just 'when I have time to look into it' sort of things, but I don't want to have to remember to start them when I do. I have a 2.4 GHz processor and 8 GB of RAM. Is this sort of behavior going to significantly impact my computer's performance? Should I be scrambling to uninstall all but the database I'm currently using, or can I install all the things and run them all the time? Thanks
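
    One way to get an actual number instead of guessing is to check how much resident memory the idle servers hold, and to disable (rather than uninstall) the ones not in use; a sketch, where the plist name is a placeholder that depends on how each database was installed:

        # resident memory (KB) of each database server process
        ps -axo rss,comm | grep -Ei 'postgres|mongod|couchdb|mysqld'

        # stop a launch agent from starting at login without uninstalling anything
        launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist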

    Read the article

  • Problem installing a w2k DC on Hyper-V?

    - by Tony
    Hi, we have a cluster of four Windows 2008 R2 nodes with Hyper-V installed. We would like to install 2 VMs as Windows 2000 domain controllers (the domain is different from the domain of the Hyper-V cluster). Do you know if there are any restrictions on doing this? Some colleagues say that we risk data corruption if we do live migrations. Others point out that Microsoft doesn't support Windows 2000 any more. And others have doubts because the global catalog server installed on these DCs could suffer a loss of performance. Any ideas? Thanks, Tony

    Read the article

  • How would I measure the amount of RAM needed per Glassfish domain? [closed]

    - by oligofren
    Possible Duplicate: Can you help me with my capacity planning? In our test environment we have a lot of apps spread out over a few servers and GlassFish domains. To make versioning easier, I would like to have one GlassFish domain per customer per app (kind of like a heavyweight version of lots of Jetty instances). But I have heard that GlassFish is kind of heavy on resources, so I would need to measure approximately how many instances would fit in the available RAM. These are low-traffic/low-load testing servers, so CPU is not really an issue, though RAM might be. How would I get an approximate measure of how much RAM is needed? This is one GlassFish 3 instance with one heavy EAR application deployed. top? jvmstats? ??
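
    A sketch of how one might measure it per domain (the pgrep pattern is a guess at how the GlassFish JVM shows up in the process list, and jstat requires a JDK):

        # resident set size of the running GlassFish JVM
        ps -o rss,vsz,args -p $(pgrep -f glassfish.jar)

        # heap usage breakdown, sampled 5 times at 5-second intervals
        jstat -gcutil $(pgrep -f glassfish.jar) 5000 5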

    Read the article
