Search Results

Search found 28685 results on 1148 pages for 'query performance'.

  • Which network management system (NMS) to choose?

    - by QrystaL
    I need to integrate an NMS into a large enterprise system for data collection purposes. Primary requirements: collection via SNMP; scalability up to 1,000 devices with 1,000 interfaces each; failover; data storage in Oracle DBMS; an integration API (configuration, data access). Any ideas would be appreciated...

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring three servers, and in the coming weeks that will increase to five or six. I've been keeping three terminal windows open running htop via SSH, and I'm now wondering: are there any downsides to having a connection constantly open to production servers?

  • How to passively monitor TCP packet loss? (Linux)

    - by nonot1
    How can I passively monitor the packet loss on TCP connections to/from my machine? Basically, I'd like a tool that sits in the background and watches TCP ACKs/retransmits to generate a report on which peer IP addresses "seem" to be experiencing heavy loss. Most questions like this that I find on SF suggest using tools like iperf, but I need to monitor connections to/from a real application on my machine. Is this data just sitting there in the Linux TCP stack?
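
    The data the asker wonders about is indeed sitting in the Linux TCP stack: /proc/net/snmp exposes machine-wide OutSegs and RetransSegs counters. A minimal background poller in Python (a sketch; the 10-second interval is arbitrary):

        import time

        def tcp_counters():
            # /proc/net/snmp lists each protocol twice: one line of field
            # names, then one line of values, both prefixed with "Tcp:".
            with open("/proc/net/snmp") as f:
                tcp = [line.split() for line in f if line.startswith("Tcp:")]
            names, values = tcp[0][1:], [int(v) for v in tcp[1][1:]]
            return dict(zip(names, values))

        prev = tcp_counters()
        while True:
            time.sleep(10)                       # poll interval, arbitrary
            cur = tcp_counters()
            sent = cur["OutSegs"] - prev["OutSegs"]
            retrans = cur["RetransSegs"] - prev["RetransSegs"]
            if sent:
                print("%.2f%% retransmitted (%d of %d segments)"
                      % (100.0 * retrans / sent, retrans, sent))
            prev = cur

    These counters are machine-wide; a per-peer breakdown needs per-socket data, e.g. the retransmit fields shown by ss -i or a libpcap-based watcher.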

  • Testing disk write performance

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a structure of directories like this: 000/000/000000001.jpg ... 236/519/236519107.jpg. This structure will allow me to save up to 1,000,000,000 images, as I'll store a max of 1,000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I've no experience with this), but I want to find out what will happen when there are directories full of files in there. A question about creating this structure: is it better to create it all in one go (takes approx. 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following: how much time does it take for an image to be saved when there is little or no space used, and how does this change as space is used up? How much time does it take for an image to be read from a random leaf, and does this change a lot when there are lots of files? Does running sync; echo 3 | sudo tee /proc/sys/vm/drop_caches make any sense at all, and is it the only thing I have to do to get a clean start if I want to start my tests over? Do you have any suggestions or corrections?
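
    For reference, the directory scheme and the timing script described above can be sketched in a few lines of Python (a sketch; the fsync is there so the timings measure the disk rather than the page cache, and leaves are created lazily, i.e. the second option from the question):

        import os, time

        def image_path(image_id, root="images"):
            # 236519107 -> images/236/519/236519107.jpg; the two upper
            # 3-digit groups pick the directories, so each leaf holds at
            # most 1,000 files and the tree tops out at 10^9 images.
            s = "%09d" % image_id
            return os.path.join(root, s[0:3], s[3:6], s + ".jpg")

        def timed_write(image_id, data):
            path = image_path(image_id)
            os.makedirs(os.path.dirname(path), exist_ok=True)  # lazy creation
            t0 = time.time()
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())       # force the write to disk
            return time.time() - t0

        # fill the tree with 5 MB dummy images and watch latency evolve
        blob = b"\0" * (5 * 1024 * 1024)
        for i in range(1, 1001):
            print(i, "%.4f s" % timed_write(i, blob))

    As for the drop_caches question: yes, sync; echo 3 | sudo tee /proc/sys/vm/drop_caches is the standard way to get cold-cache numbers before the read tests; it drops the page cache, dentries and inodes, which is all the "clean start" the read timings need (the written files themselves of course remain).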

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 960720k total, 707288k used, 253432k free, 67328k buffers
        Swap: 2811896k total, 2644k used, 2809252k free, 528928k cached

          PID USER  PR NI VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
        15310 root  20  0 2512 1128 888 R  2.1  0.1 0:00.05 top

    I would appreciate any assistance with isolating the cause(s) of high load when I/O and CPU are not factors.
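
    One thing worth checking in this situation: the Linux load average counts tasks in uninterruptible sleep (state D) as well as runnable ones, so a single stuck task can pin the load at 1.00 with an idle CPU and no I/O. A quick sketch that scans /proc for such tasks:

        import os

        # /proc/<pid>/stat is "pid (comm) state ..."; state "D" is
        # uninterruptible sleep, which counts toward load average just
        # like runnable ("R") tasks do.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/stat" % pid) as f:
                    stat = f.read()
            except OSError:                 # process exited mid-scan
                continue
            comm = stat[stat.index("(") + 1 : stat.rindex(")")]
            state = stat[stat.rindex(")") + 2]
            if state in "DR":
                print(pid, state, comm)

    If something shows up stuck in D, /proc/<pid>/wchan usually says what it is waiting on.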

  • Enabling DMA in a PC with Windows 7

    - by Mugen
    I was checking my PC's settings using AIDA64 (supposed to be the successor to Everest; it basically shows you detailed information about the hardware you currently have). For my ATA hard disk it shows the DMA setting as "supported, disabled". But when I checked the Windows setting, I see that it is actually enabled. How can I find out which is correct? And if it's disabled, what do I do to enable it? Thanks for your help.

  • HD latency measurement using bonnie++ on machines with different RAM sizes

    - by j0nes
    Hello, I have run bonnie++ v1.96 on two different servers without any additional load. One server is a "physical" Dell server with 32 GB RAM; the other is a virtual instance with 14 GB RAM. I have read in the bonnie++ manuals that I should use twice the size of RAM in my runs, so I used 64 GB on the physical machine and 28 GB on the virtual machine. Now I want to compare the results, and I am wondering whether they are comparable at all. The most interesting part is the latency: on the physical machine, the values are about 10 times higher than on the virtual machine! Can I take these results seriously (i.e. the virtual machine's disk really is much, much faster), or does the different RAM size distort the results? Thanks! Jonas

  • How to test server throughput

    - by embwbam
    I've always used apache benchmark to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run apache benchmark on a simple hello world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello world function, so that it responds after 2 seconds, apache benchmark reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because apache benchmark is basically sending out requests in batches of 100, which come back every 2 seconds. 100 requests / 2 seconds = 50 requests / second If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit, I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
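
    The arithmetic above is Little's law: a closed-loop tester holding N concurrent connections against an L-second handler can never see more than N / L requests per second, no matter how fast the server is. A tiny sanity-check helper (plain Python, nothing tool-specific):

        def max_throughput(concurrency, latency_s):
            # Little's law: with N connections each held for L seconds,
            # a closed-loop tester like ab tops out at N / L req/s.
            return concurrency / latency_s

        def concurrency_needed(target_rps, latency_s):
            return int(target_rps * latency_s)

        print(max_throughput(100, 2.0))       # ab -c 100 vs a 2 s handler: 50.0
        print(concurrency_needed(2500, 2.0))  # ~5000 connections to probe 2500 req/s

    At those concurrency levels, the crashes described are plausibly the client's own file-descriptor limit; raising it on the test box (ulimit -n) is worth trying before blaming the server.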

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow, so I had to resort to using MacVim. There is nothing wrong with MacVim, but my workflow would be much smoother if I used only Terminal/iTerm2.

    When it's slow:
    - Loading files, in particular Rails files, takes about 1-1.5 s. Removing rails.vim decreases this to 0.5-1 s. In MacVim this is instantaneous.
    - Scrolling through rows and columns via h, j, k, l progressively gets slower the longer I hold down the keys; eventually it starts jumping rows. I have Key Repeat set to Fast and Delay Until Repeat set to Short.
    - After 10-15 minutes of usage, plugins such as ctrlp or Command-T get very laggy: I type a letter, wait 2-3 s, then type the next.

    My setup: an 11" MacBook Air running Mac OS X 10.7.3 (1.6 GHz Intel Core 2 Duo, 4 GB DDR3), with my dotfiles. vim --version reports:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23)
        MacOS X (unix) version
        Included patches: 1-333
        Huge version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
        -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
        -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
        +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv
        +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent
        +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
        +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse
        +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra
        -perl +persistent_undo +postscript +printer +profile +python -python3
        +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff
        +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static
        -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar
        +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
        +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim
        -xsmp -xterm_clipboard -xterm_save
        system vimrc file: "$VIM/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim"
        Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX
          -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1
        Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv
          -framework Cocoa -framework Python -lruby

    I've tried running without any plugins or syntax highlighting. Files open a lot faster, but still not as fast as in MacVim, and the other two problems remain. Why is my Vim configuration slow, and how can I improve its speed within Terminal or iTerm2?
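
    One concrete starting point: the +startuptime feature in the version output above means this build can profile its own startup. A small wrapper that runs it and prints the slowest steps (a sketch; it assumes vim's standard --startuptime log format, where the column ending in ':' is the time in milliseconds spent on that step):

        import subprocess, tempfile, os

        with tempfile.NamedTemporaryFile(suffix=".log", delete=False) as tmp:
            log = tmp.name
        # +qall makes vim exit immediately, so only startup is measured
        subprocess.call(["vim", "--startuptime", log, "+qall"])

        steps = []
        with open(log) as f:
            for line in f:
                parts = line.split()
                for i, tok in enumerate(parts):
                    if tok.endswith(":"):
                        try:                  # header lines won't parse; skip them
                            steps.append((float(tok[:-1]), " ".join(parts[i + 1:])))
                        except ValueError:
                            pass
                        break
        os.unlink(log)

        for ms, what in sorted(steps, reverse=True)[:15]:
            print("%9.3f ms  %s" % (ms, what))

    Running this inside Terminal and inside MacVim's shell would show whether the time goes to sourcing scripts or to the terminal itself.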

  • NFS robustness or weaknesses

    - by Thomas
    I have two web servers with a load balancer in front of them. They both have mounted an NFS share, so that they can share some common files, like images uploaded from the CMS and some runtime-generated files. Is NFS robust? Are there any specific weaknesses I should know about? I know it does not support file locking, but that does not matter to me; I use memcache to emulate file locking for the runtime-generated files. Thanks

  • svchost.exe @ 100% disk utilization vs. Outlook.ost

    - by Aszurom
    Vista x32 box with Outlook 2007. Outlook is not running, and hasn't been fired up for several reboots. I stopped the WMI service and the Windows Search service. The machine is mostly quiet, and then svchost.exe launches an instance and starts banging away at the Outlook.ost file. I can't determine what is causing it. I'm watching it in Process Monitor and trying to investigate it with Process Explorer, but I'm not having much luck figuring out why the machine is so interested in that file. NOTHING is running that should be touching it.

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: an ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15 ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together (no surprise), but four come back to me almost immediately; two more after 0.5 s; another after 1 s; then one each at 1.5 s and 2 s. Logging on the server side suggests the individual responses are still only taking 15 ms. So it appears IIS is queueing things up into 500 ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks: you might get three in the first group, then three at 0.5 s, two at 1 s, etc., and it's always at 500 ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug. So is it: (i) IIS on desktop OSs deliberately doing this, to make you use server OSs in production? (ii) some magical setting that has eluded me for as long as I've known IIS? (iii) something peculiar to MVC or SQL Server 2008? Or something else?
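
    To take the browser out of the equation, the same 500 ms pattern can be checked from a script that fires the nine image requests in parallel and records when each completes (a sketch in Python; the URL and id parameter are hypothetical stand-ins for the controller action in the question):

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        BASE = "http://localhost/images/render?id=%d"   # hypothetical route

        start = time.time()

        def fetch(i):
            with urllib.request.urlopen(BASE % i) as resp:
                resp.read()
            return i, time.time() - start   # completion time since the batch began

        # if completions cluster at 0.5 s boundaries here too, the
        # browser is off the hook and the queueing is server-side
        with ThreadPoolExecutor(max_workers=9) as pool:
            for i, done_at in pool.map(fetch, range(1, 10)):
                print("image %d finished at %.3f s" % (i, done_at))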

  • Oracle Hangs on Responses Intermittently

    - by Ryan Cook
    I want to preface this with the fact that I am a developer, not even close to a DBA, and I am new to Oracle. OK, here it goes: I have a Java application which uses Spring and Hibernate. It's a simple CRUD app, and I will leave the details out as I don't think they are the issue. I have noticed that my app runs fine when I use MySQL, but when I use an Oracle 10.2 server, every 7th-10th request hangs for 5-10 seconds. My Oracle installation was done by me using all defaults, same as the MySQL install. I don't even know where to start looking. Any ideas? Thanks in advance, and sorry that I lack the details that are most likely required for help.

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the website to a VPS with 4 GB RAM. But for some reason it has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
         1  0      0 3050500      0      0    0    0     0     1    0    0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page, run on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...
        apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update: server config: CentOS 5.6, 4 CPU cores, 4 GB RAM, LAMP stack with APC, WordPress, only one website. It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low, see the uptime output: 11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this?

    Update 4: Following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>
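
    One way to test the theory in Update 3 is to time connection setup separately from time-to-first-byte under parallel load: if connects are quick but first bytes straggle, the box is accepting connections and queueing the work server-side rather than losing it in the network. A sketch (host and concurrency taken from the question, easily changed):

        import socket, time
        from concurrent.futures import ThreadPoolExecutor

        HOST = "www.ask-oracle.com"   # the site from the benchmark above

        def probe(_):
            t0 = time.time()
            s = socket.create_connection((HOST, 80), timeout=30)
            t_connect = time.time() - t0
            s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
            s.recv(1)                           # block until the first response byte
            t_first_byte = time.time() - t0
            s.close()
            return t_connect, t_first_byte

        with ThreadPoolExecutor(max_workers=20) as pool:
            for t_connect, t_first_byte in pool.map(probe, range(20)):
                print("connect %.3f s, first byte %.3f s" % (t_connect, t_first_byte))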

  • Ultra-lightweight web browser?

    - by zildjohn01
    Are there any good super-lightweight graphical web browsers out there? I'd like to be able to browse the web on an old PC, but the mainstream crop of browsers is just too heavy, and I don't want to resort to something like Lynx. There must be something decent out there that'll fit in 16 or 32MB of RAM comfortably. 100% standards compliance isn't necessary, but I'd like something that supports the most widely used parts of CSS and JavaScript. The goal is to get 98% of sites usable in a nice, graphical format.

  • Free memory on Linux [closed]

    - by Julia Roberts
    Possible Duplicate: Meaning of the buffers/cache line in the output of free. What would be a good setting to free memory on Linux? I have 8 GB, but it gets used up very fast. Current settings:

        kernel.sched_min_granularity_ns = 10000000
        kernel.sched_wakeup_granularity_ns = 15000000
        vm.dirty_ratio = 40
        kernel.pid_max = 4096
        vm.bdflush = 100 1200 128 512 15 5000 500 1884 2

    What settings would I need so Linux frees old RAM faster?
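
    Before tuning, it is worth checking whether the memory is really exhausted or merely being used for buffers and cache, which the kernel reclaims automatically when programs need it (the point of the duplicate linked above). A sketch that reads the relevant fields of /proc/meminfo:

        # /proc/meminfo values are in kB
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                info[key] = int(value.split()[0])

        free = info["MemFree"]
        reclaimable = info["Buffers"] + info["Cached"]
        print("truly free:        %6d MB" % (free // 1024))
        print("reclaimable cache: %6d MB" % (reclaimable // 1024))
        print("effectively free:  %6d MB" % ((free + reclaimable) // 1024))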

  • Recommendation for tuning hundreds of SQL databases

    - by wayne
    Hi, I'm running several SQL servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index / profile / tune this large number of databases? As there are at least 600 catalogs, I can't have someone manually profile and index each database according to its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!
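
    One automated starting point on SQL Server 2005+: the engine already records which indexes it wished it had, in the missing-index DMVs, and those cover all databases on the instance, so they can be harvested server-wide in one query instead of profiling each catalog by hand. A sketch using pyodbc (the connection string is a placeholder; the DMV and column names are the standard ones, though their contents reset on instance restart):

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                              "Trusted_Connection=yes")   # placeholder server name
        cur = conn.cursor()

        # Which missing indexes would have helped the most, weighted by
        # how often and how expensively the optimizer wanted them.
        cur.execute("""
            SELECT db_name(d.database_id) AS db,
                   d.statement AS table_name,
                   d.equality_columns, d.inequality_columns, d.included_columns,
                   s.user_seeks, s.avg_user_impact
            FROM sys.dm_db_missing_index_details d
            JOIN sys.dm_db_missing_index_groups g
                 ON g.index_handle = d.index_handle
            JOIN sys.dm_db_missing_index_group_stats s
                 ON s.group_handle = g.index_group_handle
            ORDER BY s.user_seeks * s.avg_user_impact DESC
        """)
        for row in cur.fetchmany(50):
            print(row.db, row.table_name, row.equality_columns, row.user_seeks)

    For deeper per-customer tuning, the Database Engine Tuning Advisor can replay a captured workload trace, but the DMVs are cheap enough to run on a schedule across all 600 catalogs.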

  • Is it a good idea to have the operating system on a solid state drive?

    - by Kenji Kina
    There is something I don't quite understand. I know an SSD helps with OS load times, but I'm not sure whether the boost is only noticeable when booting, or whether it gives an all-around considerably better experience thereafter. I am interested in having a quick and responsive environment after booting, which leads me to think that it'd be better to spend the SSD capacity on my most-used apps (and the page file? that's another side question) rather than the OS itself. This, of course, means that I don't know just how much the OS reads/writes its files during normal usage. So, how good an idea is it to dump the whole 20 GB+ of the Windows 7 OS onto the SSD (considering the hefty price per GB of SSD capacity) if I can put up with the usual hard disk boot times? Would I be missing out on a lot if I didn't?

  • How do I fix a super slow MacBook?

    - by MakingScienceFictionFact
    I'm running a black MacBook 4.1: Intel Core 2 Duo @ 2.4 GHz, 2 GB RAM, 250 GB hard disk drive, 800 MHz bus speed. It's about three years old and in excellent shape externally; I treat this thing like a baby. It used to run great, but now it's super slow at everything. I get the spinning pizza of death constantly. It takes a long time to boot up or load any program, even Safari and iTunes, and iPhoto is terribly slow. The Internet doesn't work properly, and it reminds me of a buggy PC. I've formatted it and re-installed Mac OS X 10.6 (with all updates), and I've done the disk repair process. As an iOS developer this is driving me crazy, though luckily I have an iMac to work on during the day, which is fast. I'm ready to format it again, but that didn't work last time. After the last format I copied files back from an external drive, so maybe the offending files were hidden in there somewhere. Here are the hard disk drive and RAM specifications (the machine is upgradeable to 4 GB of RAM). Hard disk drive: the Fujitsu Mobile MHY2250BH is a 250 GB, standard hard disk drive with a burst transfer rate of 150 MB/s, 5400 RPM, and an 8 MB buffer. RAM: two sticks of 1 GB DDR2 SDRAM at 667 MHz.

  • What Is The Proper Laptop Battery Care While Running Laptop Solely On Battery?

    - by Boris_yo
    For convenience, I had to move my laptop to another room, away from the room where I always ran it on a UPS without using the battery. Since I now always run the laptop on battery, I have questions about proper usage to prolong battery life. Currently I run the laptop on battery with the power supply connected, so the battery charges until it reaches 100%; when it does, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug in the power supply and let it charge to 100% again while I work. But it takes a lot of time to fully charge the laptop while working, since my power supply is only 60 W, which should be the reason for such a slow charge (I think the charger I use is an express charger). Charging to full while working takes so long that I wonder whether it keeps the battery running warm for the whole charging period, which brings me to my question: should I keep running the laptop as described above, or would it be better to leave the power supply constantly connected and keep the battery between 99% and 100%? On one hand, that won't keep the battery warm, but it will frequently top the battery up from 99% to 100% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge when below 10%, the battery will get warm, but only while charging. Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Dell Latitude E6420, Windows 7 64-bit.

  • Is my large Windows folder slowing down my machine?

    - by Moses
    I have a problem with my Windows installation running very slowly and my Windows folder being too large, and I suspect the problems are related. My Windows folder is 17.4 GB. I have 1807 folders, totalling 2.4 GB, that are prefaced with a $. My System32 folder is 1.55 GB. My Microsoft.NET folder is 654 MB (I don't know what programs, if any, are using it). My Service Pack folder is 568 MB. The Software Distribution folder is 536 MB. The ie8updates folder is 380 MB. How can I reduce the size of these folders, and could their size be why I am running so slow?
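
    For orientation: the $-prefixed folders are typically hotfix uninstall backups ($NtUninstallKB...$), which is why there are so many of them. A quick sketch to rank the subfolders of the Windows directory by size before deciding what to clean:

        import os

        WINDIR = os.environ.get("WINDIR", r"C:\Windows")

        def folder_size(path):
            total = 0
            # onerror=None-swallower: skip folders we can't read
            for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:          # broken links, locked files
                        pass
            return total

        sizes = []
        for entry in os.listdir(WINDIR):
            full = os.path.join(WINDIR, entry)
            if os.path.isdir(full):
                sizes.append((folder_size(full), entry))

        for size, name in sorted(sizes, reverse=True)[:20]:
            print("%8.1f MB  %s" % (size / 2**20, name))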
