Search Results

Search found 17686 results on 708 pages for 'high level'.


  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server and had about 15 nodes pipe to it, but it only offers "average" as the retention function when promoting data to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples" or perhaps "95th percentile" available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually something that's not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention backend with a simple layer on top of mmap() and some stats; see the sketch below. That wouldn't be particularly hard, and it would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations.

    Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
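    A minimal sketch of the statistics roll-up such a home-grown retention backend would need. The mmap()-backed storage layer is omitted, and the bucket size, field names, and use of Python's statistics module are illustrative assumptions, not a reference to any existing tool:

        # Roll fine-grained samples up into coarser buckets, keeping the statistics the
        # question asks for: count, sum, min, max, mean, and standard deviation.
        # Only the aggregation math is shown; persistence (e.g. an mmap()'d file) is omitted.
        import statistics
        from collections import defaultdict

        BUCKET_SECONDS = 60  # promote 10-second samples into 1-minute buckets

        def rollup(samples):
            """samples: iterable of (unix_timestamp, value) for a single counter."""
            buckets = defaultdict(list)
            for ts, value in samples:
                buckets[ts - (ts % BUCKET_SECONDS)].append(value)
            result = {}
            for bucket_ts, values in sorted(buckets.items()):
                result[bucket_ts] = {
                    "count": len(values),
                    "sum": sum(values),
                    "min": min(values),
                    "max": max(values),
                    "mean": statistics.mean(values),
                    "stddev": statistics.pstdev(values),
                }
            return result

        # Example: six 10-second samples collapse into one 1-minute bucket of statistics.
        print(rollup([(0, 1.0), (10, 2.0), (20, 4.0), (30, 4.0), (40, 2.0), (50, 5.0)]))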


  • linux migration/N high cpu consumption

    - by Alexander
    On my Linux appliance based on the 3.0.0-14 kernel I got:

        RPN:/tmp# ps axuf | grep migration
        root 6 92.9 0.0 0 0 ? S Apr23 2788:33 \_ [migration/0]
        root 7 99.7 0.0 0 0 ? S Apr23 2993:20 \_ [migration/1]

    My top output is:

        RPN:/tmp# top -b -n1
        top - 12:03:41 up 2 days, 2:18, 5 users, load average: 25.76, 25.26, 24.73
        Tasks: 171 total, 1 running, 168 sleeping, 0 stopped, 2 zombie
        Cpu(s): 14.0%us, 12.6%sy, 0.8%ni, 72.0%id, 0.3%wa, 0.0%hi, 0.3%si, 0.0%st
        Mem: 1543032k total, 1264728k used, 278304k free, 25308k buffers
        Swap: 0k total, 0k used, 0k free, 183168k cached

    My question: why do the "migration/N" processes take so much CPU?


  • Exchange 2010 install locks out high level accounts

    - by tearman
    Basically, when we installed Exchange 2010 alongside our Exchange 2003 server, that (we assume) is what caused our problem. The Exchange 2010 server is not active, just running on the domain. What's actually going on is that user groups like Enterprise Admins are getting a single Deny flag on Full Control over mailboxes currently residing on the Exchange 2003 server, which is preventing any of us from making changes. It says these permissions are inherited from the parent object, but we have no idea which one that is. Any idea on how to go about fixing this?


  • What RAID level for a backup server?

    - by ispirto
    I'm building a server with 12 x 3TB disks to use for daily backups. I'm thinking of using RAID 50 to get a good 27TB of usable space (the arithmetic is sketched below). The disks will be used brutally: backing up 9 servers with 1.5TB of data once a day. I'll keep the backups for 2 days, so for each server I'll have a separate 3TB partition. Do you think this kind of huge backup load would stress the disks too much and make them fail? Should I rather go with RAID 10? Oktay
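    A quick check of the 27TB figure, assuming the 12 disks are arranged as three 4-disk RAID 5 spans striped together (RAID 50); the span layout is an assumption, since other layouts give different usable capacities:

        # RAID 50 usable capacity for 12 x 3TB disks, assuming 3 spans of 4 disks each.
        # Each RAID 5 span loses one disk to parity: usable = (disks_per_span - 1) * spans * disk_tb.
        disk_tb = 3
        disks_per_span = 4
        spans = 3

        usable_tb = (disks_per_span - 1) * spans * disk_tb
        print(usable_tb)  # 27 -- matches the 27TB mentioned above

        # For comparison, RAID 10 over the same 12 disks mirrors every disk:
        print(12 // 2 * disk_tb)  # 18TB usable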


  • Cname to multi-level heroku subdomain

    - by user123424234
    I'm trying to create a CNAME that points from my custom domain (s.mydomain.com) to a multi-level subdomain hosted on Heroku (me.myapp.herokuapp.com). I've created the CNAME s.mydomain.com with the value me.myapp.herokuapp.com. When I go to s.mydomain.com it does not route to me.myapp.herokuapp.com; instead I get:

        method=GET path=/ host=s.mydomain.com dyno=web.1 queue=0 wait=0ms connect=4ms service=18ms status=404

    It's possible I'm not fully understanding how this CNAME should be set up. My desired outcome is for s.mydomain.com to act as if it were at me.myapp.herokuapp.com.


  • SonicWall HA "gotchas"?

    - by Mark Henderson
    We're looking to move away from pfSense and CARP to a pair of SonicWall NSA 2400s¹ configured in Active/Passive for High Availability. I've never dealt with SonicWall before, so is there anything I should know that their sales guy won't tell me? I'm aware that they had an issue with a lot of their devices shutting down connectivity because of a licensing fault, and that they have an overly complex management GUI (on the older devices at least), but are there any other big "gotchas" that I need to be aware of before committing a not insubstantial amount of money towards these devices?

    ¹ If you're outside the US, the SonicWall global sites suck balls. Use the US site for all your product research, and then use your local site when you're after local information.


  • How to debug high memory usage by registry?

    - by bkr
    I have a Windows 2008 R2 server running ADFS 2 and some web apps, and it is having issues with low memory. Digging around I found that 5.5 GB was being used under Kernel Memory (paged). Further digging with Poolmon, I discovered that the majority of that (5GB+) was being used by CM - the configuration manager, also known as the registry. I'm really not sure how to tell why the registry is using so much memory, however, or how to release it. Looking at the physical registry files, they don't appear that large.

    EDIT #1: Using the PowerShell script at http://jdhitsolutions.com/blog/2011/05/get-registry-size-and-age/ confirms what I saw looking at the physical registry files, that they're relatively small:

        Computername : (removed)
        Status       : OK
        CurrentSize  : 67
        MaximumSize  : 2048
        FreeSize     : 1981
        PercentFree  : 96.728515625
        Created      : 4/1/2011 11:38:02 AM
        Age          : 454.23:41:28.2540682


  • Caching DNS server (bind9.2) CPU usage is so so so high

    - by Gk.
    I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:

        Xeon dual-core 2.8GHz
        4GB of RAM
        CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
        bind 9.4.2

    rndc status reports: recursive clients: 666/4900/5000. There are about 300 new queries (not in cache) per second. Bind always uses 100% of one core with the single-threaded config. After I recompiled it multi-threaded, it uses nearly 200% across two cores :( There is no iowait, only sys and user time. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck?

    One more thing, here is the RAM usage:

        cat /proc/meminfo
        MemTotal:   4147876 kB
        MemFree:    1863972 kB
        Buffers:     143632 kB
        Cached:      372792 kB
        SwapCached:       0 kB
        Active:     1916804 kB
        Inactive:    276056 kB

    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since every second we get queries that are not cached, theoretically RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk


  • Xorg: How can I map AltGr to the CapsLock Key (to toggle 3rd level symbols)

    - by basweber
    Hi, like many others I don't need Caps Lock. I want to reassign it to have the function of AltGr. I use Kubuntu 9.10, but I think there must be a solution that is distribution-independent. I have already tried setxkbmap and xmodmap. Using xmodmap I at least managed to make the CapsLock key behave like the Delete key by following this description, but I could not manage to assign the AltGr behaviour to CapsLock.


  • Server 2003 RAS Server Utilising High WAN Traffic

    - by Joe Sergeant
    We have Routing and Remote Access configured on Server 2003 (also our primary domain controller), allowing users to connect in remotely to access files, email, etc. With one user, the RAS Server is constantly sending data to that user's remote computer. From 9am this morning it has transferred almost 800MB. The user isn't transferring any files remotely, certainly not enough to total 800MB anyway. None of the other remote users have had this issue. We have ensured that the user in question has "Use default gateway on remote network" disabled for both IPv4 and IPv6 and we are fairly confident that Offline Files isn't trying to synchronise with the server remotely, too. My question is two-fold. Firstly, has anyone had a similar experience? Secondly, what would be the best software to discover exactly what data is being sent to the remote user?


  • Server memory issues, and expected level of service from hosting company

    - by Greg
    I'm involved in maintaining an Ubuntu VPS which runs our Django websites (nginx/apache/mod_wsgi), and we've been having some memory spikes which have either caused the database to die or induced a kernel panic when the memory management system can't find any killable processes. I'm working on fixing the memory spikes, but I'm wondering whether there's anything I can do to better deal with the problem if it occurs again. Are there any tools I could use to detect the memory spikes and then, say, kill the offending process and email the server admin to fix it up? (A rough sketch of such a watchdog follows below.) Killing off one website so that the server can remain operational is certainly preferable to the whole thing falling over.

    Also, we were charged $600 for after-hours service because we had to get the hosting company to restart the server - is this standard practice among hosting companies? Another provider I work with provides a panel with which I can stop and start the server myself, and given that a restart was all that was needed, $600 seems mightily excessive. (That's NZD; it's around $445 USD.)
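    A minimal sketch of the kind of watchdog described above, assuming the psutil Python package is available; the threshold, check interval, and the kill/alert hooks are placeholders to adapt to the environment, not a recommendation of specific values:

        # Memory-spike watchdog sketch: when RAM usage crosses a threshold, report the
        # process with the largest resident set so it can be killed and/or reported.
        import time
        import psutil

        MEM_THRESHOLD_PERCENT = 90.0   # what counts as a "spike" -- tune for the VPS
        CHECK_INTERVAL_SECONDS = 30

        def largest_process():
            """Return the process currently using the most resident memory."""
            procs = list(psutil.process_iter(['pid', 'name', 'memory_info']))
            return max(procs, key=lambda p: p.info['memory_info'].rss if p.info['memory_info'] else 0)

        while True:
            used = psutil.virtual_memory().percent
            if used >= MEM_THRESHOLD_PERCENT:
                offender = largest_process()
                print("RAM at %.1f%%; largest process: pid=%d name=%s"
                      % (used, offender.info['pid'], offender.info['name']))
                # offender.terminate()      # uncomment to actually kill the offender
                # notify_admin(offender)    # hypothetical hook: wire up email/monitoring here
            time.sleep(CHECK_INTERVAL_SECONDS)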


  • Handler_read_rnd is too high (more than 2 billion)

    - by pnm123
    Hello, I am running an advertising program and there are some SELECT, UPDATE and DELETE queries involved in showing ads. Sometimes displaying ads is fast, but sometimes it is too slow. Right now it is slow, and Handler_read_rnd and Handler_read_rnd_next are as follows:

        Handler_read_rnd        2,844.68 M
        Handler_read_rnd_next   2,945.63 M

    How can I speed up displaying ads (i.e. decrease Handler_read_rnd and Handler_read_rnd_next)? Thank you, pnm123

    PS: Currently there are 7,068,528 rows in the advertising program's database.


  • Tweaks to make Cleartype better at high resolutions?

    - by ULTRA_POROV
    ClearType is great when displaying small text (say 10-16px). However, when displaying something above 20px it starts looking like mud. Just compare it to Photoshop: Photoshop's rendering at small sizes is not very impressive (too blurry), but at 20px Photoshop wins every time. ClearType looks jaggy around the edges, almost as if there were no ClearType at all. Can this be fixed, or is it just the way ClearType is?


  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users on the server, and one particular user, who is about in the middle as far as used storage goes, keeps having issues with slow speed. Most of our users are on Thunderbird 2 or Thunderbird 3 - mostly 2, because of the performance issues we have had with 3. This user was on 3, and I downgraded him to 2. The performance has gotten better, but according to the imapd processes on the server, his username is still using the most CPU % and CPU time.

    I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, moved him to a newer, faster computer, etc. Still, this user's imapd process is always using the most CPU on the server. For comparison, we set up another user who has more usage, folders, etc. than he does, but we don't see that user's imapd process taking up most of the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to end. Does anyone have any ideas?


  • BIND: forward 1st level zone

    - by raven
    First of all: sorry for the language, English is not my primary language. I have a star-like DNS structure with many filials (more than 2):

        filialNS_1.filial_1.city.local <----> ns.main.city.local <----> filialNS_2.filial_2.city.local

    ns.main.city.local is a slave for all filial zones
    filialNS_1 is master of filial_1.city.local
    filialNS_2 is master of filial_2.city.local
    filialNS_N is master of filial_N.city.local

    I want to:

    • serve DNS queries for xxx.filial_N.city.local with filialNS_N.filial_N.city.local
    • forward all queries for xxx.xxx.xxx.local from filialNS_N to ns.main.city.local
    • forward other queries to our provider's DNS on the filial (or google-public-dns or anything else)

    FILIAL CONFIG (named.conf):

        zone "filial_1.city.local" {
            type master;
            file "/etc/namedb/dynamic/filial_1.city.local";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };

        zone "2.76.10.in-addr.arpa" {
            type master;
            file "/etc/namedb/dynamic/2.76.10.in-addr.arpa";
            allow-update { key DHCP_UPDATER; };
            allow-transfer { <ns.main.city.local IP address> };
        };

        zone "local." {
            type forward;
            forward only;
            forwarders { <ns.main.city.local IP address> };
        };

    nslookup server.filial_1.city.local works fine, but nslookup server.main.city.local gives:

        Server:  127.0.0.1
        Address: 127.0.0.1#53
        ** server can't find server.main.city.local: NXDOMAIN

    Where am I going wrong?


  • High memory utilization with firebird + windows server 2008 r2

    - by chesterman
    I have a Windows Server 2008 R2 (64-bit) machine running a 64-bit installation of Firebird 2.1.4.18393_0 on a physical server with 4GB of RAM. After a while, Task Manager shows that all memory is used, but the memory of all processes summed does not add up to even half of that, and unfortunately the server starts swapping. Using RAMMap, I can see that my entire database file is mapped into memory. This only occurs on Windows Server 2008 R2 and Windows 7 64-bit; it doesn't matter whether I use the 32-bit or 64-bit Firebird installation. How can I prevent this? Why does it only occur on W2K8R2 and W7? Thanks in advance.

    UPDATE: Apparently this is caused by the file system cache using all of the memory. The Microsoft documentation explains that this WAS an issue in Windows XP, 2003, Vista and 2008, but that it was solved in 7 and 2008 R2, and adds that the issue is more common on 64-bit hosts (http://support.microsoft.com/kb/976618). There are some tools (DynCache, setcache and the Get/SetSystemFileCacheSize system calls from the Windows API) that let me set an upper limit on the memory used by the file system cache, but the documentation argues that I should not do this on W2K8R2 because it will severely impact overall system performance. Anyway, I tried: the performance remained the same shit, and the page file is still in use, although there is now more than 1GB of free memory.


  • Salary Survey Entry Level network position [closed]

    - by will
    Hello, I started interning with a company about 5 months ago and for the past 7 months I have been a normal part time employee. This week I have a review, where I am hoping to get a raise. I started at $8 interning, and now I'm up to $13. What I am trying to figure out is how to survey what others are making in similar positions so I can take it to the review as a base number. Here are my thoughts on my position right now.

    Review Thoughts

    My Qualifications:
    • Associates of Applied Science - IT Network Specialist
    • CompTIA A+ certification
    • CompTIA Server+ certification
    • CompTIA Network+ certification
    • Currently pursuing Cisco certifications
    • Junior status at Insert college, pursuing Bachelors in Information Technology with an emphasis in Networking, 3.4 GPA
    • 1 year of working at Insert Company

    My contributions to Insert Company:
    • Offering near fulltime through semester and fulltime through summer
    • Ability to work after hours and on weekends
    • Developed and support helpdesk system
    • Set up and maintain update server to keep desktop clients up to date
    • Deployed and maintain antivirus solution for end users
    • Assist with main projects such as SAN, virtualization, and network survey
    • Migrating end user stations to Windows 7 (current project)
    • Developing imaging solution for desktop PCs (current project)

    Any tips on determining an asking number would help. I was thinking $17-$18, am I way off here?


  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to specify what I want to know when checking Google. I will have 2 or 3 devices pumping out data on a 100Base-T port each. The combined data rate of all devices is about 15 MB/s, which exceeds the practical 100Base-T channel capacity (roughly 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will send a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.:

    Device A establishes an FTP connection and transfers data
    Device B establishes an FTP connection and transfers data
    Device C establishes an FTP connection and transfers data

    It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer.

    Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features should I be looking for in a switch? Or would it be better to have 3 physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)?

    Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help, Peter


  • Sharepoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint server. After adding it, I wasn't able to open the Central Administration site, and I also noticed a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process is hovering around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests against the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation's, which is what I used to attempt to access Central Administration after I created the new web application.

    Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then it hung, but the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.


  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP using the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage of each PC is also maxed out. I'm guessing it's the transcoding that's causing much of the delay. Can anyone give me some changes to these command lines that will reduce the transcoding load, send the webcam over a different protocol, or anything else that will reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be connected to a dedicated switch/VLAN if required.


  • Tar dereference only 1 level

    - by Bart van Heukelom
    I use the following pseudo-script to create a TAR of my installed software:

        mkdir tmp
        ln -s /path/to/app1/bin tmp/app1
        ln -s /and/path/going/to/the-app-2 tmp/app2
        tar -c --dereference -f apps.tar tmp

    I need the --dereference option here to follow the links I just made in tmp. The reason I make the links in the first place is to store the directories with a different name in the archive than they have on the filesystem. Until now it has worked fine. However, I now have the situation that /path/to/app1 also contains links, and those I don't want to follow. Is this possible with some changes to the tar command? Or do I need to completely switch around the way I build the archive?


  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic interests. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at peak times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system: the university has already provided Windows 7). The hardware is also limited to an ordinary 4-core computer (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose software (web server, database, etc.) appropriate for achieving this goal. I have decided to build the site in PHP.

    Please help me by answering the following questions based on your expertise (the software in parentheses is my prime candidate to consider after googling):

    1. Which web server is faster, stable and secure when implemented and optimized for PHP, and why? (nginx)
    2. Which PHP accelerator is faster, stable and compatible with the selected web server, and why? (APC with Zend Optimizer+)
    3. Which database is faster, stable and secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
    4. Are there any errors in what I have assumed or planned so far? If there are, please enlighten me.
    5. Is there anything else I need to know in order to achieve this goal? If there is, please enlighten me.

    I understand that performance also depends on how the source code is written, so assume the site will be built as efficiently as possible (e.g. using AJAX).


  • Free or very Cheap VPN server software with high concurrent users count

    - by Martin KS
    I'm hoping to implement a VPN whereby about 200 concurrent users can log in to briefly access my network. I had a look at OpenVPN and this seemed excellent, but was hoping that there would be a less costly option on a per-concurrent-user basis. I've no need for particularly strong security, and my only other requirement is that I would need to be able to add users in bulk via csv or similar. Does anyone have any suggestions?


  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some are less important, so some of them I wish to put on RAID-6, while for some RAID-5 is sufficient. It is difficult to predict, at the moment of creating the arrays, how much space of each type to declare.

    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, e.g., RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device, and use it as physical storage for the logical volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that would reflect the logical structure of the data rather than that of the storage.

    But now that I have heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it can help me. If so, what do you think would be the best setup?

