Search Results

Search found 47394 results on 1896 pages for 'system monitoring'.

  • How to check the result of a script with monit?

    - by matnagel
    Is there a way to check the result of a script with monit? For example, if a script returns 0 it is OK, but 1 means it failed. My workaround is to call the script from cron, write the result to a file, and check that file with monit. But I am sure monit can do this more elegantly?
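
    A minimal sketch of the more elegant route, assuming a monit version (5.x or newer) that supports the check program statement; the script path and service name are placeholders. Monit runs the program itself on every polling cycle, so cron and the intermediate file drop out:

        check program nightly-job with path "/usr/local/bin/nightly-job.sh"
            if status != 0 then alert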

  • Linux foxboard network monitor

    - by het.oosten
    I want to use a Foxboard as a simple network monitor for multiple routers (all routers are connected to the internet). Foxboard is a mini PC with an embedded version of Debian. My idea is to use multiple virtual network devices like this:

        eth0    192.168.2.10
        eth0:1  192.168.3.10
        eth0:2  192.168.4.10

    I found a nice Python script to ping an external host (the solution from Ryan Cox): http://stackoverflow.com/questions/316866/ping-a-site-in-python Is it possible to configure Debian to use eth0 when I ping www.site-a.com and eth0:1 when I ping www.site-b.com?
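
    One hedged sketch: ping's -I flag pins the source address, but on its own the kernel still routes by destination, so steering probes through different routers takes source-based policy routing. The gateway address 192.168.3.1 and table number 101 below are assumptions:

        # traffic sourced from the eth0:1 alias goes out via its own router
        ip rule add from 192.168.3.10 table 101
        ip route add default via 192.168.3.1 table 101

        # then pin the source address per probe
        ping -c 3 -I 192.168.2.10 www.site-a.com
        ping -c 3 -I 192.168.3.10 www.site-b.com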

  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache, so please be kind - I'm also aware of this question but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it at x.x.x.x/icinga, but I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my barebones vhost statement in httpd.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    I also have the following in my .htaccess file:

        <Directory>
            Allow From All
            Satisfy Any
        </Directory>

    An entry has been made for the instance in the Windows DNS server on my network, but when I try to access the site by URL I am greeted with "Internal Server Error". Reviewing /var/log/icinga.com-error_log I see the following entry:

        [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
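
    The log line gives the cause away: <Directory> sections are only allowed in the main server configuration, never in .htaccess. One way out, sketched with the directives from the question and Apache 2.2-style access syntax assumed, is to fold the access rules into the vhost itself and drop the .htaccess file:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            <Directory /usr/share/icinga>
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/icinga.com-access_log common
        </VirtualHost>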

  • SNMP HOSTMIB.MIB not loading?

    - by Eriedor
    Forgive me if the answer is something glaringly obvious, but I just can't seem to get access to any OIDs under the HOST branch in SNMP. I've used an SNMP browser to inspect a few of my systems, and none of them show a HOST branch under ISO.ORG.DOD.INTERNET.MGMT.MIB-2. Any thoughts as to why? I'm looking to monitor a few computers' hardware resources via SNMP, and unfortunately all such OIDs live under the missing HOST branch.
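
    A quick way to check whether an agent implements HOST-RESOURCES-MIB at all is to walk its subtree numerically (the HOST branch is mib-2.25); the hostname and community string below are placeholders:

        # HOST-RESOURCES-MIB lives at .1.3.6.1.2.1.25 (mib-2.25)
        snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.2.1.25

    If this returns nothing, the agent simply doesn't implement the branch - older net-snmp builds, for instance, only include it when compiled with the host MIB module - and no browser will be able to show it.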

  • Linux: Tool to monitor every process and executed command - in short, to monitor what's happening at the moment

    - by Bevor
    Hello, due to a freeze problem with my Ubuntu 10.10 (it is not isolatable), I thought about logging every executed command somehow to a file, so that the next time a freeze occurs I can see what happened last and not lose valuable information. I found acct, but this is obviously not what I'm looking for: it just logs user commands and similar things. I need something that logs at a much "deeper" level. The best would be some kind of script that records every interrupt. Does anybody know a tool like that?
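
    One hedged option, assuming the auditd package is installed: the Linux audit subsystem can record every execve() system call, which goes considerably deeper than acct's per-command accounting:

        # log every program execution, tagged for later searching
        auditctl -a always,exit -F arch=b64 -S execve -k exec-log

        # after the next freeze, review the trail
        ausearch -k exec-log --interpret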

  • Monitor status while using VNC

    - by kumar
    After connecting remotely to my desktop's VNC server via a VNC viewer, is it possible to know whether the monitor connected to the machine is switched on or not? Simply put: from the command line, how do you tell whether the monitor is on or off? Basically, I am a bit worried about privacy, as my monitor can be viewed by anyone while the machine is accessed remotely. Any solution? Obviously there is an option to switch off the monitor when starting the VNC server at the remote side, but I am looking for a better way to control the monitor (if possible) remotely. Thanks!
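
    Assuming an X session with DPMS support, xset can both report and force the monitor's power state from the command line - a sketch:

        # report the current state (look for the "Monitor is On/Off" line)
        xset -display :0 q

        # force the physical monitor off while working remotely
        xset -display :0 dpms force off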

  • Need to know who is hogging my bandwidth?

    - by Dev
    I have an ethernet connection to my iMac, and with Internet Sharing I am broadcasting the wireless network from my Mac rather than using a wireless router. I use it to connect other devices wirelessly to the internet, but this makes all the traffic flow through my iMac. I want a way to analyze the traffic so that I know which connected devices are hogging the bandwidth at a given time, and from which websites. I installed Wireshark for Mac and played around a little, but it seems like overkill when you first look at it. Can someone please help with a few instructions to get what I need, or any way other than using Wireshark? Thanks, Dev.
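
    Before committing to Wireshark, it may be worth trying the tools OS X already ships (assuming a reasonably recent release): nettop shows live per-connection byte counts, and tcpdump can watch just the sharing interface. The bridge100 name below is a guess, as the interface Internet Sharing creates varies:

        # live per-connection byte counts, updated in place
        nettop -m tcp

        # raw capture on the Internet Sharing bridge
        sudo tcpdump -i bridge100 -n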

  • default page in pnp4nagios

    - by bluszcz
    I am using pnp4nagios along with Nagios. Everything seems to be integrated properly: I have "extra action" icons next to every host and service, linking to the pnp4nagios graphs. However, when I go to https://x.x.x.x/pnp4nagios/ it always changes the URL to https://x.x.x.x/pnp4nagios/graph?host=webhost01. How can I turn off this behaviour? On /pnp4nagios/ I would like to see the collected graphs from all servers.

  • How can I see how much bandwidth each Apache Virtual Host is using?

    - by pkaeding
    I have Apache set up to serve several virtual hosts, and I would like to see how much bandwidth each site uses. I can see how much the entire server uses, but I would like more detailed reports. Most of the things I have found are for limiting bandwidth per virtual host, but I don't want to do that; I just want to see which sites are using how much bandwidth. This isn't for billing purposes, just for information. Is there an Apache module I should use, or is there some other way to do this?
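
    One hedged approach, assuming mod_logio is loaded (it is in most stock builds): log the canonical vhost name plus bytes in and out for every request into one shared log, then total it per site offline:

        # httpd.conf - %v is the vhost name, %I/%O are bytes in/out (mod_logio)
        LogFormat "%v %I %O" vhostio
        CustomLog logs/vhost_io_log vhostio

    A one-liner then sums the bytes sent per vhost:

        awk '{out[$1] += $3} END {for (v in out) print v, out[v]}' logs/vhost_io_log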

  • Monit checking URL follow redirects

    - by beck
    I am looking to use monit to keep an eye on my site. I want it to treat the site like an external user would, so I am testing the URL, but monit doesn't seem to follow redirects: the content check is performed on the HTML of the redirect response itself.

        # this check works:
        if failed url http://www.sharelatex.com/blog/posts/future.html content == "301"

        # this check fails:
        if failed url http://www.sharelatex.com/blog/posts/future.html content == "actual content"

    Finding out how to get the URL check to follow 30x redirects would be great.
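
    If monit's URL test won't follow the 301, one workaround is to hand the fetch to curl, whose -L flag follows redirects, inside a check program - a sketch, with the script path a placeholder:

        #!/bin/sh
        # /usr/local/bin/check_blog.sh - exits non-zero if the content is absent
        curl -sfL http://www.sharelatex.com/blog/posts/future.html | grep -q "actual content"

    with the monit side reduced to:

        check program blog-content with path "/usr/local/bin/check_blog.sh"
            if status != 0 then alert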

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so once a month this doesn't happen for reasons out of our control. This heavily impacts our business, since we then run with stale data.

    In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but restarts happen, both automatic and manual, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist - the existence of a process, the execution of a program, the creation/age of a file - and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails; use New Relic, bluepill and Munin; and run on Ubuntu. I've been toying with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts over 24-26 hours, things like that. Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like having to define bluepill for applications manually and then forgetting to do so. The same goes for the push-based approach of the first application: a dedicated daemon on the client side, which we control and whose connection to us we can track, might be a much better solution.
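
    For the "creation/age of a file" case, a cron-driven sketch along the lines the question already hints at - alert when the expected nightly file is missing or more than 26 hours old; the path and alert address are placeholders:

        #!/bin/sh
        # heartbeat check: the nightly push should leave a file under 26h (1560 min) old
        if [ -z "$(find /data/incoming/nightly.csv -mmin -1560 2>/dev/null)" ]; then
            echo "nightly push missing or stale" | mail -s "stale-data alert" ops@example.com
        fi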

  • What kind of proxy acl rules should be applied?

    - by user42891
    I am trying to block sites in squid based on this article. Assuming you want to block access to Yahoo (e.g. http://www.yahoo.co.jp, http://www.yahoo.com, http://www.yahoo.co.in), you would ideally want to block all of the above URLs; if I use a regular expression and search for something called "yahoo", it seems to get blocked. We are just interested in applying the rules most commonly used across companies: social networking sites (e.g. Facebook, Orkut), porn sites (e.g. sex), gaming sites (games), movie and song download sites, and sites where users can upload data (e.g. RapidShare). What would be a common set of effective rules for achieving the above?
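
    A hedged sketch of the usual squid.conf pattern: dstdomain ACLs (a leading dot matches all subdomains) are safer than broad regexes like "yahoo", and each category can live in its own list file - the paths and list contents below are illustrative:

        acl social dstdomain "/etc/squid/blocklists/social.txt"
        acl uploads dstdomain .rapidshare.com
        http_access deny social
        http_access deny uploads

    where /etc/squid/blocklists/social.txt holds one pattern per line:

        .facebook.com
        .orkut.com
        .yahoo.com
        .yahoo.co.jp
        .yahoo.co.in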

  • School Management System

    - by BoundforPNG
    I am looking for a school management system to replace a homegrown Access DB. It should be able to handle the following for both a primary and a secondary school:

        - scheduling classes
        - student enrollment
        - allowing teachers to enter grades and comments
        - generating transcripts and report cards
        - handling attendance
        - handling tuition billing

    It should store data in a server database like SQL Server, and it would be nice to have a web interface. We are open to a commercial system or an open source system that comes with support.

  • Using Process Monitor to track registry changes

    - by CChriss
    It seems many people like using Process Monitor to see what changes are made to the registry during a process, so I downloaded it. I want to see what registry changes are made by some config changes I'm making on my computer, so that I can write them into a VBS script and apply them easily. Can someone tell me how to drive Process Monitor to capture this information? I don't see how to do it in the Help. I'm using Windows 7 Home Premium 64-bit.

  • Moving Images from Database to File System

    - by msarchet
    So currently in our system we have been storing image files in the database (SQL Express 2005). Unfortunately it wasn't foreseen that this would reach the maximum database size allowed by the SQL Express license, so I have proposed a plan of storing the images in the file system and only indexing in the database where each file is. The plan is to save the root path in our OptionsTable as something like ImagesRoot, and then only save the actual ImageID in the table, which is basically an FK from the PK of the record with the image. I have determined that it would be best to split this into sub-directories inside ImagesRoot, one per 1000 images, so basically (ImageID / 1000)\(ImageID % 1000) (e.g. if ImageID is 1999 it would be in %ImagesRoot%\1\999).

    I'm looking for any potential pitfalls of this system and anything that could be improved, as I am already receiving some resistance from the owner of the company, who wants everything to be in databases. Along those lines I would also take reasons why it should all be in databases. I should mention we already have automated backups in place that run for all of our customers' databases and for any files generated by our program that must be kept over a period of time. These are optional, but if someone isn't using our system it is expected that they are using their own, or data loss isn't our problem (it is if our system fails and they are using it!). Thanks
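
    As a quick sanity check of the bucketing arithmetic - a throwaway shell sketch, with the root path a placeholder:

        # ImageID 1999 -> <root>/1/999
        id=1999
        echo "${IMAGES_ROOT:-/srv/images}/$((id / 1000))/$((id % 1000))"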

  • How to find malicious IPs?

    - by alfish
    Cacti shows irregular and pretty steadily high bandwidth to my server (40x the normal), so I guess the server is under some sort of DDoS attack. The incoming bandwidth has not paralyzed my server, but it is of course consuming bandwidth and affecting performance, so I am keen to figure out the possible culprit IPs, add them to my deny list, or otherwise counter them. When I run:

        netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

    I get a long list of IPs with up to 400 connections each. I checked the most frequently occurring IPs, but they come from my CDN. So I am wondering what the best way is to monitor the requests each IP makes, in order to pinpoint the malicious ones. I am using Ubuntu Server. Thanks
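
    If the traffic is HTTP, the web server's access log is often more telling than netstat: a CDN hides real clients behind its own IPs at the TCP level but usually forwards the original address in an X-Forwarded-For header. A hedged sketch - the log path and format are assumptions:

        # top requesters by IP in the access log
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

        # stopgap at the kernel level: drop sources holding >50 parallel connections
        sudo iptables -I INPUT -p tcp --dport 80 -m connlimit --connlimit-above 50 -j DROP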

  • Properly escaping check_command in nagios

    - by shadyabhi
    When I execute

        sudo -u nagios /usr/lib64/nagios/plugins/check_by_ssh.sh hostname "check_haproxy -u \"http://localhost:10000/haproxy?stats\;csv\""

    it runs perfectly on the server. For this, I have the following in my HAProxy.cfg:

        define service {
            use                 generic-service
            hostgroup_name      pwmail-ee-oxweb
            service_description HAProxy-ee
            servicegroups       ssh-dep
            check_command       check_by_ssh!check_haproxy -u \"http://localhost:10000/haproxy?stats\;csv\"
            contacts            sysad,mail-hosting-rt
        }

    It doesn't work: it says "Return code of 127 is out of bounds - plugin may be missing". What am I doing wrong?
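
    One likely culprit, offered as a guess: in Nagios object files a semicolon starts a comment, so everything from ;csv onward may be silently stripped from the check_command, leaving a mangled command line that exits 127. The documented workaround is to park the semicolon-containing string in a $USERn$ macro in resource.cfg, since macros are substituted after the config is parsed; $USER3$ below is an arbitrary choice:

        # resource.cfg
        $USER3$=http://localhost:10000/haproxy?stats;csv

        # service definition
        check_command check_by_ssh!check_haproxy -u "$USER3$"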

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs into a Git repository, so there should not be any changes in any of the repository folders. I was thinking about setting up a cron job to check for uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might do it, but grep and cron jobs are not my strong side. Here is some sample output from git status, standing in the folder containing the Git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    And standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: being synced up with origin is not important; there should just be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also Git repos for content (static web sites, web apps, WordPress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it is being used for development of one of the web apps.
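
    A hedged sketch of the cron side: git status --porcelain prints nothing at all when the tree is clean, so there is no need to grep the human-readable output. The repository path and mail address are placeholders:

        #!/bin/sh
        # warn if the working tree has uncommitted or untracked changes
        cd /path/gitrepo || exit 1
        if [ -n "$(git status --porcelain)" ]; then
            git status --short | mail -s "uncommitted changes in /path/gitrepo" admin@example.com
        fi

    run hourly from cron with something like:

        0 * * * * /usr/local/bin/check-git-clean.sh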

  • gmetric data submitted doesn't follow dmax value

    - by 580farm
    I have a custom script that queries a metric port for an application I'm running and submits the parsed values to Ganglia via gmetric. The script runs every minute, so I submit the data using the following gmetric options:

        /usr/sbin/gmetric -g ec2 -s positive -t uint32 -d 600 -n "$NAME" -v $VALUE -x 60

    But for some reason there are still gaps in the graphed data. Is there something in my formatting that prevents the dmax/TTL of the last metric received from being honored? Has anyone doing custom metric collection run into this problem before and can shed some insight, or provide tips on how best to correct it?

  • How can I tell if my live web-server is overloaded?

    - by Nick G
    We have a live web server which doesn't seem to be performing all that well. It's a Dell PowerEdge machine, a few years old (dual core, 4GB), which is hosting about 20 low-traffic websites, yet it doesn't seem to be as fast as it used to be. How can we determine the cause? If it were website traffic, I would expect high CPU, but CPU usage is quite low and hovers around the 15-30% mark except for very brief periods. I'm wondering whether, rather than CPU performance, the problem is disk thrashing, due to the constant reads/writes of all the small web files and database queries. It has 4x 7200 RPM SATA drives in RAID 5. So is there a way to check that it's not disk thrashing?
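
    Assuming the sysstat package is installed, iostat's extended statistics give a fairly direct read on whether the disks are saturated - a sketch:

        # extended per-device stats, refreshed every 5 seconds
        iostat -x 5

    As a rough reading, sustained %util near 100, or await climbing far above svctm, points at the disks rather than the CPU.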

  • Two RAID 1s in a single system?

    - by DebAtCQ
    I'm building a RAID array for the first time and I have a question about having multiple RAID 1 arrays in a single Windows 7 system. I'm a bit of an organizational freak with my data, and I currently have two separate hard drives I want to mirror. The new motherboard I'm looking to buy supports RAID, so my questions are:

        a) Can I have more than one RAID 1 array in a single system?
        b) Would I have to buy a separate controller for the second array?

  • Exchange stops working after changing System Time

    - by L.M
    I am currently in a situation where the system time of my Windows machine differs by 6 hours from the actual local time. I tried changing the system time 6 hours back to match the actual local time. The issue is that when the system time is changed, Exchange stops working - it won't start anymore. When I change the time back, Exchange works again. Here is the error shown when I try to open the management console after changing the system time:

        The following error occurred while attempting to connect to the specified server "servername".
        The attempt to connect to http://servername/PowerShell using "Kerberos" authentication failed:
        Connecting to remote server failed with the following error message: Access is denied.
        For more information, see the about_Remote_Troubleshooting Help topic.

    Any solutions to this problem?
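
    A hedged observation: the Kerberos "Access is denied" wording points at clock skew - Kerberos rejects tickets when client and domain controller clocks differ by more than the allowed skew (5 minutes by default), so moving one machine 6 hours away from the DC breaks authentication. Checking and resyncing against the domain hierarchy might look like this (the DC name is a placeholder):

        :: offset between this machine and the DC
        w32tm /stripchart /computer:dc01.domain.local /samples:3

        :: point the machine back at the domain time hierarchy and resync
        w32tm /config /syncfromflags:domhier /update
        w32tm /resync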

  • What is the best way to compare vhost traffic?

    - by Bob Flemming
    Recently one of my servers has been subjected to malicious DDoS attacks. I have about 12 websites hosted on the server, which uses name-based virtual hosting, and I am trying to identify which virtual host(s) are getting bombarded with traffic. I have used tools such as iftop, which is good for identifying hosts that are consuming lots of bandwidth, and apachetop, which is useful for identifying which resources are being requested on a single vhost. What I really need is a tool that shows the amount of traffic being received by each vhost in real time, so I can easily see which vhost is being targeted. Does such a tool exist?
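
    If each vhost already writes its own CustomLog (the usual setup with name-based vhosts), a crude but serviceable real-time view is one apachetop per suspect log; alternatively, a shared log prefixed with %v allows a quick requests-per-vhost tally. Both log paths below are assumptions:

        # follow a single vhost's traffic in real time
        apachetop -f /var/log/apache2/site1-access.log

        # requests per vhost over the most recent traffic (shared %v-prefixed log)
        tail -n 5000 /var/log/apache2/vhost_access.log | awk '{c[$1]++} END {for (v in c) print c[v], v}' | sort -rn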

  • Graphing/charting of CPU utilisation

    - by Peter
    So Nagios can be good at graphing particular resource utilisation or other metrics, but I'm looking for a tool that can output a chart or other graphical representation of how much CPU time/utilisation all services on a server are currently consuming. I think New Relic could probably achieve this to an extent, but I was wondering if there is a popular open source app used for this. In case I am explaining this badly, my actual problem is that I have a shared server with suEXEC enabled (i.e. httpd CGI running under multiple user accounts). I'd like to know which users are using the most CPU during periods of the day.
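
    As a stopgap while hunting for a graphing tool, per-user CPU can be aggregated from ps and sampled from cron (or wrapped as a Munin plugin) - a sketch:

        # current CPU share per user, highest first
        ps -eo user:20,pcpu --no-headers | awk '{cpu[$1] += $2} END {for (u in cpu) printf "%6.1f %s\n", cpu[u], u}' | sort -rn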

  • UNIX tool to dump a selection of HTML?

    - by jldugger
    I'm looking to monitor changes on websites, and my current approach is being defeated by a rotating top banner. Is there a UNIX tool that takes a selection parameter (an id attribute or an XPath expression), reads HTML from stdin, and prints to stdout the subtree matching the selection? For example, given an HTML document, I want to filter out everything but the subtree of the element with id="content". Basically, I'm looking for the simplest HTML/XML equivalent of grep.
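
    One candidate that is often already installed (it ships with libxml2; the --xpath switch appears in newer builds): xmllint has an HTML parser, so extracting a single subtree from stdin is a one-liner. A sketch, with stderr muted because real-world HTML makes the parser chatty:

        # print only the subtree of the element with id="content"
        curl -s http://example.com/ | xmllint --html --xpath '//*[@id="content"]' - 2>/dev/null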
