Search Results

Search found 27515 results on 1101 pages for 'embedded linux'.


  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server with Ubuntu Server which is running pureFTP. So far all is well, but I would like to know what I should be monitoring so that I can spot potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare them over time to spot things like memory leaks, but I would like to know what experienced admins do.

    Also, how do I do a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command that I can run as a cron job late at night. How often should it be run?

    What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP, and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks
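    A low-effort starting point, sketched below under the assumption that the root filesystem is ext3/ext4 and a mail transport is configured (the device path and address are placeholders): widen the automatic fsck interval with tune2fs instead of being surprised at boot, and mail out a nightly auth.log summary from cron.

        #!/bin/bash
        # Nightly summary of failed/invalid SSH attempts (address is a placeholder)
        grep -E "Fail|Invalid" /var/log/auth.log | mail -s "auth.log summary" admin@example.com

        # Control when the boot-time check happens instead of being surprised by it:
        # check every 30 mounts or every 30 days, whichever comes first
        tune2fs -c 30 -i 30d /dev/sda1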

    Read the article

  • Script to run chown on all folders and setting the owner as the folder name minus the trailing /

    - by Shikoki
    Some numpty ran chown -R username. in the /home folder on our webserver, thinking he was in the desired folder. Needless to say, the server is throwing a lot of wobblies. We have over 200 websites and I don't want to chown them all individually, so I'm trying to write a script that will change the owner of each folder to the folder's name, without the trailing /. This is all I have so far; once I can remove the / it will be fine, but I'd also like to check whether the name contains a . in it, and only run the command if it doesn't, otherwise skip to the next one.

        #!/bin/bash
        for f in *
        do
            test=$f
            # manipulate the test variable
            chown -R $test $f
        done

    Any help would be great! Thanks in advance!
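    For what it's worth, a possible sketch of the finished loop, using bash parameter expansion to strip the trailing slash and a glob pattern match to skip names containing a dot:

        #!/bin/bash
        # Iterate over directories only (the trailing / in the glob guarantees that)
        for f in */ ; do
            owner=${f%/}                # strip the trailing slash
            if [[ $owner != *.* ]]; then
                chown -R "$owner" "$f"  # owner name matches the directory name
            fi
        done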

    Read the article

  • Barnyard Service - MySQL Error

    - by SLYN
    I installed barnyard2 and set it up as a service. When I run service barnyard2 start, barnyard2 fails. After running tail -100 /var/log/messages I see an error like this (the #012 sequences are syslog-escaped newlines, unfolded here):

        ERROR database: 'mysql' support is not compiled into this build of snort
        Aug 22 11:52:06 barnyard2[25771]: FATAL ERROR: If this build of barnyard2 was obtained as a binary
        distribution (e.g., rpm, or Windows), then check for alternate builds that contain the necessary
        'mysql' support.
        If this build of barnyard2 was compiled by you, then re-run the ./configure script using the
        '--with-mysql' switch. For non-standard installations of a database, the '--with-mysql=DIR' syntax
        may need to be used to specify the base directory of the DB install.
        See the database documentation for cursory details (doc/README.database) and the URL to the most
        recent database plugin documentation.
        Aug 22 11:52:06 barnyard2[25771]: Barnyard2 exiting

    What should I do to solve this problem? When I installed barnyard2, I used these commands:

        # ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        # make ; make install

    (My system is CentOS 6.5 x86_64.)
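    One plausible cause, sketched below as an assumption rather than a confirmed diagnosis: configure silently disabled MySQL support because the client headers were missing, so installing them and rebuilding from a clean tree is worth a try:

        # Install the MySQL client headers, then rebuild from a clean tree
        yum install mysql-devel
        make clean
        ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        # Check the configure output to confirm mysql support was actually detected
        make && make install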

    Read the article

  • Set LD_LIBRARY_PATH and CLASSPATH on cluster nodes before running a hadoop job

    - by Ashish Sharma
    I need to set LD_LIBRARY_PATH and CLASSPATH before running a job on a cluster. In LD_LIBRARY_PATH I need to add the location of some jars which are required while running the job, as these jars are available on my cluster; similarly with CLASSPATH. I have a 3-node cluster, and I need to set LD_LIBRARY_PATH and CLASSPATH on all 3 data nodes so that the required jars are available while the job runs.
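    A minimal sketch of one common approach, assuming a classic Hadoop layout: export the variables in conf/hadoop-env.sh on each node, then restart the daemons so the task JVMs inherit them. The paths below are placeholders.

        # In conf/hadoop-env.sh on every node (paths are placeholders):
        export LD_LIBRARY_PATH=/opt/mylibs/native:$LD_LIBRARY_PATH
        export HADOOP_CLASSPATH=/opt/mylibs/jars/*:$HADOOP_CLASSPATH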

    Read the article

  • Backing up to smaller drive

    - by Dave
    In a few hours I'll have a new 500GB Sony laptop, filled with the usual Sony rubbish which I'll promptly be replacing with Ubuntu or Crunchbang or something. However, first I want to make a full clone of the drive (including recovery partitions), should I wish to return it to Sony or sell it on in its factory state. The problem is that the only backup drives I have are less than 500GB - the biggest I have is 250GB or so! So I need to backup and compress on-the-fly. What's the best way to do this? Presumably dd piped into gzip would do the trick, or does anyone have any other suggestions to accomplish this?
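    For the record, a minimal sketch of the dd-into-gzip idea, run from a live USB so the disk is quiescent; the device name and mount point are assumptions, and whether the result fits in 250GB depends on how compressible the mostly-empty factory install is.

        # Clone the whole disk, compressing on the fly (GNU dd's status=progress shows throughput)
        dd if=/dev/sda bs=4M status=progress | gzip -c > /media/backup/sony-factory.img.gz

        # Restoring the factory image later:
        # gunzip -c /media/backup/sony-factory.img.gz | dd of=/dev/sda bs=4M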

    Read the article

  • Right solution for /etc/hosts file reset on reboot

    - by user846226
    I've just installed Funtoo, and after setting the FQDN in /etc/conf.d/hostname I noticed that when I set a list of aliases in the /etc/hosts file, it gets overwritten on each reboot. Some people suggest pointing the aliases at the 127.0.0.2 IP address, but that's not a valid solution for me. Could someone point me to the file where I should place entries like

        127.0.0.1 local.foo
        127.0.0.1 local.bar

    so that they persist in /etc/hosts after rebooting? Thanks! PS: I think openresolv could be what is overwriting the file.
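    If the culprit really is unknown, one hedged way to catch whatever rewrites the file, assuming auditd is available on the system:

        # Watch /etc/hosts for writes and attribute changes
        auditctl -w /etc/hosts -p wa -k hosts-watch
        # After the next reboot, see which process touched it:
        ausearch -k hosts-watch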

    Read the article

  • Setting differing ACLs on directories and files

    - by durandal
    Quick ACL question: I want to set up default permissions for a file share so that everyone can rwx all of the directories and so that all newly created files are rw. Everyone who is accessing this share is in the same group, so this isn't a concern. I have looked at doing this via ACLs without changing all of the users' umasks and such. Here are my current invocations: setfacl -Rdm g:mygroup:rwx share_name setfacl -Rm g:mygroup:rwx share_name My problem is that while I want all of the newly created sub-directories to be rwx, I only want newly created files to be rw. Does anyone have a better method to achieve my desired end-result? Is there some way to set ACLs on directories separately from files, in a similar vein to "chmod +x" vs. "chmod +X"? Thanks
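    Two half-answers that may combine into what's wanted, hedged because ACL masking is subtle: for the existing tree, setfacl accepts a capital X which, like chmod +X, grants execute only on directories and already-executable files; for newly created files, the inherited default ACL is limited by the creating process's mode, so files created 0666 end up rw even when the default entry says rwx.

        # Existing content: rwx on directories, rw on plain files (capital X, as in chmod +X)
        setfacl -Rm g:mygroup:rwX share_name
        # Default ACL for inheritance: new files are still masked by their creation mode,
        # so a 0666 create yields rw- despite the rwx entry
        setfacl -Rdm g:mygroup:rwx share_name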

    Read the article

  • scsi and ata entries for same hard drive under /dev/disk/by-id

    - by John Dibling
    I am trying to set up a ZFS pool using 4 bare drives which I have attached to my Ubuntu system via a SATA hot-swap backplane. These are Hitachi SATA drives. When I list the contents of /dev/disk/by-id, I see two entries for each drive:

        root@scorpius:/dev/disk/by-id# ls | grep Hitachi
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC
        ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG0ZJ7C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1064C
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG190AC
        scsi-SATA_Hitachi_HDS5C30_MJ1323YNG1DGPC

    I know these are the same drives because I wrote down the serial numbers, and all the other drives in this system are either Seagate or WD. The serial number for the first one, for example, is YNG0ZJ7C. Why are there two entries here for each drive? More to the point, when I create my ZFS pool, which one should I use: the scsi- one or the ata- one?
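    Both name families are udev symlinks to the same /dev/sdX nodes (ls -l /dev/disk/by-id shows the targets), so a sketch like the following would work with either set; the pool name and raidz level are placeholders, and the ata- names are used only because they carry the full model string:

        # Pool name and raidz level are placeholders
        zpool create tank raidz \
          ata-Hitachi_HDS5C3030ALA630_MJ1323YNG0ZJ7C \
          ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1064C \
          ata-Hitachi_HDS5C3030ALA630_MJ1323YNG190AC \
          ata-Hitachi_HDS5C3030ALA630_MJ1323YNG1DGPC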

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load?

    Now, "top" and similar tools aren't the answer, because they show either CPU or memory usage but not both at the same time. What I need is a single command which I might be able to type as it happens - something that will figure out any of: "the system is trying to swap 8 GB of RAM to disk because of process X", or "process X seeks all over the disk", or "process X uses 400% CPU".

    So what I'm looking for is iostat, htop/atop and similar tools rolled into one, with output like this:

        1235 cp        - disk thrashing
        87   chrome    - uses 2 GB of RAM
        137  nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I have to analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", and is quickly overwhelmed by terms like "resident size", "virtual memory" or "process life cycle".

    My argument goes like this: a user notices a problem. There can be thousands of reasons... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what these numbers mean.

    What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. So what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this one produces many IRQs, this one allocates a lot of RAM (and it's still growing)". This would be a relatively short list. It would be much simpler for someone new to this to locate the culprit from such a list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM - the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
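    Not the meta tool asked for, but a rough one-shot sketch of the kind of triage it would automate (assumes the sysstat package is installed for iostat):

        # Top CPU and memory consumers in one glance
        ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -6
        # Per-device disk utilisation over a 1-second sample
        iostat -x 1 2 | tail -n 20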

    Read the article

  • Prevent rmdir -p from traversing above a certain directory

    - by thepurplepixel
    I hacked together this script to rsync some files over ssh. The --remove-source-files option of rsync seems to remove the files it transfers, which is what I want. However, I also want the directories those files are placed in to be gone as well. The current part of the find command, -exec rmdir -p {} \;, tries to remove the parent directory (in this case, /srv/torrents) but fails because it doesn't have the right permissions. What I'd like to do is stop rmdir from traversing above the directory find is run in, or find another solution to get rid of all the empty folders. I've thought of using some kind of loop with find and running rmdir without the -p switch, but I thought it wouldn't work out. Essentially, is there an alternative way to remove all the empty directories under the parent directory? Thanks in advance!

        #!/bin/bash
        HOST='<hostname>'
        USER='<username>'
        DIR='<destination directory>'
        SOURCE='/srv/torrents/'

        rsync -e "ssh -l $USER" --remove-source-files -h -4 -r --stats -m --progress -i $SOURCE $HOST:$DIR
        find $SOURCE -mindepth 1 -type d -empty -prune -exec rmdir -p \{\} \;
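    One possible replacement for the last line: a depth-first find removes empty directories from the bottom up, and -mindepth 1 guarantees it can never touch /srv/torrents itself:

        # Depth-first removal of empty directories below the source, parent excluded
        find /srv/torrents/ -mindepth 1 -depth -type d -empty -exec rmdir {} \;
        # or, with GNU find (-delete implies -depth):
        # find /srv/torrents/ -mindepth 1 -type d -empty -delete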

    Read the article

  • QNAP TS-419p as a VPN Gateway?

    - by heisenberg
    Hello, I am hoping one of you might be able to help. I want to make files stored in shared folders on a QNAP TS-409p available to users over a VPN link. How is this possible? Can someone explain what I need to do: what do I need to do at the router, and what do I need to do on the QNAP NAS? Effectively, what I want to do is use the built-in Windows VPN client to connect to my home network and then be able to browse the shared folders. Thanks in advance.

    Read the article

  • What is the meaning of the 'Personalities' feature under /proc/mdstat

    - by drcelus
    On some systems I see this:

        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
        md1 : active raid1 sdb1[1] sda1[0]
              10485696 blocks [2/2] [UU]
        md2 : active raid1 sdb2[1] sda2[0]
              477371328 blocks [2/2] [UU]

    And other systems show:

        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204788 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              4193272 blocks super 1.1 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              483985276 blocks super 1.1 [2/2] [UU]
              bitmap: 0/4 pages [0KB], 65536KB chunk

    I wonder what the meaning of Personalities is, and what the impact of having different values is.
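    For context, a hedged reading: the Personalities line lists the RAID levels the running kernel can currently assemble, i.e. which md personality drivers are built in or loaded as modules; a quick cross-check (module names vary slightly between kernels):

        # Loaded md personality modules should roughly match the Personalities line
        lsmod | grep -E '^(raid|linear|multipath|faulty)'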

    Read the article

  • Apache intermittently aborting requests

    - by Adam Phillips
    I have just been dealing with a problem whereby HTTP requests are being aborted, seemingly at random. On any particular page in the website, a number of the assets (img, css, etc.) failed to load. If you refreshed, the page might work fine, the same set of assets might fail to load, or different assets might fail to load. The Net tab in Firefox was returning 'Aborted' in the HTTP status code column for the failed assets, even though, in the case of images, the image previews were still working. There was nothing in any of the Apache logs about the requests that failed; however, since it seemed to point to an Apache issue, we restarted Apache. The first time we tried, it made no difference, but about 10 minutes later, in the absence of a better solution, we tried again. Bizarrely, the problem disappeared immediately. So now the site seems to be running fine again, but it's rather unsettling: both the intermittent nature of the problem and the lack of an explanation for its resolution. Has anyone seen anything like this before, and if so, did you find out the reason behind it? Many Thanks

    Read the article

  • Microsoft in the top 20 Linux kernel contributors, 75% of the code supplied by paid developers

    Microsoft in the top 20 Linux kernel contributors; 75% of the code supplied by paid developers. The publication of the list of Linux kernel contributors confirms once again that Microsoft sees open source as an opportunity rather than a threat. The company, which has already moved several of its projects to open source and, for the first time, accepted external contributions to some of them, has climbed into the top 20 Linux kernel contributors as of version ...

    Read the article

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc:

        ls -la /proc/1234/fd

    I get the following output:

        lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
        lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
        l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
        lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
        lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I get the meaning of a file descriptor, and from my example I understand file descriptors 0, 1, 2 and 6 - they are tied to physical resources on my computer - and I guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, as long as I'm asking questions already :) what is a pipe?
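    A hedged pointer: the bracketed numbers appear to be the inode numbers of the anonymous pipe and socket objects in the kernel's pipefs/sockfs, so tools that report inodes can locate the other end; for example, assuming lsof is installed:

        # Which processes hold the socket with inode 2744160313?
        lsof | grep 2744160313
        # TCP sockets can also be matched against the inode column of /proc/net/tcp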

    Read the article

  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
    We have a number of filesystems for our computational cluster, with a lot of users that store a lot of really large files. We'd like to monitor the filesystems and help optimize their usage, as well as plan for expansion. In order to do this, we need some way to monitor how these filesystems are used. Essentially I'd like to know all sorts of statistics about the files:

    - Age
    - Frequency of access
    - Last accessed times
    - Types
    - Sizes

    Ideally this information would be available in aggregate form for any directory, so that we could monitor it based on project or user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?
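    Should the write-it-yourself route win out, a rough starting point with GNU find: dump modification time, access time, and size per file, to be aggregated later by directory or owner (the path is a placeholder):

        # epoch mtime, epoch atime, size in bytes, path -- one line per file
        find /export/projects -type f -printf '%T@ %A@ %s %p\n' > filestats.txt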

    Read the article

  • How do you get autofs and updatedb to work together?

    - by Veek.M
    /etc/my.misc:

        sda1 -fstype=ntfs,user,exec :/dev/sda1
        sda3 -fstype=ntfs,user,exec :/dev/sda3
        sda4 -fstype=ntfs,user,exec :/dev/sda4

    /etc/auto.master:

        /my /etc/my.misc --ghost

    When I run locate .pdf, I get nothing, because though the mount points (sda1, sda3, ...) are created in /my, there's nothing in them till I access them. Unfortunately this is not good enough for updatedb, and it purges its cache of /my/sdaX files. How do I prevent/solve this problem?
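    One hedged workaround: trigger each automount just before indexing, so updatedb sees real contents (mlocate must also be allowed to descend into the mounts, via PRUNEFS/PRUNEPATHS in /etc/updatedb.conf):

        #!/bin/bash
        # Touch every ghost mount point so autofs mounts it, then index
        for d in /my/*; do
            ls "$d" > /dev/null 2>&1
        done
        updatedb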

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something?

    More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
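    A hedged note on the speculation: ulimit -u sets RLIMIT_NPROC, which counts all processes owned by the user, not just descendants of the current shell, which would explain why the threshold sits near the user's existing process count rather than near zero. A quick comparison:

        # Current number of processes owned by this user -- compare with the limit
        ps -u "$USER" --no-headers | wc -l
        ulimit -u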

    Read the article

  • How to map Ctrl + ',' to the greater-than key ('>') or Ctrl + '.' to the less-than key ('<') using xmodmap?

    - by Maxrunner
    So I'm trying to create a combination of keys to generate the ISO key for the Portuguese layout. The key in question is the < key: pressing it normally generates the '<' character, and pressing it with Shift generates the '>' character. I'm trying to create a combination using xmodmap, and I want this to work for all programs. I've been searching on Google and came up with this example for Control + P = Up arrow:

        xmodmap -e "keycode 33 = p P Up"

    keycode 33 matches the p key, so where does Control come into that command? regards,

    Read the article

  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running in a terminal and close whenever I'm finished working with that file. Currently, I'm using this:

        while read; do ./myfile.py ; done

    And then I need to go to that terminal and press Enter whenever I save that file in my editor. What I want is something like this:

        while sleep_until_file_has_changed myfile.py ; do ./myfile.py ; done

    Or any other solution as easy as that.

    BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now.

    Update: I want something simple, discardable if possible. What's more, I want something to run in a terminal because I want to see the program output (I want to see error messages).

    About the answers: Thanks for all your answers! All of them are very good, and each one takes a very different approach from the others. Since I need to accept only one, I'm accepting the one that I've actually used (it was simple, quick and easy to remember), even though I know it is not the most elegant.
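    For reference, the pattern answers to this kind of question usually converge on, assuming the inotify-tools package is installed:

        # Re-run the script every time the file is written and closed
        while inotifywait -e close_write myfile.py; do
            ./myfile.py
        done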

    Read the article

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or a line of code that could let me observe a process, save the info in a number of variables, and then put the gathered info in a file. I've been trying with variations of top, but no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that works over a specified amount of time, writing lines with the info to a text file, so at the end I can have a sort of table with the data. The thing is, I'm going to stress-test the server and I would like to have the data to make some statistics. Any comments and suggestions are most welcome.
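    A minimal sketch of such a script, sampling one process into a comma-separated log every 5 seconds; the PID and interval are placeholders:

        #!/bin/bash
        PID=1234          # process to watch (placeholder)
        while kill -0 "$PID" 2>/dev/null; do
            # timestamp, %CPU, %MEM, resident set size in KB
            ps -p "$PID" -o %cpu=,%mem=,rss= | \
                awk -v t="$(date +%s)" '{print t "," $1 "," $2 "," $3}' >> usage.csv
            sleep 5
        done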

    Read the article

  • How can I avoid hard-coding YubiKey user identities into the PAM stack?

    - by CodeGnome
    The Yubico PAM Module seems to require changes to the PAM stack for each user that will be authenticated with a YubiKey. Specifically, it seems that each user's client identity must be added to the right PAM configuration file before the user can be authenticated. While it makes sense to add authorized keys to an authentication database such as /etc/yubikey_mappings or ~/.yubico/authorized_yubikeys, it seems like a bad practice to have to edit the PAM stack itself for each individual user. I would definitely like to avoid having to hard-code user identities into the PAM stack this way. So, is it possible to avoid hard-coding the id parameter to the pam_yubico.so module itself? If not, are there any other PAM modules that can leverage YubiKey authentication without hard-coding the stack?
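    For what it's worth, pam_yubico's id parameter is the API client id (per service, not per user), and the module's authfile option points at a shared mapping file, which may already be the separation being sought; a hedged sketch of a PAM line (the client id 16 is a placeholder):

        # /etc/pam.d/sshd -- user-to-key mappings live outside the stack
        auth required pam_yubico.so id=16 authfile=/etc/yubikey_mappings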

    Read the article
