Search Results

Search found 4938 results on 198 pages for 'unix timestamp'.

Page 53 of 198

  • How to change the mail server domain so it's not displaying the IP? Changing [email protected] to [email protected]

    - by Pavel
    Hi guys. I'm kinda a noob as a server admin, so please bear with me. I've installed the Postfix mail server and everything is working fine, but the 'from' box is displaying [email protected]. I want to set it up so it displays domainname.com instead of the IP. I just hope you know what I mean. My main.cf in the postfix folder looks like this:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = mail.thevinylfactory
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mail.thevinylfactory.com, thevinylfactory, localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    Can anyone help me with this one? If you need any more details, please let me know. Thanks in advance!
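
    A minimal sketch of one common approach (the domain is assumed from the question): give myhostname a fully qualified name (note the file above has mail.thevinylfactory without the .com) and point myorigin at the bare domain, either directly or via /etc/mailname:

        # /etc/postfix/main.cf -- sketch; thevinylfactory.com is assumed
        myhostname = mail.thevinylfactory.com
        mydomain   = thevinylfactory.com
        myorigin   = $mydomain

        # or keep "myorigin = /etc/mailname" and instead run:
        #   echo thevinylfactory.com | sudo tee /etc/mailname
        # then reload:
        #   sudo postfix reload

    With myorigin set to the domain, locally submitted mail is rewritten to user@thevinylfactory.com instead of user@<ip>.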

    Read the article

  • Cron process not starting

    - by vkris
    I have an EC2 image created with cron jobs. These jobs fail to run; I discovered that the cron process itself had not started. So I included /usr/sbin/cron in /etc/rc.d/rc.local and created another image, but still, for some reason, the cron process does not start on bootup. If I restart the machine, the cron process runs; it just doesn't run when the instance first boots. Any reason why this is happening? Also, are there any alternatives for this?
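
    One common fix, sketched under the assumption of a Red Hat-style EC2 image where the service is named crond: register the daemon with the init system instead of rc.local, so it starts in the right order at boot:

        # assumes a RHEL-ish image with service 'crond'; on Debian/Ubuntu images
        # the service is 'cron' and update-rc.d is the equivalent tool
        sudo chkconfig crond on        # enable at boot
        sudo service crond start       # start it now, without a reboot
        sudo service crond status      # verify it is running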

    Read the article

  • overload environment

    - by Richo
    I've recently switched to keeping my home directory, across all my machines, in an svn repo, meaning that my utility scripts and configuration (irssi, vim, zsh, screen, etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared with my default environment inside an X session, before any of my personal configuration is applied, I haven't even increased it by 50%, but some of the machines I work on are low-resource. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternatively writing a single module for storing config that all of my utilities can inherit.
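
    If the flat-file route looks attractive, here is a minimal sketch of loading config on demand; the file layout and helper name are hypothetical:

        # hypothetical helper: source a tool's config only when the tool is
        # invoked, keeping the login environment small
        load_conf() {
            local conf="$HOME/.config/env/$1"
            [ -r "$conf" ] && . "$conf"
        }

        # wrap a utility so its settings are loaded lazily
        irssi() {
            load_conf irssi
            command irssi "$@"
        }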

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
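
    For what it's worth, RLIMIT_NPROC (what ulimit -u sets) counts all processes owned by the user, not just children of the current shell, which is why the threshold sits near the user's existing process count rather than at 1 or 2. A quick way to see the baseline:

        # count the processes this user already owns; a new fork only succeeds
        # while the total stays within 'ulimit -u'
        ps -u "$USER" -o pid= | wc -l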

    Read the article

  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running on a terminal and close whenever I'm finished working with that file. Currently, I'm using this:

        while read; do ./myfile.py; done

    And then I need to go to that terminal and press Enter whenever I save that file in my editor. What I want is something like this:

        while sleep_until_file_has_changed myfile.py; do ./myfile.py; done

    Or any other solution as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now. Update: I want something simple, discardable if possible. What's more, I want something to run in a terminal because I want to see the program output (I want to see error messages). About the answers: Thanks for all your answers! All of them are very good, and each one takes a very different approach from the others. Since I need to accept only one, I'm accepting the one that I've actually used (it was simple, quick and easy to remember), even though I know it is not the most elegant.
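
    One widely used pattern on Linux, assuming the inotify-tools package is installed, fits the requested shape almost exactly:

        # blocks until myfile.py is written to, then reruns it; repeat forever
        while inotifywait -qq -e close_write myfile.py; do
            ./myfile.py
        done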

    Read the article

  • DHCP server inside a virtual machine can't see other machines

    - by William
    Hi, I set up a private network of virtual machines, and one of the machines is the DHCP server for the group. I want to specify a next-server for the DHCP server, but I'm having trouble connecting to any of the machines I lease IPs to. I'm just trying to do a simple ping/ssh to 10.0.0.252 (a machine with a lease), but it doesn't seem to respond. Any advice? I'm assuming I need to be able to connect to my next-server, but maybe I'm wrong. Thanks.

    Read the article

  • Encoding with FFmpeg using a FIFO

    - by Ashot Martirosyan
    Hello everyone. I'm trying to convert a FLAC audio file to an AAC file using the command line, so I wrote this:

        ffmpeg -i input.flac temp.wav
        faac -q 120 -o output.m4a temp.wav

    It's working fine. Now I want to do the same using a FIFO, so I'm writing this:

        mkfifo temp.wav
        ffmpeg -i input.flac temp.wav &
        faac -q 120 -o output.m4a temp.wav

    And it's freezing. Could you tell me what I'm doing wrong? Thanks a lot, and sorry for my English.
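
    One likely culprit, offered as a guess: temp.wav already exists (it is the FIFO), so the backgrounded ffmpeg stops at its interactive "overwrite?" prompt and never writes a byte, while faac blocks reading the empty pipe. A sketch of the workaround:

        mkfifo temp.wav
        # -y answers the overwrite prompt; -f wav makes the output format
        # explicit, since a FIFO gives ffmpeg no regular file to infer it from
        ffmpeg -y -i input.flac -f wav temp.wav &
        faac -q 120 -o output.m4a temp.wav
        rm temp.wav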

    Read the article

  • Logging commands executed by remote shell scripts

    - by user145836
    I've noticed that when running a script that connects to a number of our servers (to essentially run batch commands) that the commands aren't logged in the user's .sh_history or .bash_history files. Is there a place where this is logged (assuming the script itself isn't doing the logging and I'm not tee'ing the output anywhere)? I'm talking specifically about AIX, but I would assume this question applies to all the *nix flavors. Thanks!
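
    Shell history is only written by interactive shells, so commands run over a non-interactive ssh session never reach .sh_history or .bash_history, and by default nothing else records them either. One low-effort option, assuming a logger(1) that accepts -t and -p (as on AIX and Linux), is to route the remote batch output through syslog:

        # run the batch command remotely and record its output via syslog,
        # tagged so it is easy to grep later
        ssh somehost '/path/to/batch_command' 2>&1 | logger -t batchrun -p user.info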

    Read the article

  • Rpm removal does not remove delivered dirs and leaves garbage

    - by Jim
    I deliver an application via an RPM. This application delivers various directories and files; e.g., under /opt/internal/com a file structure is copied. I was expecting that on rpm -e all of the file structure delivered under /opt/internal/com would be removed, but it is not. There are directories in the file structure that are non-empty. Is this the reason? These (non-empty) directories were created by the RPM installation, so I would expect them to be "owned" by RPM and removed automatically. Is this wrong? Am I supposed to remove them manually?
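
    rpm -e only removes paths listed in the package's %files manifest; directories the package created but never claimed are left behind. A sketch of claiming them with %dir (the paths are illustrative, not from the actual package):

        # %files section of the .spec -- hypothetical layout
        %files
        %dir /opt/internal/com
        %dir /opt/internal/com/data
        /opt/internal/com/bin/app.sh

    Even then, rpm refuses to delete a claimed directory that still contains files it does not own (e.g., data written at runtime); those need a %postun scriptlet or manual cleanup.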

    Read the article

  • Autosaving on emacs or xemacs files (preferably on loss of focus)

    - by Spencer
    Ideally I want to replicate in Emacs some functionality from TextMate, whereby on loss of focus (i.e., I click away from the buffer) my file saves. If this isn't possible, I want to customize Emacs so that it autosaves the file for every character I write. When I say this, I don't mean I want to autosave to the ~ backup files; I want to save the file I am currently working on. I am working on a Fedora VM. Note I am not looking for a backup or autosave: I want the file I am actually in to save, so that if I loaded the HTML file I am editing in a web browser, it would reflect my new changes without me having to explicitly save.

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:

        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25

    This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo, it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is, how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
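
    A plausible explanation, not a certainty: an unprivileged rsync cannot preserve ownership, so the existing current snapshot carries the backup user's uid on every file; run as root, rsync does compare ownership, sees a mismatch against the source, and re-copies instead of hard-linking. A dry run with itemized output can confirm:

        # -n = dry run, -i = itemize changes; 'o'/'g' letters in the change
        # flags mean owner/group differ, which is enough to defeat --link-dest
        sudo rsync -axni --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/test | head -20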

    Read the article

  • After setting ulimit to unlimited, I am not able to log in to the machine

    - by user419534
    For one requirement, I had to set a ulimit on one of my machines to unlimited. For this I changed /etc/security/limits.conf as below:

        # End of file
        oracle   soft   nofile    unlimited
        oracle   hard   nofile    unlimited
        oracle   soft   nproc     131072
        oracle   hard   nproc     131072
        oracle   soft   core      unlimited
        oracle   hard   core      unlimited
        oracle   soft   memlock   50000000
        oracle   hard   memlock   50000000
        *        soft   nofile    unlimited
        *        hard   nofile    unlimited

    and changed /etc/profile:

        if [ $USER = "oracle" ]; then
            if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p unlimited
                ulimit -n unlimited
            else
                ulimit -u unlimited -n unlimited
            fi
        fi

    I logged out, and now I am not able to connect to the machine at all. Could someone please help with this?
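
    The usual cause of this lockout: nofile cannot actually be unlimited (the hard ceiling is the kernel's fs.nr_open), so pam_limits fails while setting up the session and every login is refused. A sketch of the repair from a rescue shell or single-user mode; the 65536 value is an assumption, anything at or below fs.nr_open works:

        # replace the impossible 'unlimited' nofile entries with a finite cap
        sed -i 's/nofile[[:space:]]*unlimited/nofile 65536/' /etc/security/limits.conf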

    Read the article

  • Creating a link to a name-changing directory

    - by groove1534
    I have Ubuntu 12.04 installed using Wubi alongside Win7. I'm trying to create a link to my "My Documents" directory, which is located on my C drive: C:\Users\Myuser\My Documents\. Since Ubuntu is installed on D:\, which is the "host", my C drive is accessible via /media/some_changing_hex. This hex gets changed each time I restart my machine. So I need, somehow, to create a link that uses a regex, OR a link that somehow gets the first (in this case, only) subdirectory in /media (something like all_subdirectories[0]). How do I do that?
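
    An alternative that avoids chasing the name entirely, sketched under the assumption that the Windows C: partition is /dev/sda1 (check with sudo blkid): mount it at a fixed path via fstab and point the link there:

        # give the Windows partition a stable mount point (device is an assumption)
        sudo mkdir -p /media/windows
        echo '/dev/sda1  /media/windows  ntfs-3g  defaults  0  0' | sudo tee -a /etc/fstab
        sudo mount /media/windows

        # the link target now survives reboots
        ln -s '/media/windows/Users/Myuser/My Documents' ~/my-documents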

    Read the article

  • If I change the CPU, must I reinstall the OS?

    - by dag729
    Hi, as suggested by the title, I want to change CPUs. I currently have two computers: one with Ubuntu running on an AMD Athlon 64 dual core 5200+, and the other with FreeBSD running on an AMD Sempron single core LE-1250. I would like to swap (I am not sure that this is the correct term...) the CPUs between the two computers, that is, take the dual core from the Ubuntu PC and put it inside the FreeBSD PC, and vice versa. The mobo is the same. Do you think I will encounter problems?

    Read the article

  • Help updating cron entry using regular expressions

    - by Uday
    Hi, I am trying to update a cron entry, NOT by using crontab -e, but with shell commands. For example, the cron entry is like this:

        10 * * * * /home/localuser/foo.sh -b 1 -h 4 > foo_output.sh 2>&1

    Now I need to edit ONLY the command-line parameters, i.e., change -b 1 -h 4 to something else that will be coming in from the user. The first thing would be to write the crontab to a temp file and then manipulate that temp file. Now, is there an easy way to edit that line using sed or something? The crude way would be to delete the entire line, write a new line with the entire expression, and then load that into cron. I am not very good with regular expressions. My system supports sed -i, so I was thinking this could be done in a single-line command. Thanks in advance.
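
    A sketch of the pipe-through approach; since crontab -l writes the table to stdout and crontab - reinstalls it from stdin, no temp file is needed (the new flag values below are hypothetical placeholders):

        NEW_OPTS='-b 2 -h 8'   # hypothetical user-supplied values
        # rewrite only the option string on the foo.sh line, reinstall the table
        crontab -l \
          | sed "s|\(foo\.sh\) -b [0-9]* -h [0-9]*|\1 ${NEW_OPTS}|" \
          | crontab -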

    Read the article

  • Script not dumping files to the expected path

    - by user3319390
    I have a script that needs to extract the matching cname from each line of my log file and then, matching on scode, dump the line into files like access and error logs under $cname/$year/$month/$day/:

        #!/bin/sh
        #base_dir="/home/vizion/Desktop"
        path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log"
        name=$(basename "$path" ".log")
        for x in *.log; do
            year=${x:9:4}
            month=${x:13:2}
            day=${x:15:2}
        done
        while read -r line
        do
            cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}')
            scode=$(echo ${line} | awk -F"[ ]" '{print $9}')
            [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
            [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && {
                # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/"
                echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log
            }
            [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && {
                [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day"
                echo ${line} >> ${cname}_${name}_error.log
            }
        done < $path

    I am able to filter the logs, but they are not dumped to the exact location; they end up in other locations. Please suggest corrections to the script.
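
    One concrete bug stands out: the error branch appends to a bare relative filename, so error logs land in whatever the current working directory happens to be, while the access branch uses an absolute path. A sketch of the fix gives the error branch the same prefix:

        # error branch, with the same absolute prefix as the access branch
        echo "${line}" >> "/home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_error.log"

    The mkdir calls are similarly relative to the current directory, so prefixing those paths too (or cd'ing to /home/vizion/Desktop at the top of the script) keeps everything in one tree.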

    Read the article

  • /dev/shm (shared memory) on linux

    - by Kirzilla
    Hello. Let's imagine that we have 8 GB of RAM on a server, and I'm mounting /dev/shm with 4 GB:

        mount -o remount,size=4G /dev/shm

    Will this memory be strictly reserved for shared memory, or, if /dev/shm is empty, could the memory be used by regular applications (web server, PHP, etc.)? PS: Sorry for my English. I'm asking because I've just checked df -h and found

        tmpfs                 6.0G     0  6.0G   0% /dev/shm

    on an 8 GB RAM server. I don't know who made this setup, but it seems awful to me. Thank you!
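
    For reference, tmpfs allocates pages lazily: the size option is only a ceiling, not a reservation, so an empty /dev/shm costs essentially nothing. One way to see that the cap and the actual usage are independent:

        df -h /dev/shm   # shows the configured ceiling (e.g. 6.0G) and bytes in use
        free -m          # an empty tmpfs leaves the 'used' memory untouched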

    Read the article

  • How can I recursively verify the permissions within a given subdirectory?

    - by Mike
    I'd like to verify that nothing within /foo/bar is chmod 777. Or, alternatively, I'd like to make sure that nothing within /foo/bar is owned by user1 or in group1. Is there any way I can recursively verify the permissions within a given subdirectory to make sure there aren't any security holes? Note that I do not want to change all the permissions to something specific, nor do I want to change the owner to something specific, so a recursive chmod or chown won't do it... Thanks!
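
    find can express both checks read-only; a small sketch:

        # anything with mode exactly 777
        find /foo/bar -perm 777

        # anything world-writable at all (often the more useful audit)
        find /foo/bar -perm -o+w

        # anything owned by user1 or belonging to group1
        find /foo/bar \( -user user1 -o -group group1 \)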

    Read the article

  • BASH - Run command for each line in output of previous command

    - by user1582375
    All, I want to list all network services using:

        networksetup -listallnetworkservices

    I then want to run the command below for each line produced by the above command:

        networksetup -setautoproxyurl "A LINE FROM ABOVE" http://etc...

    Additionally, I only want to issue the setautoproxyurl command for services with "Ethernet" or "Wi-Fi" in the name. This is what I have so far:

        networksetup -listallnetworkservices | while read line; do networksetup -setautoproxy $line http://etc...
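
    A sketch of a corrected loop (the .pac URL is a placeholder). Three details matter: the first output line of -listallnetworkservices is a legend, not a service; service names contain spaces, so they must be read whole and quoted; and the verb is -setautoproxyurl:

        networksetup -listallnetworkservices | tail -n +2 \
          | grep -E 'Ethernet|Wi-?Fi' \
          | while IFS= read -r service; do
                networksetup -setautoproxyurl "$service" "http://example.com/proxy.pac"
            done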

    Read the article

  • pam_ldap.so before pam_unix.so? Is it ever possible?

    - by user1075993
    We have a couple of servers with PAM+LDAP. The configuration is standard (see http://arthurdejong.org/nss-pam-ldapd/setup or http://wiki.debian.org/LDAP/PAM). For example, /etc/pam.d/common-auth contains:

        auth    sufficient      pam_unix.so nullok_secure
        auth    requisite       pam_succeed_if.so uid >= 1000 quiet
        auth    sufficient      pam_ldap.so use_first_pass
        auth    required        pam_deny.so

    And, of course, it works for both LDAP and local users. But every login goes first to pam_unix.so, fails, and only then tries pam_ldap.so successfully. As a result, we have a well-known failure message for every single LDAP user login:

        pam_unix(<some_service>:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<some_host> user=<some_user>

    I have up to 60000 such log messages per day, and I want to change the configuration so that PAM tries LDAP authentication first, and only if it fails tries pam_unix.so (I think it can improve the I/O performance of the server). But if I change common-auth to the following:

        auth    sufficient      pam_ldap.so use_first_pass
        auth    sufficient      pam_unix.so nullok_secure
        auth    required        pam_deny.so

    then I simply can't log in anymore with a local (non-LDAP) user (e.g., via ssh). Does somebody know the right configuration? Why do Debian and nss-pam-ldapd put pam_unix.so first by default? Is there really no way to change it? Thank you in advance. P.S. I don't want to disable logs; I want to put LDAP authentication in first place.
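
    A plausible reason the reordered stack locks out local users: use_first_pass tells pam_ldap to reuse a password collected by an earlier module, and when it runs first there is none to reuse. A sketch of a stack where pam_ldap prompts itself and pam_unix quietly reuses that password for local accounts:

        auth    sufficient      pam_ldap.so
        auth    sufficient      pam_unix.so nullok_secure try_first_pass
        auth    required        pam_deny.so

    The usual rationale for Debian's default order is worth weighing before changing it: with pam_unix first, local accounts (including root) can still log in when the LDAP server is unreachable.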

    Read the article

  • Better logging for cronjob output

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

        - Doesn't spam the messages to email or the console
        - Doesn't create yet another crufty logfile which requires cleanup months or years later
        - Captures the log information somewhere, so it can be viewed later if desired
        - Works on most unixes
        - Fits into an existing log infrastructure
        - Uses common syslog conventions like 'facility'

    Some of these are third-party scripts, and don't always do logging internally.
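
    One approach that ticks most of those boxes, sketched directly on the crontab line: pipe the status output into logger, which hands it to the local syslog with a tag and a facility, so rotation and retention follow the existing log infrastructure:

        # stdout+stderr go to syslog (facility daemon, priority info), tagged
        # so the lines are easy to grep out later
        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | logger -t nsca_check_disk -p daemon.info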

    Read the article

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

        1. Allow _www access to my Documents folder.
        2. Put the files I want to share outside of my Documents folder.
        3. Hard-link the files outside of my Documents folder, and point apache to the hard links.

    None of these solutions is acceptable to me, however:

        1. I don't feel safe allowing _www access to my entire Documents folder.
        2. I really want to keep the files in my Documents folder, for other reasons.
        3. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it).

    Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
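
    A middle ground worth knowing, offered as a sketch: on Unix, execute permission on a directory grants traversal without read, so _www can be allowed to pass through Documents without being able to list its contents:

        # o+x = traverse only: _www can reach abc but cannot enumerate Documents
        chmod o+x /Users/Me/Documents
        # make the shared folder itself readable and listable
        chmod -R o+rX /Users/Me/Documents/abc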

    Read the article
