Search Results


  • Overwriting output to a text file

    - by Naveen Gamage
    I'm trying to write a wget command's output to a text file, but it always appends to the file.

        #!/bin/sh

        download() {
            local url=$1
            echo -n " "
            wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
                sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
            echo " DONE"
        }

        file="$1"
        echo -n "Downloading $file:"
        download "$file" > file.log

    I tried using >, but it won't work. Where am I going wrong?
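
    A minimal sketch of the usual fix, assuming the goal is a fresh log per run: truncate the file once at the top of the script, then append everywhere else, so no single redirection has to do both jobs.

        : > file.log                          # truncate (or create) the log once per run
        echo -n "Downloading $file:" >> file.log
        download "$file" >> file.log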

    Read the article

  • Determine hostname of connected ethernet switch

    - by Beastcraft
    I have a bond across two interfaces. I'd like to monitor whether they are connected to different switches (the switches have hostnames): ethX should be connected to switchX and ethY to switchY. Currently I'm checking this with the following command: tcpdump -vv -s0 -i ethX ether host 01:00:0c:cc:cc:cc. After a minute it prints out the hostname (and much more information) from the switch. Are there any other solutions for monitoring this? Greetings
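
    A scripted sketch of the same CDP-based check, looping over both slave interfaces; the 0x2000 capture filter and the 70-second timeout are assumptions (CDP announcements default to once per minute):

        #!/bin/sh
        for if in ethX ethY; do
            # capture a single CDP frame and pull the advertised Device-ID
            timeout 70 tcpdump -nn -v -s 1500 -c 1 -i "$if" \
                'ether[20:2] == 0x2000' 2>/dev/null | grep 'Device-ID'
        done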

    Read the article

  • tail -f updates slowly

    - by Cliff
    I'm not sure why, but on my MacBook Pro running Lion I get slow updates when I issue "tail -f" on a log file that is being written to. I used to use this command all the time at my last company, but that was typically on Linux machines. The only things I can think of that would slow the updates are buffering of the output and/or a different update interval on a Mac vs. Linux. I've tried with several commands, all of which write to stdout relatively quickly but give slow updates to the tail command. Any ideas? Update: I am merely running a Python script with a bunch of prints in it and redirecting to a file via "> my output.log". I expect to see updates in near real time, but that doesn't seem to be the case.
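
    A quick sketch of the most likely culprit: Python block-buffers stdout when it is redirected to a file, so prints only reach the log every few kilobytes. Forcing unbuffered output lets tail -f track in near real time (the script name here is illustrative):

        python -u myscript.py > output.log    # -u disables stdout buffering
        # or, inside the script, flush after each print:
        # sys.stdout.flush()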

    Read the article

  • Cannot unset env variables from a script

    - by w00t
    Hi, I am trying to unset all environment variables from within a script. The script runs fine, but if I run env afterwards it still shows all the variables set. If I run the command from the CLI, it works and the variables are unset. unset `env | awk -F= '/^\w/ {print $1}' | xargs` Any idea how to run this from a script? Also, any idea how to source /etc/profile from a script? That doesn't work either. I need to set variables with the same names but different paths, depending on the instances my users need.
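
    A hedged sketch of the usual explanation: a script runs in a child process, so any unset it performs dies with that process; to affect the calling shell the file has to be sourced, not executed. For example:

        # cleanenv.sh -- meant to be sourced, not run
        unset `env | awk -F= '/^\w/ {print $1}' | xargs`
        . /etc/profile

        # invoke it with "." (or "source") so it runs in the current shell:
        . ./cleanenv.sh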

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration for opening a connection to a third machine by passing through another one, using an id_rsa key (which asks me for a password) to connect to that third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful. The problem, better described, goes as follows:

        Local machine:        user1@localhost
        Intermediary machine: user1@inter
        Remote target:        user2@final

    I'm able to make the entire connection using a pseudo-tty: ssh -t inter ssh user2@final (this asks me for the password of the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" simply with ssh final. What I've got so far -- which does not work -- in my .ssh/config file is:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a password). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but no configuration I tried succeeded. Hope somebody can help me here! Regards!

    P.S.: the thread I've tried to get some help from: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
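
    One hedged observation about this layout: with ProxyCommand the id_rsa_2 key must live on the local machine (the proxy only relays the TCP stream), and the "password" prompt is most likely that key's passphrase. A minimal sketch to stop the prompting, assuming an ssh-agent is available:

        eval "$(ssh-agent -s)"     # start an agent for this session
        ssh-add ~/.ssh/id_rsa_2    # type the passphrase once
        ssh final                  # subsequent connections won't prompt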

    Read the article

  • How can I run a job when the server load is low?

    - by jberryman
    I have a command that takes a disk snapshot (on EC2: freezing an XFS disk and running an EBS snapshot command), which is set to run on a regular schedule as a cron job. Ideally I would like the command to be delayed for a period of time if the disk is being used heavily at the moment the task is scheduled to run. I'm afraid that using nice/ionice might not have the desired effect, as I would like the script to run with high priority while it is actually running (i.e. wait for a good time, then finish fast). Thanks.
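
    A hedged cron-side sketch: poll the 1-minute load average and only proceed once it drops below a threshold (or a deadline passes), then run the snapshot at full speed. The threshold, deadline, and snapshot command name are placeholders:

        #!/bin/sh
        THRESHOLD=2.0
        for i in $(seq 1 60); do              # give up waiting after ~1 hour
            load=$(cut -d' ' -f1 /proc/loadavg)
            [ "$(awk -v l="$load" -v t=$THRESHOLD 'BEGIN {print (l<t)}')" = 1 ] && break
            sleep 60
        done
        /usr/local/bin/run-ebs-snapshot.sh    # illustrative name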

    Read the article

  • Cannot understand this script

    - by Jim
    Can someone help me understand this script? It is from sysconf_add and I am new to scripting. I need to do something similar.

        function add_word() {
            local word=$1
            local word_quoted=$2
            if ! word_present; then
                $debug && cp $file $tmpf
                # on line $lineno, splice $word_quoted in before the closing
                # quote of an assignment like: var="existing words"
                sed -i -e "${lineno} { s/^[[:space:]]*\($var=\".*\)\(\".*\)/\1 $word_quoted\2/; s/=\" /=\"/ }" $file
                $debug && diff -u $tmpf $file
            else
                echo \"$word\" already present
            fi
            # some balancing for vim"s syntax highlighting
        }
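
    A worked illustration of what the sed line appears to do, using made-up values: with lineno=3, var=OPTS and word_quoted=gamma, line 3 of the file changes like this:

        OPTS="alpha beta"   ->   OPTS="alpha beta gamma"
        OPTS=""             ->   OPTS="gamma"     # the second s/=\" /=\"/ cleans up the empty case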

    Read the article

  • Run command before and after printing with CUPS?

    - by leto
    Hello, this is a home setup. A central print server (Linux) manages the queue, and an HP 2430DTN is attached to it via 100 Mbit/s Ethernet. The printer is hooked up to a manageable power source. A shell script watches the queue on the server (lpstat -o) and turns the printer on when there is a job. If the queue is empty for 10 minutes it turns the printer off. Now this setup messes up, stops the printer, etc. after a couple of weeks and is in general "not so reliable". I now know how to fix the stop-printer issue, but: is there a way to run my turn-printer-on and turn-printer-off scripts directly from CUPS, without watching the queue? That would be so cool!
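
    One hedged approach is wrapping the real CUPS backend, so the power scripts run around every job instead of a polling loop. This is only a sketch -- the script paths and device URI are placeholders, and a production backend must also answer CUPS's zero-argument discovery call:

        #!/bin/sh
        # e.g. installed as /usr/lib/cups/backend/powersocket
        /usr/local/bin/printer-on.sh
        DEVICE_URI="socket://printer:9100" /usr/lib/cups/backend/socket "$@"
        status=$?
        /usr/local/bin/printer-off.sh
        exit $status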

    Read the article

  • Ubuntu + Unable to Edit .bashrc file because of ReadOnly

    - by Napster
    I want to remove this issue: WARNING: Unable to verify SSL certificate for api.heroku.com. To disable SSL verification, run with HEROKU_SSL_VERIFY=disable. By Googling I found a few solutions. One of them is to add HEROKU_SSL_VERIFY=disable to .bashrc. Unfortunately, I am not able to edit that file; it gives the error 'readonly' option is set (add ! to override). I used !wq in place of :wq, but no response. Please suggest how to resolve this issue... Thanks
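
    Two vim-side fixes worth trying, depending on what is actually read-only; note the override command is :wq! -- the ! goes at the end, not the front:

        :wq!                                 " forces the write if vim was opened read-only
        :w !sudo tee % > /dev/null           " writes via sudo if it's a permissions problem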

    Read the article

  • Is it possible to code on two different computers simultaneously?

    - by Muhammad
    I want to work with another programmer, and I want the source code to be live in real time on both of our screens. Is this possible on Mac OS X or Linux? We're going to be using OS X, but occasionally we might need to add an Ubuntu computer too. Is there a way I can do this using ssh, a shell-based program, or even a good GUI? I thought Coda might be capable of this, but it's not really working. Has anyone ever done this? I'm not looking for git/svn/or any other version control system. This is more of a live coding session. :)
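
    A minimal sketch of one shell-based option: a shared tmux (or screen) session on a machine both of you can ssh into. Assumes tmux is installed and both logins can reach the socket (same group, in this sketch):

        # first user, on the shared machine:
        tmux -S /tmp/pair new -s pair
        chmod 660 /tmp/pair           # let the partner's group attach

        # second user, after ssh-ing to the same machine:
        tmux -S /tmp/pair attach -t pair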

    Read the article

  • How to manage credentials/access to multiple ssh servers

    - by geoaxis
    I would like to write a script that can maintain multiple servers via SSH. I want to control authentication/authorization in such a manner that authentication is done by a gateway, and any other access is routed through this ssh server to internal services without further authentication/authorization requirements. So user A can log into server_1, for example, and then ssh to server_2 without any other authentication and do whatever he is allowed to do on server_2 (like shut down mysql, upgrade it, and restart it; this could be done via some remote shell script). The problem I am trying to solve is coming up with a deployment script for a Java EE system involving databases and tomcat instances, which need to be shut down and re-spawned. The requirement is a deployment script with as little human interaction as possible, for both developers and operations.
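
    A hedged sketch of the common pattern: key-based login at the gateway plus agent forwarding, so the inner hops need no extra credentials. Host names are placeholders, and -W needs a reasonably recent OpenSSH:

        # ~/.ssh/config on the developer machine
        Host server_2
            User deploy
            ProxyCommand ssh -A user@gateway -W %h:%p
            ForwardAgent yes

        # then a deployment step is a single non-interactive call:
        ssh server_2 '/etc/init.d/mysql restart'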

    Read the article

  • How can I remove old log entries from a log file and archive them somewhere else in Linux?

    - by Mike B
    CentOS 4.x. I apologize in advance if this is not the appropriate place to ask this question; it pertains to a Linux server / IT admin task. I've got a log file on an old CentOS 4.x server, and I want to remove log entries older than a certain date and place them in a new file for archiving. Here's an example of the log format:

        2012-06-07 22:32:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:03,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:04,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:10,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:12,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:15,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:32:40,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:32:58,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|
        2012-06-07 22:33:01,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|blah blah blah
        2012-06-07 22:33:02,289 ABC:0|Foo|Foo2|4.4|1234|Some Event|123|

    Essentially, I'm looking for a one-liner that will do the following:

    1. Find any events older than a provided YYYY-MM-DD and remove them from the primary log file.
    2. Take the deleted events from step 1 and put them in a new log file.
    3. (Optional) Compress the new archive log file holding the deleted events.

    I'm aware that there are log rotation tools that do this, but this should just be a one-time task, so I'd prefer not to set that up. Additional notes: if the date part is tricky or too resource-intensive, an alternative would be to just keep the last X number of lines and move the rest. I was originally thinking of something like tail -n 10000 > newfile.txt, but that would mean moving the "good" logs to a new file and then doing a name swap... and then I'd still need to remove the "good" entries from the archive. This particular log file is pretty large (1 GB), so I'd prefer the task to be as resource- and time-efficient as possible. The extra pipes in the log concern me, and I'm not sure if I'd need extra protection in the commands to keep them from causing problems.
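
    A hedged one-liner sketch: because the timestamp is YYYY-MM-DD, plain string comparison on the first field sorts chronologically, and the pipes in the rest of each line never reach the shell. File names and the cutoff date are placeholders:

        awk -v d="2012-06-01" '{ print > ($1 < d ? "archive.log" : "kept.log") }' primary.log \
            && gzip archive.log && mv kept.log primary.log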

    Read the article

  • How to get the pid of a running process using a single command that parses the output of ps?

    - by Sorin Sbarnea
    I am looking for a single line that returns the pid of a running process. Currently I have: ps -A -o pid,cmd|grep xxx|head -n 1, and this returns the first pid and command. I need only the first number from the output, ignoring the rest. I suppose sed or awk would help here, but my experience with them is limited. Also, this has another problem: it will return the pid of grep if xxx is not running. It's really important to have a single line, as I want to reuse the output for doing something else, like killing that process.
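
    Two single-line sketches; pgrep was built for exactly this, and the awk variant avoids both the grep self-match and the extra head:

        pgrep -o xxx        # oldest matching pid; prints nothing if none is running
        # or, sticking with ps:
        ps -A -o pid,cmd | awk '/xxx/ && !/awk/ {print $1; exit}'
        # for the kill use case, pkill xxx collapses it further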

    Read the article

  • Where to put unix sockets

    - by James Willson
    I am new to this, so sorry if it's obvious. I am running a Debian server and installing the likes of uWSGI, Nginx, etc. on there. The configurations keep talking about pointing to "sockets", and in the build options I seem to be able to specify where the sockets for each program go. By default it looks like most of them go in /tmp/ (not all of them). Is this a good place for them to go? I'm trying to keep things as organised as possible, but just bunging them in my tmp directory doesn't seem like the best option.
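
    A hedged convention sketch: /var/run (or /run on newer systems) is the traditional home for sockets and pidfiles, usually one subdirectory per service. A uWSGI ini fragment might look like this (paths and ownership are illustrative):

        [uwsgi]
        socket = /var/run/uwsgi/myapp.sock
        chown-socket = www-data:www-data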

    Read the article

  • Preserve embedded album art when converting from .flac to .ogg

    - by Profpatsch
    I want to convert my archived .flac library to .ogg for daily use. Using find ./ -iname '*.flac' -print0 | xargs -0 -n1 oggenc -q6 on the root music folder and then deleting every .flac (having copies of them in the archive) seems straightforward. After trying it with one file it worked, and all of the tags were transferred too, except for one: embedded album art! I always prefer embedded covers over folder images, since I have some albums with varying covers. One possible solution is discussed here, but the script only works if the image is already extracted: Embed album art in OGG through command line in linux. One possible solution I thought about was extracting the album art from every song (not every song has one, though, and some have 2 or 3!), temporarily saving it, and then using the script to include it in the finished .ogg. But then I want to increase the number of processes xargs runs simultaneously to save time, so the temp images need distinct names. Is there a (Linux) program that knows how to handle this? Or is there a finished script floating around somewhere? It would be nice if oggenc supported adding embedded cover art, and it really is a shame it doesn't, since these two formats should (in theory) share the same tag format. Edit: 15 days and no one has even tried to answer. It's funny, most of my questions don't get answered. Too hard? Wrong SE site?
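
    A hedged per-file sketch of the extract-then-re-embed idea, with a unique temp directory per file so parallel xargs runs don't collide (the re-embedding step is what the linked script covers and is only stubbed here):

        convert_one() {
            f=$1
            tmp=$(mktemp -d)                                  # distinct per invocation
            metaflac --export-picture-to="$tmp/cover" "$f" 2>/dev/null
            oggenc -q6 -o "${f%.flac}.ogg" "$f"
            # if a cover was exported, re-embed it here via the linked script
            rm -rf "$tmp"
        }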

    Read the article

  • Passing multiple sets of arguments to a command

    - by Alec
    instances contains several whitespace-separated strings, as does snapshots. I want to run the command below with each instance-snapshot pair: ec2-attach-volume --instance $instances --device /dev/sdf $snapshots. For example, if instances contains A B C, and snapshots contains 1 2 3, I want the command to be called like so:

        ec2-attach-volume -C cert.pem -K pk.pem --instance A --device /dev/sdf 1
        ec2-attach-volume -C cert.pem -K pk.pem --instance B --device /dev/sdf 2
        ec2-attach-volume -C cert.pem -K pk.pem --instance C --device /dev/sdf 3

    I can do either one or the other with xargs -n 1, but how do I do both?
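
    A hedged sketch that pairs the two lists positionally using the shell's own positional parameters, so no external zip tool is needed:

        set -- $snapshots                 # $1 $2 $3 ... now hold the snapshots
        for i in $instances; do
            ec2-attach-volume -C cert.pem -K pk.pem --instance "$i" --device /dev/sdf "$1"
            shift
        done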

    Read the article

  • SAFE MODE Restriction in effect. The script is not allowed to access a directory owned by uid

    - by user57221
    I am running a dedicated server with multiple websites. I have created a global directory for scripts common to all websites, rather than repeating them in every website directory. How can I make this global directory accessible to all websites? I am getting the following error in my browser when a website tries to access the global directory: Warning: require_once() [function.require-once]: SAFE MODE Restriction in effect. The script whose uid is XXXX is not allowed to access /vhosts/globallibrary/Zend/Application.php owned by uid XXXX. I changed the ownership of the global directory for website X, so it works fine there. Later I added another website, Y, and now I am getting the same error again; if I change the CHOWN for website Y, then website X gets the same error. I don't want to disable the safe mode restriction. Is there a workaround so that this global directory is accessible by all websites? The global directory is on the same level as all the websites. Is it good practice to enable safe mode for websites?
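
    Two hedged php.ini-level options that keep safe mode on (both directives exist in PHP <= 5.3; the path is this server's shared library):

        ; exempt the shared library path from the UID check entirely
        safe_mode_include_dir = /vhosts/globallibrary
        ; or relax the comparison from UID to GID and put every site's user in one common group
        safe_mode_gid = On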

    Read the article

  • Add entire 300 GB filesystem to Git Annex repository?

    - by Ryan Lester
    By default, I get an error that the process has too many open files. If I lift the limit manually, I get an error that I'm out of memory. For whatever reason, it seems that Git Annex in its current state is not optimised for this sort of task (adding thousands of files to a repository at once). As a possible solution, my next thought was to do something like:

        cd /
        find . -type d | git annex add --$NONRECURSIVELY
        find . -type f | git annex add
        # need to add the parent directories of each file first, or adding the files fails

    The problem with this solution is that the documentation doesn't seem to offer a way to non-recursively add a directory in Git Annex. Is there something I'm missing, or a workaround for this? If my proposed solution is a dead end, are there other ways that people have solved this problem?
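
    A hedged batching sketch that sidesteps the hypothetical non-recursive flag: hand the file list to git annex in bounded chunks, so no single invocation holds thousands of files open (the chunk size is a guess to tune against the open-file limit):

        cd /
        find . -type f -print0 | xargs -0 -n 1000 git annex add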

    Read the article

  • How do I read multiple lines from STDIN into a variable?

    - by The Wicked Flea
    I've been googling this question to no avail. I'm automating a build process here at work, and all I'm trying to do is get version numbers and a tiny description of the build, which may be multi-line. The system this runs on is OS X 10.6.8. I've seen everything from using cat to processing each line as necessary, and I can't figure out what I should use and why. Attempts so far:

        read -d '' versionNotes

    Results in garbled input if the user has to use the backspace key. Also, there's no good way to terminate the input, as ^D doesn't terminate and ^C just exits the process.

        read -d 'END' versionNotes

    Works... but still garbles the input if the backspace key is needed.

        while read versionNotes
        do
            echo " $versionNotes" >> "source/application.yml"
        done

    Doesn't properly end the input (because I'm too late to look up matching against an empty string).
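
    A hedged sketch that keeps line editing intact by reading one line at a time through readline (-e, a bash extension available on 10.6's bash) and stopping at a lone END line:

        versionNotes=""
        while IFS= read -e -r line; do
            [ "$line" = "END" ] && break
            versionNotes="$versionNotes$line"$'\n'
        done
        printf '%s' "$versionNotes" >> "source/application.yml"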

    Read the article

  • Use msysgit/"Git for Windows" to navigate Windows shortcuts?

    - by Darthfett
    I use msysgit on Windows to use git, but I often want to navigate through a Windows-style *.lnk shortcut. I typically manage my file structure through Windows Explorer, so using a different type of shortcut (such as creating a hard or soft link in git) isn't feasible. How would I navigate through this type of shortcut? For example:

        PCUser@PCName ~
        $ cd Desktop

        PCUser@PCName ~/Desktop
        $ ls
        Scripts.lnk

        PCUser@PCName ~/Desktop
        $ cd Scripts.lnk
        sh.exe": cd: Scripts.lnk: Not a directory

    Is it possible to change this behavior so that, instead of getting an error, it just goes to the location of the directory? Alternatively, is there a command to get the path in a *.lnk file?
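
    A hedged helper sketch: Windows Script Host can report a .lnk's target, so a small function can shell out to powershell.exe for it (assumes PowerShell is on PATH and is untested in plain msysgit; pwd -W is the msys built-in that prints the Windows-style current directory):

        lnktarget() {
            powershell.exe -NoProfile -Command \
                "(New-Object -ComObject WScript.Shell).CreateShortcut('$(pwd -W)/$1').TargetPath" \
                | tr -d '\r'
        }
        # usage: cd "$(lnktarget Scripts.lnk)"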

    Read the article

  • How to remove a tagged block of text in a file?

    - by EmpireJones
    How can I remove all instances of tagged blocks of text in a file with sed, grep, or another program? If I have a file which contains:

        random text
        // START TEXT
        internal text
        // END TEXT
        more random
        // START TEXT
        asdf
        // END TEXT
        text

    how can I remove all blocks of text within the start/end lines, producing the following?

        random text
        more random
        text
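
    A sed range-delete does this in one pass; the address pair matches each START..END block, inclusive:

        sed '/^\/\/ START TEXT$/,/^\/\/ END TEXT$/d' input.txt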

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move .bash_history to a different location. The home directories of my users are (because I run a VPS) in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and it should be). Because of this, I have changed the AuthorizedKeysFile for SSH connections away from the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate the keys. For the same reason I want to move the .bash_history file to /home/%u/.bash_history, where %u is the current user.
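
    A short sketch: bash consults the HISTFILE variable for where history lives, so a system-wide profile snippet can relocate it per user (the profile.d path is one conventional place, not a requirement):

        # /etc/profile.d/histfile.sh
        export HISTFILE="/home/$USER/.bash_history"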

    Read the article

  • Is there a unix command to output time elapsed during a command?

    - by Olivier Lacan
    I love using time to find out how long a command took to execute, but when dealing with commands that execute sub-commands internally (and produce output that lets you tell when each of those sub-commands starts running), it would be really great to be able to tell after how many seconds (or milliseconds) a specific sub-command started running. When I say sub-command, really the only way to distinguish these from the outside is by what is printed to standard out. Really, this seems like it should be an option to time.
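
    A hedged sketch that timestamps each output line with the seconds elapsed since launch (moreutils' ts offers the same via ts -s, if it happens to be installed):

        t0=$(date +%s)
        some_long_command 2>&1 | while IFS= read -r line; do
            printf '%4d  %s\n' $(( $(date +%s) - t0 )) "$line"
        done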

    Read the article

  • Preventing users from copying and altering a script

    - by EverythingRightPlace
    Is it possible to deny the right to copy files? I have a script which should be executable by others. They are also allowed to read the file (though it would not be a problem to forbid reading). But I don't want the script to be changed and then executed. It's not a problem to set those permissions, but one could easily copy, change, and run the script. Can this even be avoided? /edit The OS is Red Hat Enterprise Linux Workstation release 6.2 (Santiago).
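
    A hedged note and sketch: anything a user can read they can copy, so the usual workaround is to make the script unreadable to ordinary users and grant execution through sudo instead (the sudoers line is edited with visudo; names are illustrative):

        chmod 700 /usr/local/bin/myscript.sh      # only root can read or edit it
        # in /etc/sudoers:
        # %users ALL=(root) NOPASSWD: /usr/local/bin/myscript.sh
        # users then run:  sudo /usr/local/bin/myscript.sh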

    Read the article
