Search Results

  • Logon script in Active Directory

    - by tareq838
    I am having a weird intermittent issue that affects only some users. I have a logon script that maps shared drives and displays a disclaimer every time the user logs on to a machine. The problem lately is that the logon script will not run for the user, so we get a help desk call. When one of the help desk techs logs on to the machine, the script comes up. The tech then logs off, the user logs back in, and they get the logon script. I am at my wits' end with this issue. Any help would be appreciated. It has happened on both Windows XP and Vista 64.

  • Strange strace and setuid behaviour: permission denied under strace, but not when run normally

    - by Autopulated
    This is related to this question. I have a script (fix-permissions.sh) that fixes some file permissions:

        #!/bin/bash
        sudo chown -R person:group /path/
        sudo chmod -R g+rw /path/

    And a small C program to run it, which is setuid:

        #include <sys/types.h>
        #include <unistd.h>

        int main() {
            setuid(geteuid());
            return system("/path/fix-permissions.sh");
        }

    Directory:

        -rwsr-xr-x 1 root root 7228 Feb 19 17:33 fix-permissions
        -rwx--x--x 1 root root  112 Feb 19 13:38 fix-permissions.sh

    If I do this, everything seems fine, and the permissions do get correctly fixed:

        james $ sudo su someone-else
        someone-else $ ./fix-permissions

    But if I use strace, I get:

        someone-else $ strace ./fix-permissions
        /bin/bash: /path/fix-permissions.sh: Permission denied

    Interestingly, I get the same permission-denied error with an identical setup (permissions, C program) but a different script, even when not using strace. Is this some kind of heuristic magic behaviour in setuid that I'm uncovering? How should I figure out what's going on? The system is Ubuntu 10.04.2 LTS, Linux 2.6.32.26-kvm-i386-20101122 #1 SMP.
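
    One detail worth knowing here: the kernel refuses to honour the setuid bit when a binary is executed under an unprivileged tracer, so strace silently drops the elevation and geteuid() stays at the caller's UID. A hedged way to confirm this, assuming sudo is available (running the tracer as root preserves the setuid semantics):

        # tracing as root keeps the setuid elevation intact;
        # compare the trace with the unprivileged run that fails
        sudo strace -f -o fix-permissions.trace ./fix-permissions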

  • Bash script doesn't open in terminal on reboot

    - by twigg
    Quick overview: I have created a script that reboots the laptop after x amount of time and x amount of cycles. I have added the script to the start-up applications, and the script does seem to be running in the background, but it never opens a terminal window. Am I missing something? Adding the code (this is saved in a file called countdown.sh):

        #!/bin/bash
        # check if passed.txt exists; if it does, send to soak test
        if [ -f passed.txt ]; then
            echo reboot has passed $nol cycles
            sleep 5;
            echo Starting soak tests
            sleep 5;
            rm testlog.txt;
            rm passed.txt;
            phoronix-test-suite run quick-test
            exit 0;
        fi

        # check if file testlog.txt exists; if not, create it
        if [ ! -f testlog.txt ]; then
            echo >> testlog.txt;
        fi

        # read reboot file to see how many loops have been completed
        exec < testlog.txt
        nol=0
        while read line
        do
            nol=`expr $nol + 1`
        done

        # start the countdown, x is time limit
        let x=10;
        while [ $x -gt 0 ]; do
            clear;
            figlet "Rebooting in...";
            figlet $x;
            let x-=1;
            sleep 1;
        done;
        echo reboot success $nol >> testlog.txt;
        shutdown -r now;

        # set how many times the script should shutdown the laptop
        reboot_count=1

        # if the number of reboots matches nol, stop the script and
        # create a new text file called passed.txt
        if [ "$nol" == "$reboot_count" ]; then
            echo reboot passed $nol cycles >> passed.txt;
        fi
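
    A start-up application runs its command with no controlling terminal, which would explain the background-only behaviour. A hedged fix, assuming a Debian-derived desktop where the x-terminal-emulator alternative exists (the script path is a placeholder):

        # start-up entry: launch the script inside a terminal window
        x-terminal-emulator -e bash /path/to/countdown.sh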

  • Writing a PowerShell script to copy files with certain extensions from one folder to another

    - by the_drow
    I would like to write a PowerShell script that gets the following parameters as input: the folder to copy from, the allowed extensions, the folder to copy to, a boolean indicating whether the change should restart IIS, and a username and password. Which cmdlets should I be looking at, considering that I am copying to a remote server? How do I read the parameters into variables? How do I restart IIS? Considering that I might want to copy multiple folders, how do I write a PowerShell script that invokes another PowerShell script?
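
    A minimal sketch of the parameter handling and the copy, with every name, path and server purely illustrative; note that older PowerShell versions ignore -Credential on the FileSystem provider, in which case net use is the usual fallback:

        param(
            [string]$Source,
            [string[]]$Extensions,      # e.g. "*.dll","*.config"
            [string]$Destination,       # e.g. \\webserver\wwwroot
            [switch]$RestartIIS,
            [string]$User,
            [string]$Password
        )

        # map the remote share with the supplied credentials
        $secure = ConvertTo-SecureString $Password -AsPlainText -Force
        $cred = New-Object System.Management.Automation.PSCredential($User, $secure)
        New-PSDrive -Name Dest -PSProvider FileSystem -Root $Destination -Credential $cred | Out-Null

        # copy only the allowed extensions
        Get-ChildItem -Path $Source -Recurse -Include $Extensions |
            Copy-Item -Destination "Dest:\" -Force

        # iisreset accepts a remote computer name
        if ($RestartIIS) { iisreset webserver /restart }

        # one script can invoke another simply by calling it:
        # & .\copy-folder.ps1 -Source C:\a -Destination \\webserver\wwwroot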

  • Need help with custom init script

    - by churnd
    I'm trying to set up an init script for a process on Red Hat Linux:

        #!/bin/sh
        #
        # Startup script for Conquest
        #
        # chkconfig: 345 85 15  (start/stop priorities within the boot process)
        # description: Conquest DICOM Server
        # processname: conquest
        # pidfile: /var/run/conquest.pid

        # Source function library. This creates the operating environment
        # for the process to be started.
        . /etc/rc.d/init.d/functions

        CONQ_DIR=/usr/local/conquest

        case "$1" in
          start)
                echo -n "Starting Conquest DICOM server: "
                # daemon starts only one process of a given name
                cd $CONQ_DIR && daemon --user mruser ./dgate -v
                echo
                touch /var/lock/subsys/conquest
                ;;
          stop)
                echo -n "Shutting down Conquest DICOM server: "
                killproc conquest
                echo
                rm -f /var/lock/subsys/conquest
                # only if the process generates this file
                rm -f /var/run/conquest.pid
                ;;
          status)
                status conquest
                ;;
          restart)
                $0 stop
                $0 start
                ;;
          reload)
                echo -n "Reloading process-name: "
                killproc conquest -HUP
                echo
                ;;
          *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac

        exit 0

    However, the cd $CONQ_DIR is getting ignored, because the script errors out:

        # ./conquest start
        Starting Conquest DICOM server: -bash: ./dgate: No such file or directory  [FAILED]

    For some reason, I have to run dgate as ./dgate; I cannot specify the full path /usr/local/conquest/dgate. The software came with an init script for a Debian system, so that script uses start-stop-daemon with the --chdir option to change to where dgate is, but I haven't found a way to do this with the Red Hat daemon function.
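
    Since the Red Hat daemon function has no --chdir equivalent, one hedged workaround is to do the directory change inside the command that daemon launches, via a tiny wrapper (a sketch, untested against Conquest itself; the wrapper name is an assumption):

        #!/bin/sh
        # /usr/local/conquest/dgate-wrapper.sh: change into the
        # right directory in the launched process itself
        cd /usr/local/conquest && exec ./dgate -v

    and then, in the init script:

        daemon --user mruser /usr/local/conquest/dgate-wrapper.sh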

  • Rewriting HTML links with ModProxyPerlHtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and ModProxyPerlHtml. This is my scenario:

        Domain for the proxy:                    http://www.myserver.com/
        Destination server (behind the proxy):   http://myserver.foo.com/myapp/

    The idea is to map http://www.myserver.com/ to http://myserver.foo.com/myapp/. Note that the path on the proxy is / and on the destination server it is /myapp/. All of the examples I can find on the net (like the one in the official ModProxyPerlHtml documentation) are the other way around, i.e. path /myapp/ on the proxy and / on the destination server. This is my current config, which doesn't work:

        ProxyPass / http://myserver.foo.com/myapp/
        ProxyPassReverse / http://myserver.foo.com/myapp/

        PerlInputFilterHandler Apache2::ModProxyPerlHtml
        PerlOutputFilterHandler Apache2::ModProxyPerlHtml
        SetHandler perl-script
        PerlSetVar ProxyHTMLVerbose "On"
        LogLevel Info

        <Location />
            # ProxyPassReverse /myapp/
            PerlAddVar ProxyHTMLURLMap "/myapp/ /"
            PerlAddVar ProxyHTMLURLMap "http://myserver.foo.com /"
        </Location>

    The examples use ProxyPassReverse inside the Location directive, but in my case it only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, so the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but doesn't find anything:

        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n
        [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip\n
        [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\\/javascript|text\\/html|text\\/css|text\\/xml|application\\/.*javascript|application\\/.*xml)/is\n

    What could be wrong?

  • Help me exorcise my demon-possessed logon script

    - by Detritus Maximus
    I have a user logon script that copies a file over to a subfolder of the current user's profile path. The script (only showing the line that isn't working):

        copy /Y c:\records\javasettings_Windows_x86.xml "%USERPROFILE%\Application Data\OpenOffice.org\3\user\config">>c:\records\OOo3%USERNAME%.txt 2>&1

    To diagnose why it wasn't working, I added a somelogfile.log parameter to the group policy script and found that the above command is translating to this:

        C:\WINDOWS>copy /Y c:\records\javasettings_Windows_x86.xml "C:\Documents and Settings\test2\Application Data\OpenOffice.org\3\user\config" 1>>c:\records\OOo3test2.txt 2>&1

    So the question is: how do I get rid of (exorcise) the " 1" in that line?

    Update 1: The reason the script wasn't working was that the creator didn't have any permissions on the directory. I fixed the permissions and now the file copies, but! I still have the " 1" showing in all the logs and would like to know why.

  • Wake-on-LAN script works from Mac, but not Windows 7

    - by illyich
    I have a Linux server on my local network that is set up to use Wake-on-LAN. I copied this script verbatim, just replacing the MAC address in the example use. When I run the script on a Mac, the server wakes up. When I run it from Windows 7 (32-bit Ultimate) it doesn't do anything (note that the script DOES run; I added a debug raw_input() to confirm).

  • Mount a share on a Mac using a login hook

    - by Arcath
    I have a script that mounts a Samba share to a folder on the desktop. It runs with no problems, but when it's set up as a LoginHook it doesn't mount the folder. Does anyone have a working login hook that mounts a share that they can post? Or know of any issues with mounting shares during login? This is my script (two of the lines below were cut off when it was posted):

        #!/usr/bin/env ruby
        @domain="Lancaster"
        @user=ARGV[0]
        #@user = @user.gsub(/\n/,"")
        @userfolder="/Users/" + @user.to_s
        @smbshare="//#{@user}@hercules/everyone"
        system("mkdir #{@userfolder}/Desktop/everyone")
        system("mount_smbfs #{@smbshare} #{@userfolder}/Desktop/everyone | #{@userfolde$
        system(" /usr/bin/osascript <<-EOF
        tell application \"System Events\"
        activate
        display dialog \"Welcome to the #{@domain} domain #{@user}\n\nY$
        end tell
        EOF
        ")
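
    One thing worth checking, hedged because every setup differs: login hooks run as root before the user's environment exists, so the mount can end up in root's context or fail outright. A sketch that performs the mount as the logging-in user (the hook passes the short username as its first argument; paths mirror the script above):

        #!/bin/sh
        # run the mkdir and mount inside the user's own context
        USERNAME="$1"
        su - "$USERNAME" -c "mkdir -p /Users/$USERNAME/Desktop/everyone && \
            mount_smbfs //$USERNAME@hercules/everyone /Users/$USERNAME/Desktop/everyone"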

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script builds the proper command, but when run within the script it outputs an error; however, if the same command is run manually, everything works. Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude ()
        {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.
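
    The single quotes the exclude function emits are literal characters by the time the backticks are substituted, so the shell never strips them and duplicity sees mangled extra arguments. A hedged rework that collects the flags in a bash array instead (a sketch; file names containing newlines would still break it):

        # build the exclude options as array elements so each
        # --exclude/pattern pair survives word splitting intact
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"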

  • Run a script after killing lxsession (xorg)

    - by user284194
    I am trying to run a program automatically from a bash script after killing the LXDE session. My script consists of:

        #!/bin/sh
        pkill lxsession;
        sh /home/pi/RetroPie/EmulationStation/emulationstation

    My aim is to log out of the LXDE session and run EmulationStation on my Raspberry Pi with a bash script. I'm using pkill lxsession to bypass lxsession's logout confirmation dialog. As it stands, this script just gets me to the command line from a working LXDE desktop. Thanks for reading.
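
    A hedged guess at why the second command never runs: killing lxsession tears down the session the script itself lives in, so the script dies before it reaches the next line. A sketch that detaches the follow-up command first (assuming EmulationStation can run outside X on the Pi's framebuffer, as it does in RetroPie):

        #!/bin/sh
        # detach emulationstation into its own session so the LXDE
        # teardown doesn't kill it along with this script
        nohup setsid sh /home/pi/RetroPie/EmulationStation/emulationstation \
            >/dev/null 2>&1 &
        pkill lxsession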

  • Postfix "mail-to-script" pipe only delivers empty messages

    - by user68202
    I have a problem here. I want an incoming email to be piped to a PHP script through Postfix. My system runs ISPConfig 3, Postfix and Dovecot (virtual mailbox users are stored in MySQL). I already looked into this one: How to configure postfix to pipe all incoming email to a script? The script is executed, but no message is delivered to it. My setup so far: in ISPConfig 3 I have set up the following email route:

        Active  Server       Domain            Transport  Sort by
        Yes     example.com  pipe.example.com  piper:     5

    Excerpt from my Postfix master.cf:

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper
          argv=php -q /home/piper/mail.php

    So far it is working (mail sent to [email protected]), per mail.log:

        Jun 21 16:07:11 example postfix/pipe[10948]: 235CF7613E2: to=<[email protected]>, relay=piper, delay=0.04, delays=0.01/0.01/0/0.02, dsn=2.0.0, status=sent (delivered via piper service)

    ...and there are no errors in mail.err. The mail.php script is successfully executed (it's chmod 777 and chown'ed to piper), but it creates an empty .txt file where it should contain the email message:

        -rw------- 1 piper piper 0 Jun 21 16:07 mailtext_1340287631.txt

    The mail.php script I used is the one from http://www.email2php.com/HowItWorks. If I use their (commercial) service to pipe an email to the mail.php (in an Apache2 environment) through a provided "pipe-email", the message is saved successfully and completely:

        -rw-r--r-- 1 web2 client0 1959 Jun 21 16:19 mailtext_1340288377.txt

    But as you can see, I don't want to use external services. So, what's wrong here? I think it has something to do with the delivery configuration in my system...
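
    Worth remembering when debugging this: Postfix's pipe transport hands the message to the command on standard input, so a script that reads anything else (a CGI-style environment, for instance) comes up empty. A quick hedged test is to swap the PHP command for a plain cat and see whether the raw message lands on disk (file names here are assumptions):

        piper unix - n n - - pipe
          user=piper:piper directory=/home/piper
          argv=/home/piper/capture.sh

    with capture.sh being nothing but:

        #!/bin/sh
        # dump the piped message for inspection
        cat > /home/piper/raw_message.eml

    If that file arrives with content, the transport is fine and mail.php needs to read the message from php://stdin.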

  • date and other commands no longer working in sh script

    - by williamsdb
    I have a shell script that used to run fine on Ubuntu 10.04, but since I moved to 12.04 it doesn't work as before, throwing the following messages:

        /home/checks.sh: 1: /home/checks.sh: date : not found
        find: invalid mode `0777\r'

    The script is as follows:

        date
        echo ""
        echo "Files changed in the last 24 hours"
        echo "=================================="
        find /var/www -mtime -1 | grep -iv '.log'
        echo ""
        echo ""
        echo "Files with permissions set to 777"
        echo "================================="
        find /var/www -perm 0777

    All lines work from the command line, but not in the shell script any more. I can't find anything in the manual to suggest why.
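
    The stray \r in both errors is the classic signature of Windows (CRLF) line endings; the shell sees "date\r" as the command name and "0777\r" as the mode, so the script most likely picked up CRLF endings somewhere in the move. A hedged one-line fix, assuming GNU sed (dos2unix does the same job):

        # strip carriage returns from the script in place
        sed -i 's/\r$//' /home/checks.sh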

  • Proper upstart script for hamachi?

    - by ALQ
    I've been looking for a script to supervise hamachi and mostly got it to work, except for the part that daemonizes hamachid. The following script works but is not perfect. I'm not familiar with upstart internals to debug this further.

        description "Hamachi VPN"
        author "Alexis Le-Quoc <[email protected]>"

        start on (net-device-up and local-filesystems and runlevel [2345])
        stop on runlevel [016]

        respawn
        oom never

        env DAEMON=/opt/logmein-hamachi/bin/hamachid

        pre-start script
            [ -x "$DAEMON" ]
        end script

        # should really be:
        #   expect daemon
        #   exec $DAEMON
        exec $DAEMON debug > /dev/null
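
    A hedged note on the commented-out stanza: upstart's expect must match the number of forks the daemon makes, or upstart tracks the wrong PID and the job wedges in start/killed states. If hamachid does the usual double fork when daemonizing (an assumption worth verifying with strace -f), the sketch would be:

        # track the daemonized PID instead of running in debug mode
        expect daemon
        exec $DAEMON

    If it forks only once, expect fork is the stanza to use instead.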

  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that paramete

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (also XP SP2), the put command gives off:

        504 Command not implemented for that parameter.

    FTP file:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world and not under my control. From what I understand of a 504, it means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server, but doesn't populate the file.

  • Dump Microsoft SQL Server database to an SQL script

    - by Matt Sheppard
    Is there any way to export a Microsoft SQL Server database to an SQL script? I'm looking for something which behaves like mysqldump: taking a database name and producing a single script which will recreate all the tables, stored procedures, re-insert all the data, etc. I've seen http://vyaskn.tripod.com/code.htm#inserts, but I ideally want something that recreates everything (not just the data) and works in a single step to produce the final script.

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I've got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server, and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files, including the file name, the folder the file was stored in, and the exact date the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day: I simply go to the day before the delete, browse to the proper folder, and restore the file.

    The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file, and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs hundreds of files restored; it takes all day of redundant mouse clicks. Therefore, I want to use a PowerShell script (and I'm a novice at PowerShell) to automate this process. I want to be able to create a script that I pass a file name, a folder, a recovery point date (and a protection group/server name if needed), and simply have the file restored back to its original location with some sort of success/failure notification.

    I thought it was a simple, basic task for a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn't accomplish what I really want to do (it's too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data:

        DPM Server:             BACKUP01
        Protection Group:       Document Repository
        Data Protected Server:  FILER01
        File Path:              R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf
        Date Deleted:           8/2/2010 (last recovery point = 8/1/2010)

    Bonus points: if you can help me not only create this script, but also show me how to automate it by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    I'm attempting to create a RAM disk that loads its previous contents when the system starts up and writes those contents out to a disk image every six hours. Currently, when you run the script from the terminal ("sudo bash LogToRAM.sh") everything works fine, but when it is run from launchd during startup, it doesn't. Here are the lines from the log; the first line just gives some idea of where we are in the boot process:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here are the script and plist file in question. Note that 'set -vx' is up at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but that seems unlikely, to be honest.
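
    If the timing theory is right, a hedged way to test it without restructuring anything is to have the script wait for the mount point before touching it (the path comes from the log above; the timeout is arbitrary):

        # wait up to 30 seconds for the RAM disk volume to appear
        i=0
        while [ ! -d /Volumes/LogfileRAMdisk ] && [ $i -lt 30 ]; do
            sleep 1
            i=$((i + 1))
        done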

  • emacs ORG-mode "headless" export-as commands?

    - by Seamus
    When I use org-export-as-latex or org-export-as-html, org-mode turns my buffer into a .tex file or .html file. But I don't want all the extra junk that it adds to the file: I want to handle the documentclass and everything myself and just \input the org-generated file. (Or the analogous thing for HTML with PHP.) So if my org file just has:

        * Section
          - Stuff
          - Things

    I want the org-mode command to output just:

        \section{Section}
        \begin{itemize}
        \item Stuff
        \item Things
        \end{itemize}

    without any of the extra \tableofcontents junk that org adds to it. I know I could define my own kind of #+LaTeX_CLASS that could add the packages I want and so on, but I don't want to do things that way (and that wouldn't remove the \maketitle or the spurious \vspace* that org insists on inserting). Is there a command to do this "headless" parsing and converting? I had a look but it's not obvious from the documentation. Presumably some low-level org command is doing the parsing and converting I want, but I couldn't find what it was called from looking at the docs and C-h pages... This is not a question about HTML or LaTeX but about emacs org-mode. So don't kick it off to some other site...
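
    For what it's worth, the exporters do accept a body-only flag that skips the preamble, \maketitle and \tableofcontents; a hedged sketch, since argument positions differ between Org versions (check C-h f for the one at hand):

        ;; newer Org (ox framework): the fourth optional argument
        ;; is BODY-ONLY
        (org-latex-export-to-latex nil nil nil t)

        ;; older Org: org-export-as-latex also takes a BODY-ONLY
        ;; argument near the end of its optional parameter list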

  • Website keeps loading until initial script finishes

    - by wardy277
    Hi, I have a highly used server (running Plesk). I have some long scripts that take a while to process (huge MySQL database). I have found that in one browser, while a running script is loading, I cannot view any other parts of the site until the script finishes; it seems that all the requests go off, but they don't get served until the initial script finishes. I thought this might be a server-wide issue, but it is not: if I use another computer I can view the site fine, and even on the same computer with a different browser I can navigate fine while the script still loads. I think it must limit the number of requests per session. Is this correct? Is there any way to configure this to allow 2-3 other requests per session? It is really bad that when I am on the phone to a client, having just run a long report, I cannot use the site or follow what they are saying until the page has loaded. Chris
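
    The per-session serialization described here is the classic PHP session-lock pattern, assuming the site uses PHP's default file-based sessions (a fair bet on Plesk): session_start() takes an exclusive lock on the session file, so a second request from the same browser blocks until the first releases it. A hedged sketch of the usual fix inside the long-running script:

        <?php
        session_start();
        // ...read whatever session data the report needs...

        // release the session lock so other requests from this
        // browser can be served while the report keeps running
        session_write_close();

        // long-running report work continues here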

  • Is there a way to automate changing filenames in <link> and <script> tags

    - by nepsdotin
    When we use an Expires header for text files like .js and .css, the contents are cached in the browser; to get new content we need to change the file names in the <link> and <script> tags of the HTML files. When we make changes, how can we automate this? I may have a bunch of HTML files in multiple folders, also in subdirectories. There would be a text file, filelist.txt:

        OldName              NewName
        oldfile1-ver-1.0.js  oldfile1-ver-2.0.js
        oldfile2-ver-1.0.js  oldfile2-ver-2.0.js
        oldfile3-ver-1.0.js  oldfile3-ver-2.0.js
        oldfile4-ver-1.0.js  oldfile4-ver-2.0.js

    The script should change every oldfile1-ver-1.0.js into oldfile1-ver-2.0.js in the HTML and PHP files. I would run this script before I start uploading. Finally, the script could create a list of the files and line numbers where it made an update. The solution can be in Perl/PHP/batch or anything that's nice and elegant.
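
    A hedged sketch in shell, since any of the languages named would do (assumes GNU sed and grep, and that the first line of filelist.txt is the header):

        #!/bin/sh
        # log each pending change with file name and line number,
        # then apply the rename in every .html and .php file
        tail -n +2 filelist.txt | while read OLD NEW; do
            [ -z "$OLD" ] && continue
            grep -rn --include='*.html' --include='*.php' "$OLD" . >> changes.log
            # names are treated as regexes; escape dots for strictness
            find . \( -name '*.html' -o -name '*.php' \) \
                -exec sed -i "s/$OLD/$NEW/g" {} +
        done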

  • How to import a text file into powershell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from largest mailbox, and put that data into an email in HTML format to send to myself. So far I can get the data and push it to a text file, as well as create an email and send it to myself; I just can't seem to get it all put together. I've been trying to use ConvertTo-Html, but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" instead of the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html and just have it pipe the data to a text file and pull from it, but then it's all run together with no formatting. I don't need to save the file; I'd just like to run the command, get the data, put it in HTML and mail it to myself. Here's what I have currently:

        # Connects to database and returns information on all users,
        # organized by Total Item Size, User
        $body = Get-MailboxStatistics -database "Mailbox Database 0846468905" |
            where {$_.ObjectClass -eq "Mailbox"} |
            Sort-Object TotalItemSize -Descending |
            ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto |
            ConvertTo-Html

        # Pause for 5 seconds for Exchange
        write-host -foregroundcolor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]"    # Emails report to this address.
        $fromemail = "[email protected]"  # Emails from this address.
        $server = "Exchange.company.com"  # Exchange server - SMTP.

        # Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
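
    The AutosizeInfo noise is the usual sign that Format-Table output got piped onward: ft emits formatting objects rather than data, so ConvertTo-Html renders the formatter's internals. A hedged rework of the first pipeline using Select-Object instead (same calculated properties; Out-String turns the HTML into a single string for MailMessage.Body):

        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            Where-Object { $_.ObjectClass -eq "Mailbox" } |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{n="User";e={$_.DisplayName}},
                          @{n="Total Size (MB)";e={$_.TotalItemSize.Value.ToMB()}} |
            ConvertTo-Html |
            Out-String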

  • OpenBSD init script for ssh VPN tunnel

    - by manthis
    I have a server hosting SSH tunnels and OpenBSD 4.5 clients connecting to it. Things work just fine, but I need to automate the connection from the client to the server, so that if the client is accidentally rebooted, the connection is re-established unattended. It should be as straightforward as including the ssh connection in an init script, but I have miserably failed to do so by including it in /etc/rc.local, which is the file I usually do this sort of thing in. Right now I am using autossh to restart the connection if necessary, and the script that I put in /etc/rc.local follows:

        #!/bin/sh
        #
        # Example script to start up tunnel with autossh.
        #
        # This script will tunnel 2200 from the remote host
        # to 22 on the local host. On remote host do:
        # ssh -p 2200 localhost
        #
        # $Id: autossh.host,v 1.6 2004/01/24 05:53:09 harding Exp $
        #
        ID=root
        HOST=example.com

        #AUTOSSH_POLL=600
        #AUTOSSH_PORT=20000
        #AUTOSSH_GATETIME=30
        #AUTOSSH_LOGFILE=$HOST.log
        #AUTOSSH_DEBUG=yes
        #AUTOSSH_PATH=/usr/local/bin/ssh
        export AUTOSSH_POLL AUTOSSH_LOGFILE AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT

        autossh -2 -f -M 20000 ${ID}@${HOST}

    The script detaches just fine when run manually, so I just include it in /etc/rc.local as:

        echo -n 'starting local daemons:'
        if [ -x /usr/local/sbin/autossh.sh ]; then
                echo -n ' ssh tunnel'
                /usr/local/sbin/autossh.sh
        fi
        echo '.'

    I have also tried calling it from /etc/hostname.tun0, in case /etc/rc.local is not being called at the right time, when network connections are ready. For that I used:

        inet 10.254.254.2 255.255.255.252 10.254.254.1
        !/usr/local/sbin/autossh.sh

    Your input is highly appreciated.
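
    One hedged thing to rule out: at boot there is no login environment, so ssh has no $HOME in which to find keys and known_hosts, and fails silently where a manual run succeeds. A sketch that makes those paths explicit (file locations are assumptions):

        # spell out identity and host-key files, since boot-time
        # runs inherit no login environment
        autossh -2 -f -M 20000 \
            -i /root/.ssh/id_rsa \
            -o UserKnownHostsFile=/root/.ssh/known_hosts \
            -o BatchMode=yes \
            ${ID}@${HOST}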

  • Can't start a service (sudo) remotely from script and keep it running

    - by Greg Bernhardt
    I have a service (Tomcat) that needs sudo to be started. I made a simple script on the remote server in /root/bin/test.sh:

        #!/bin/sh
        sudo service tomcat start
        read

    (The script needs to do other stuff too; it's just pared down for simplicity.) When I run it directly on the remote server, Tomcat starts and continues running after I disconnect. When I run it remotely, the process starts; I can see it while the script is paused at the "read" if I run this locally on the server:

        ps -ef | grep tomcat

    But once the script ends, it's gone. I've tried various combinations of nohup, screen, and & on the commands, both on the local machine and in the remote machine's test.sh script, but I can't seem to get it working:

        ssh -t [email protected] "/root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh"
        ssh -t [email protected] "nohup /root/bin/test.sh &"
        ssh -t [email protected] "screen /root/bin/test.sh &"
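
    A hedged explanation: with ssh -t the remote commands get a pseudo-terminal, and when ssh exits, the hangup on that tty can take Tomcat down with it. A sketch that drops the tty and detaches the standard streams so the session can close cleanly (the host name is a placeholder):

        # no -t: run without a tty, and detach stdin/stdout/stderr
        # so ssh can exit while tomcat keeps running
        ssh root@remote-host "nohup /root/bin/test.sh </dev/null >/dev/null 2>&1 &"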

  • Grep all files in a directory and print matches with file name

    - by javanix
    I have a list of log files that I create as part of a video encoding script that I wrote. I would like to search all of them and print out certain statistics from the encode: how fast they were encoded, what settings were used, etc. I can search for the average framerate in one file via this one-liner:

        grep average ${filename}

    which outputs:

        work: average encoding speed for job is 23.211176 fps

    and search for the ratefactor:

        grep RF ${filename}

    I would like to search all files in the directory and print one, or preferably both, pieces of information along with the file name. Is there any way I can use find or grep to get this in a one-liner, or do I need to write a script? I would like output like this:

        /home/javanix/filename.log
        <RF line>
        <average line>

    I would like this to work using either FreeBSD 9 or Ubuntu 12.04.
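
    A hedged one-liner and a loop variant: grep prints file names automatically when given more than one file (and -H forces it), while -e supplies multiple patterns (the log directory is an assumption):

        # one-liner: file name prefixed to every matching line
        grep -H -e 'RF' -e 'average' /home/javanix/*.log

        # loop variant, matching the desired output layout exactly
        for f in /home/javanix/*.log; do
            echo "$f"
            grep -e 'RF' -e 'average' "$f"
        done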
