Search Results

Search found 48823 results on 1953 pages for 'run loop'.

Page 433/1953 | < Previous Page | 429 430 431 432 433 434 435 436 437 438 439 440  | Next Page >

  • dhcrelay running as both DHCP and DHCPv6 relay agent on CentOS 6.2

    - by Tibor
    I am trying to set up a DHCP relay agent that relays DHCP requests for both IPv4 and IPv6. I am on CentOS 6.2, using the dhcrelay from the ISC DHCP implementation. I would like to set it up as a service, but the man page for dhcrelay states:

        -6   Run dhcrelay as a DHCPv6 relay agent. Incompatible with the -4 option.
        -4   Run dhcrelay as a DHCPv4/BOOTP relay agent. This is the default mode of operation, so the argument is not necessary, but may be specified for clarity. Incompatible with -6.

    Since the -6 and -4 options are mutually exclusive, how can I make it work for both protocols without rolling my own service wrapper for both cases?
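
    A minimal sketch of the usual workaround (the interfaces and server addresses below are placeholders, not values from the question): run two dhcrelay processes side by side, one per protocol, and have a single init script start and stop both together:

        # IPv4 relay, forwarding to a placeholder DHCPv4 server
        /usr/sbin/dhcrelay -4 192.0.2.10
        # IPv6 relay: -l names the client-facing interface,
        # -u the upstream server address and its interface
        /usr/sbin/dhcrelay -6 -l eth0 -u 2001:db8::10%eth1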

    Read the article

  • Running PHP scripts as the owner of the PHP file: security issues

    - by thomasrutter
    I'm using suexec to ensure that PHP scripts (and other CGI/FastCGI apps) are run as the account holder associated with the relevant virtual host. This secures each user's scripts from reading/writing by other users. However, it occurs to me that this opens up a different security hole. Previously, the web server ran as an unprivileged user, with read-only access to users' files (unless a user changed the file permissions for some reason). Now, the web server can also write to a user's files. So while I've prevented different users taking advantage of each other's scripts, I've made it so that if some application has a remote code injection vulnerability, it now has not only read access but also write access to all of that user's scripts and website. How can I deal with this? One idea I've had is to create a second user account for each account in the system, so that each user has their own login account while all their scripts run under the companion account. But that seems cumbersome.
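
    For what it's worth, here is a rough sketch of that second-account idea (all names here are made up for illustration): 'alice' owns and edits the code, while scripts execute as a companion user that can read but not write the tree:

        # companion account that scripts will execute as
        useradd --no-create-home --shell /usr/sbin/nologin alice-run
        # alice owns the files; the run account gets group read access only
        chown -R alice:alice-run /home/alice/public_html
        chmod -R g+rX,g-w,o-rwx /home/alice/public_html

    One caveat: stock Apache suexec checks that the target user actually owns the files it executes, so this split usually needs a different execution wrapper (or a patched suexec) rather than suexec as-is.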

    Read the article

  • Fabric and cygwin don't work with windows UNC paths

    - by tcoopman
    I have some strange problems with Fabric deployment to Windows Server 2008 R2. What I am trying to accomplish is to copy some files to a shared folder with a Fabric script (the script does a lot of other things too, but only this step gives me problems). The problem: whenever I access a UNC (Universal Naming Convention) path I get access-denied style errors if the command runs under Fabric, while the same command in an ssh prompt (same user) works fine. Examples:

    cmd: robocopy f:/.... //share
    result: works fine in ssh; under Fabric I get "Logon failure: the user has not been granted the requested logon type at this computer."

    cmd: cd //share
    result: works fine in ssh; under Fabric I get "//share: Not a directory"

    Further information: uname -a and whoami return exactly the same thing under Fabric and ssh. I also tried things like mount and net use, but those commands fail in much the same way.
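
    One workaround sometimes suggested for this class of logon-type failure (a sketch; the share path, domain, user, and password are placeholders, not values from the question) is to authenticate the share with explicit credentials in the same command, so the copy runs inside a freshly authenticated session:

        # map the share with explicit credentials, then copy; chaining with
        # && keeps both steps inside one Fabric run() invocation
        net use '\\server\share' 'password' '/user:DOMAIN\jsmith' && robocopy 'f:/files' '\\server\share' /e

    Because each Fabric run() gets a fresh shell, splitting the two commands across separate run() calls would lose the mapped credentials.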

    Read the article

  • django fcgi - call a management command with subprocess.Popen

    - by user41855
    Hi, I'm using an app called django-chronograph. It has a line of code which works in my dev environment but does not work in production:

        p = subprocess.Popen(['python', get_manage_py(), 'run_job', str(self.pk)])

    In production this line crashes with: unknown command run_job. Whereas when I run it directly from the command line: manage.py run_job, it works fine. Interestingly, it worked once when we replaced 'python' with '/usr/bin/python'; then we restarted the server once more and it was back to the old behaviour. So it seems we have a Python path issue. I'm not the person running the server; it's my app that has to run there, and it would be great to get some help here. Fair warning: I'm a total noob regarding server administration. Server environment: NGINX with an FCGI daemon, FCGI in prefork mode.
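
    A common fix for exactly this symptom (a sketch, not django-chronograph's actual code) is to launch the child with the interpreter running the current process, rather than whatever 'python' happens to resolve to on the daemon's PATH:

        # sys.executable is the absolute path of the running interpreter,
        # so the child uses the same Python (and site-packages) as the parent
        import subprocess
        import sys

        p = subprocess.Popen([sys.executable, get_manage_py(), 'run_job', str(self.pk)])

    Here get_manage_py() and self.pk are simply the names from the quoted line.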

    Read the article

  • Redirect output of Python program to /dev/null

    - by STM
    I have a Python executable, written and compiled by somebody else, that I simply need to run once halfway through my own bash script. The program uses a text-based UI and therefore waits for input before proceeding, but the key operations it performs on startup are what my bash script needs. A messy (and strange) procedure, I know, but unfortunately I haven't got any other options. I've gotten around the input prompt by forcefully closing the program with a kill signal, but the program's TUI insists on writing to wherever it's run. I've tried redirecting both stdout and stderr to /dev/null and running the program in the background with a trailing ampersand, but I simply can't get it to play ball. I believe the cause is that the program spawns other processes, and the output redirection of the parent process doesn't affect them. Is there any trick I can use to redirect all output from the child processes too?
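
    One trick that sometimes helps here (a sketch, with a placeholder program name): if the TUI writes straight to /dev/tty rather than stdout/stderr, redirection alone can't catch it; running the program in its own session with setsid detaches it from the controlling terminal, so those writes have nowhere to go:

        # with no controlling terminal, opens of /dev/tty fail instead of
        # reaching the screen; stdio is still redirected as usual
        setsid ./program < /dev/null > /dev/null 2>&1 &

    Child processes spawned by the program inherit the same session and the same redirected descriptors.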

    Read the article

  • sysprep'd Server 2008 R2 VMware autocheck BSODs

    - by slag
    I'm trying to set up a template Server 2k8 R2 (Standard) VM in our VMware ESXi environment; we'll need to deploy a whole lot of them, and I want to (a) streamline and (b) standardise the server build process. Originally I built a sysprep answer file using WAIK and the Windows System Image Manager - once I'd run sysprep with it, restarting the machine resulted in an "autochk.exe not found - skipping autocheck" message, followed almost immediately by a BSOD. Rebooting just produced the same thing. I first assumed it was my answer file, but I have now checked it on a physical machine and it works fine there. I've reinstalled the OS on the machine (and NOTHING else) and run sysprep.exe (ticking Generalize) - this produces exactly the same result. I've also attempted to use the Customisation Wizard built into VMware - again, the same autocheck message and BSOD. Any ideas? Let me know any further information I need to provide...

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways and I'm not sure what I'm doing wrong. When I try to run the script at login by adding session optional pam_exec.so /usr/bin/local/sync.sh to my sshd PAM file, it gives me an exit code of 12. If I log in and then manually run the script, it connects to the remote server and lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all.

        #!/bin/sh
        rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
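
    For what it's worth, pam_exec exports the PAM items to the child's environment as PAM_USER, PAM_TYPE, and so on ($USER generally isn't set that early in the session). A sketch of the script written against that interface (remote_server and the paths follow the question; the open_session guard is an assumption about the intent):

        #!/bin/sh
        # only act when the session is being opened, not closed
        [ "$PAM_TYPE" = "open_session" ] || exit 0
        exec rsync -az -e ssh "${PAM_USER}@remote_server:/home/html/${PAM_USER}/" "/home/html/${PAM_USER}"

    Note the script runs as root during session setup, so the ssh leg needs keys that work non-interactively in that context.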

    Read the article

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN. I am using racoon-tool to configure racoon.conf, but when I run racoon-tool reload it only generates the first section - Global items. When I run racoon-tool I get:

        # racoon-tool reload
        Loading SAD and SPD... SAD and SPD loaded.
        Configuring racoon...done.

    This is the entire /var/lib/racoon/racoon.conf file:

        #
        # Racoon configuration for Samuel
        # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool
        #

        #
        # Global items
        #
        path pre_shared_key "/etc/racoon/psk.txt";
        path certificate "/etc/racoon/certs";
        log debug;

    I cannot find a solution anywhere for why this is happening. Please help.

    Read the article

  • Is there a way to make AutoGK resize the video?

    - by Jian Lin
    AutoGK has so far been a good tool for converting video clips on DVD-R discs into .avi files to post to YouTube. But I captured some Wii gameplay in 16:9, and AutoGK only outputs the .avi in 4:3. Is there actually a way to tweak it to output the avi as 16:9? Otherwise I need to run VirtualDub, download a DivX codec, and run the clip through again to resize it to 16:9. Any way to do this, or any other recommendations? Thanks for helping.

    Read the article

  • Vim lint check - only show message if there's an error

    - by GorillaSandwich
    I have this line in my .vimrc, which means "when I save a .rb file, run it through ruby -c" (the Ruby interpreter's syntax check):

        autocmd BufWritePost *.rb !ruby -c <afile>

    When I save such a file I always see output at the bottom of the screen, so I get used to it and start ignoring it. What I want is to see output only when there are errors. I can see that when there are errors, after it lists them, the last line says "shell returned 1". How can I modify this line so that it only shows a message if the shell returns 1? Is there a way to conditionally suppress output from a shell command run in vim?
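
    One way to get that behaviour (a sketch in Vimscript; the function name is made up) is to capture the checker's output with system() and only echo it when v:shell_error reports a failure:

        " run ruby -c quietly; report only when the syntax check fails
        autocmd BufWritePost *.rb call s:RubyCheck()
        function! s:RubyCheck()
          let l:out = system('ruby -c ' . shellescape(expand('%')))
          if v:shell_error
            echohl ErrorMsg | echo l:out | echohl None
          endif
        endfunction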

    Read the article

  • Adding multiple Exchange accounts under one profile in Outlook 2010 or 2013

    - by Karel Smutný
    I work for two companies, each with its own Exchange server. I want to configure Outlook for both email accounts, but I am not able to add both accounts under the same profile in Outlook 2010 or 2013. I found some how-to articles, each involving the Office Configuration Tool; however, my installation does not support that tool. I have Office 2010 Home and Business, and Office 2013 Preview click-to-run. Is there another way? P.S. Having two different profiles, one for each Exchange account, is inconvenient: I cannot run two instances of Outlook at the same time, and switching all the time is tedious.

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around Windows Deployment Workbench. Unfortunately it was set up by someone who is no longer with the company, and no one has ever dared (or needed) to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some settings, clicked something, and now it no longer works (he says he right-clicked the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through). The job of resolving this has fallen to me and frankly I'm not sure what I'm doing. I was wondering if someone with more experience can provide some troubleshooting pointers, because I'm feeling quite in the dark here. On the server I have Deployment Workbench up and running (MMC snap-in), version 3.0. There is a WDS service that appears to be running OK, as does the tFTPd service. There is nothing specific to this in the event logs. From the client side: PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that it's loading Win PE from the network). WPEINIT runs and asks for domain credentials; here the team simply put user/pass/domain in the boxes and click OK. Normally the build would kick off. Instead they get an error message saying that the \NATBLU01\Distribution$ share isn't available. Checking \NATBLU01\Distribution$ shows that it's there and accessible over the network. Security/permissions seem OK - even ANONYMOUS LOGON has read access to that share, so I don't see that being the problem. Digging the trace files out of C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build, I can see an error saying much the same:

        <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">
        <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">

    BDD.LOG shows much the same. Full copies of both .LOG files can be found here: BDD.LOG LITETOUCH.LOG. I can get to a command prompt from the Win PE that boots from PXE, but there isn't any network stack there - IPCONFIG returns nothing - so none of the tests I would usually run resolve anything. I'm at a loss, frankly. I did wonder if I could perhaps start a new build process, but if the change to the Deployment Workbench has knocked it offline, I don't think I'll be able to create a new deployment. Failing that, we do have a deployment point of type 'Media' which appears to be a DVD ISO image of one of the builds, but it's dated 2008; is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 roll-out), so a temporary workaround isn't going to be too much of a problem. All assistance is appreciated!

    EDIT: OK, got it working again. The solution was close to Newmanth's idea. The problem was that our PE image didn't appear to be bringing the network up. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection: it showed a correct internal IP, could ping out, etc. However, I was still getting the same errors in all the logs and while wpeinit was running. Separately, I updated the PE image that Deployment Workbench pushes out so it displays a different background; I wanted to prove I was working in the correct place. It turns out I wasn't. I went and looked at the other deployment stuff we had on this machine: Windows Deployment Services was installed, and although all the install images were offline, the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. Fixed. Working. Yay! For anyone stumbling across this in the future: you may find that although your deployment images live in the Deployment Workbench, the Win PE boot image you are actually launching is the one held in the associated Windows Deployment Services images.

    Read the article

  • Nginx/FPM/PHP all php files say 'File not found.'

    - by Boon
    I just installed nginx 1.1.13 and PHP 5.4.0 with FPM on a CentOS 5.8 final 64-bit machine. Nginx and PHP-FPM are running, and I can run PHP scripts via the ssh command line, but in the browser I keep getting 'File not found.' errors for all my PHP files. This is how my nginx.conf handles PHP scripts:

        location ~ \.php$ {
            root /opt/nginx/html;
            fastcgi_pass unix:/tmp/fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /opt/nginx/html$fastcgi_script_name;
            include fastcgi_params;
        }

    This is a direct copy/paste from my other servers, where the same setup works fine (but they run older versions of PHP/FPM). Why am I getting these errors?

    Read the article

  • JBoss throws an error looking up the local address when starting

    - by Jeeba
    Hello, I'm installing a fresh JBoss 5.1 on a CentOS 5.5 machine. I don't have Apache installed. When I try to start JBoss using the command ./run.sh I get the following error:

        15:13:57,414 INFO  [JMXKernel] Legacy JMX core initialized
        15:14:03,856 ERROR [ServerInfo] Error looking up local address
        java.net.UnknownHostException: dhcppc1: dhcppc1
            at java.net.InetAddress.getLocalHost(InetAddress.java:1354)
            at org.jboss.system.server.ServerInfo.getHostAddress(ServerInfo.java:364)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        ....

    After that I can reach JBoss only at 127.0.0.1:8080; using localhost:8080 doesn't work. I think it's a CentOS configuration problem, but I'm a total newbie at managing ports and maybe firewalls, so what do you think could be the problem?
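
    The UnknownHostException suggests the machine's hostname (dhcppc1, taken from the error above) simply doesn't resolve. A sketch of the usual remedy, with the bind address being an assumption about what you want:

        # let InetAddress.getLocalHost() resolve the local hostname
        echo '127.0.0.1   dhcppc1' >> /etc/hosts
        # JBoss 5 binds to 127.0.0.1 by default; -b changes the bind address
        ./run.sh -b 0.0.0.0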

    Read the article

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and ran into a very strange issue where our web site becomes unavailable. GoDaddy support keeps saying the issue is in our web server settings, but looking at our test results I doubt it.

    TEST ENVIRONMENT
    Virtual data center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter, IIS 7.
    Server One with IP address 10.1.0.4
    Server Two with IP address 10.1.0.3
    Both servers are on a private network not visible from outside.
    Port forward with IP address 50.62.13.174, assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each. The interval between requests is 1 second. HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    The test is run from a computer in our office. Almost immediately after JMeter starts the test, the link above becomes unavailable in a browser. After test completion, the link stays unavailable in a browser for about 5 more minutes. Remote Desktop keeps working, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    The test is run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link stays available in a browser the whole time.

    TEST THREE
    The test is run from Server One using localhost. The result is the same as in TEST TWO - no issues.

    TEST FOUR
    We repeated TEST ONE from other computers located in different countries, all with the same result as TEST ONE.

    CONCLUSION
    Since the test works well from Server Two but not from outside our virtual data center, we suspect the network or its capacity. The behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance something is wrong with our server settings?

    Read the article

  • How can I start nginx via upstart?

    - by Chiggsy
    Background:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04 LTS"

    I've built nginx, and I'd like to use upstart to start it. The nginx upstart script from the site:

        description "nginx http daemon"
        start on runlevel 2
        stop on runlevel 0
        stop on runlevel 1
        stop on runlevel 6
        console owner
        exec /usr/sbin/nginx -c /etc/nginx/nginx.conf -g "daemon off;"
        respawn

    I get "unknown job" when I try to use initctl to run it, which I just learned apparently means there is an error (what's wrong with "Error" to describe errors?). Can someone point me in the right direction? I've read the documentation, such as it is, and it seems kind of sparse for a SysV init replacement... but whatever; I just need to add this job to the list, run it, and get on with what's left of my life... Any tips?

    EDIT: initctl version reports: init (upstart 0.6.5)
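
    For what it's worth, upstart only discovers jobs from files in /etc/init/*.conf, so "unknown job" often just means the job file isn't in that directory (or fails to parse). A sketch, assuming the job above is saved locally as nginx.conf:

        # install the job where upstart looks for it, then start it
        sudo cp nginx.conf /etc/init/nginx.conf
        sudo initctl reload-configuration
        sudo initctl start nginx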

    Read the article

  • Why no multiple instances of Firefox on Linux as on Windows?

    - by Jack
    On Windows, if I run Firefox as user jack and then try to start another instance of Firefox, I am unable to, as one is already running. If I instead choose to run Firefox as administrator, then I can have two instances of Firefox side by side, separate from each other, because they run under different user accounts. This does not seem to be true on Linux. As user jack, if I start Firefox, then just as on Windows I am unable to start a new instance. But if I open a terminal, change to root, set XAUTHORITY to jack's .Xauthority and try to start Firefox as root... I get the error that Firefox is already running. Why is this? Please don't spare any technical details in your answers... thank you.
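
    As background, it isn't really the user account that blocks the second copy: Firefox hands off to any running instance it can reach on the same X display, and it refuses to start against a profile another instance has locked. A sketch of the usual way around it (the profile name is a placeholder):

        # create a second profile once, via the profile manager
        firefox -ProfileManager
        # then run an independent instance against that profile
        firefox -no-remote -P second &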

    Read the article

  • Finding text in Ubuntu (gnome) terminal output

    - by Rickson
    Imagine this scenario: you run a command in a GNOME terminal, and it produces a large amount of output. After some time, you realize you need the value of a variable (say, variable_needed) that the command printed somewhere in the terminal. How do you find it? The KDE terminal used to have a shortcut, Ctrl+Shift+F, which searched the terminal output. It seems that gnome-terminal doesn't have one (at least on Ubuntu 10.04.2 LTS). Is there any way of adding it? Is there any other good terminal I could use that has it? Note that the output has already been written, so I don't want to (and cannot) run the command again combined with grep, pipes, vim, emacs, etc.
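
    One habit that avoids the problem in future sessions (a sketch; the key bindings shown are GNU screen's defaults): run long jobs inside GNU screen, whose scrollback buffer is searchable from copy mode:

        # start a screen session and run the long command inside it
        screen
        # later: Ctrl+a then [ enters copy/scrollback mode,
        # and ? searches backwards through the captured output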

    Read the article

  • AWS EC2 - How to specify an IAM role for an instance being launched via awscli

    - by Skaperen
    I am using the "aws ec2 run-instances" command (from the awscli package) to launch an instance in AWS EC2, and I want to set an IAM role on the instance I am launching. The IAM role is configured, and I can use it successfully when launching an instance from the AWS web UI. But when I try to do this using that command and the "--iam-instance-profile" option, it fails. "aws ec2 run-instances help" shows Arn= and Name= subfields for the value. When I try to look up the Arn using "aws iam list-instance-profiles", it gives this error message:

        A client error (AccessDenied) occurred: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/shell/i-15c2766d is not authorized to perform: iam:ListInstanceProfiles on resource: arn:aws:iam::xxxxxxxxxxxx:instance-profile/

    (where xxxxxxxxxxxx is my AWS 12-digit account number). I looked up the Arn string via the web UI and used it via "--iam-instance-profile Arn=arn:aws:iam::xxxxxxxxxxxx:instance-profile/shell" on the run-instances command, and that failed with:

        A client error (UnauthorizedOperation) occurred: You are not authorized to perform this operation.

    If I leave off the "--iam-instance-profile" option entirely, the instance launches, but without the IAM role setting I need. So the permission problem seems to have something to do with using "--iam-instance-profile" or accessing IAM data. I repeated the attempts several times in case of AWS glitches (they happen sometimes), with no success. I suspected that perhaps an instance with an IAM role is not allowed to launch an instance with a more powerful IAM role, but in this case the instance I am running the command on has the same IAM role that I am trying to use, named "shell" (though I also tried another one; no luck). Is setting an IAM role not even permitted from an instance (via its IAM role credentials)? Is there some higher IAM permission needed to use IAM roles than is needed for just launching a plain instance? Is "--iam-instance-profile" the appropriate way to specify an IAM role? Do I need to use a subset of the Arn string, or format it in some other way? Is it possible to set up an IAM role that can do any IAM role accesses (maybe a "Super Root IAM"... making up this name)? FYI, everything involves Linux running on the instances. Also, I am running all this from an instance because I could not get these tools installed on my desktop; that, and I do not want to put my IAM user credentials on any AWS storage, as advised by AWS here.

    Added after the question was answered: I did not mention the launching instance's permission level of "PowerUserAccess" (vs. "AdministratorAccess") because I did not realize additional access was needed at the time the question was asked. I assumed that the IAM role was "information" attached to the launch. But it really is more than that: it is a granting of permission.
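
    For reference, the Name= shorthand is usually the simpler form (a sketch; the AMI id and instance type are placeholders, and "shell" is the profile name from the question):

        # launching with an instance profile requires the caller to hold
        # iam:PassRole on the role, in addition to ec2:RunInstances
        aws ec2 run-instances --image-id ami-12345678 --instance-type t1.micro \
            --iam-instance-profile Name=shell

    That iam:PassRole grant is the piece a restricted launcher role most often lacks, which would match the UnauthorizedOperation error above.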

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config directories. I'm OK with that, but my issue is: say I want to change the server password - do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

    Remove the contents of the new /etc/openldap/slapd.d/ directory:

        rm -rf /etc/openldap/slapd.d/*

    Run slaptest to check the validity of the configuration file and specify the new configuration directory:

        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

    Configure permissions on the new directory:

        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
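
    In principle, the cn=config database that slaptest generates can be edited live over LDAP instead of being rebuilt. A sketch (the database DN and password hash are placeholders - check yours first with an ldapsearch against cn=config, and note this needs the config database to grant access to the local root identity over ldapi):

        # change the rootdn password in the running config via the local socket
        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}replace-with-output-of-slappasswd
        EOF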

    Read the article

  • Web-based HPC cluster node management

    - by Skuja
    Hello, I am working on my school diploma thesis. The main goal is to create a web-based application where logged-in users can see free and busy nodes, turn them on and off, see what processes they are running, etc. I figured I could do something like this: write a cron job that runs every 30 seconds or so and pings each node to find out whether it is on or off, then writes the results to a file. My web app (which I will write in PHP) could then read that info. Would that be a good solution? How would you suggest I do it? And finally, are there any existing solutions (not necessarily web based) for management of cluster nodes?
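
    A minimal sketch of that polling idea (node names and the output path are made up):

        #!/bin/sh
        # ping each node once and record its status where the web app can read it
        for node in node01 node02 node03; do
            if ping -c 1 -W 1 "$node" > /dev/null 2>&1; then
                echo "$node up"
            else
                echo "$node down"
            fi
        done > /var/www/cluster-status.txt

    The PHP side then only has to parse one small text file per page load.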

    Read the article

  • sudo taking long time

    - by Sam
    On an Ubuntu 9 64-bit Linux machine, sudo takes a long time to start: "sudo echo hi" takes 2-3 minutes. strace on sudo shows poll("/etc/pam.d/system-auth", POLLIN) timing out after 5 seconds, with multiple calls (maybe a loop) to the same system call, which accounts for the 2-3 minute delay. Any idea why sudo has to wait on /etc/pam.d/system-auth? Any tunable to make sudo time out faster? Thanks, Samuel
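
    Separate from the PAM clue above, one frequent cause of multi-minute sudo stalls is a local hostname that doesn't resolve, which leaves each lookup waiting on DNS timeouts. A sketch of the check (the 127.0.1.1 mapping is an assumption about the desired fix):

        # verify the machine's hostname maps to a local address
        hostname
        grep "$(hostname)" /etc/hosts || echo "127.0.1.1   $(hostname)" | sudo tee -a /etc/hosts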

    Read the article

  • How do I configure Ruby on Rails on Windows XP with Apache and MySQL?

    - by Gaurav Sharma
    Hello everyone, I have been struggling for quite some time to get Ruby on Rails working on my system, which runs Windows XP. I am trying to configure RoR to use Apache and MySQL so that I do not have to install additional servers to run Rails. I also tried InstantRails but faced the same problems. I went through the tutorial on getting Rails to work on a Windows machine running XAMPP and did all the necessary steps. Everything went fine (installing Rails, running the ruby, gem and rails commands from the command prompt), but when I tried to open my application by typing localhost:3000/say/hello, nothing happened and I was redirected to a Google search page for that keyword. Please help me. Thanks

    Read the article

  • Using a script with Duplicity + S3 that excludes large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command the script generates, everything works... Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"
        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
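
    The likely culprit (worth checking, not a certainty): the single quotes the exclude function echoes are literal characters by the time the backtick substitution expands, and patterns containing spaces get word-split, so duplicity receives mangled extra arguments. Pasting the printed command works because the shell then re-parses the quotes. A sketch of one fix, building the arguments in a bash array so each --exclude pattern stays a single word:

        # collect one "--exclude pattern" pair per oversized file
        excludes=()
        while IFS= read -r FILE; do
            excludes+=(--exclude "**${FILE##*/}")
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${excludes[@]}" "$SOURCE" "$DEST"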

    Read the article
