Search Results

Search found 7628 results on 306 pages for 'talha bin shakir'.

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. The script prints the proper command, but run from within the script that command produces an error; if the same command is run manually, everything works. Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    If run, I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.
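
    A likely culprit, offered as an assumption rather than something confirmed in the post: the backtick substitution happens after the shell has already done its quote parsing, so the ' characters and backslash-escaped spaces that exclude echoes reach duplicity as literal argument text, and each whitespace-separated word becomes a separate argument (hence "got 6"). A minimal sketch of the usual workaround, collecting the options in a bash array so each --exclude and its pattern survive as single arguments:

        #!/bin/bash
        # Sketch: build --exclude options in an array; "${EXCLUDES[@]}" expands
        # to exactly one argument per element, so embedded spaces stay intact.
        EXCLUDES=()
        while IFS= read -r -d '' FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M -print0)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"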

  • How do I set up an sftp user to log in with a password to an EC2 Ubuntu server?

    - by Doron
    I have an Ubuntu Server running on an EC2 instance. To log in to that server I use a certificate file, without any password. I've installed and configured vsftpd and created a user (let's call him "testuser") whose shell is set to /bin/false, so he will only be able to connect via sftp and upload/access files in his home directory. However, when I try to connect to the server from my computer, running

        sftp testuser@my-ec2-server

    I get

        Permission denied (publickey).
        Connection closed

    so I can't log in. How can I remove the certificate requirement for this user only (meaning the "ubuntu" user will still have to use the certificate file to log in via ssh), so normal sftp clients will be able to connect using a username and a password? Thank you. P.S. Using the official Ubuntu Server 10.10 AMI from Canonical, 64-bit, on a micro instance.
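
    A hedged sketch of the usual sshd-side fix, assuming stock OpenSSH on that AMI (the default EC2 config ships with PasswordAuthentication no); the Match block must come after the global settings in /etc/ssh/sshd_config:

        # Global policy stays key-only
        PasswordAuthentication no

        # Per-user exception; ForceCommand internal-sftp keeps this account
        # sftp-only even though the /bin/false shell would block a login
        Match User testuser
            PasswordAuthentication yes
            ForceCommand internal-sftp

    After that, set a password with sudo passwd testuser and restart sshd (sudo service ssh restart on 10.10).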

  • Windows 7 admin denied access to taskmgr, system32 dir

    - by DotNet Zebra
    I have a Windows 7 (32-bit) box with 2 users, both admins (my wife and I are both developers). My admin account was created during Windows setup; hers was created later. Both accounts are in the same groups, yet we have VERY different permissions. In the beta and RC, both accounts worked identically (RC to RTM was a fresh install on this box, not an upgrade). I have a C:\bin folder with the Sysinternals utilities and a bunch of other stuff. Running anything in there or in system32 just works on my account; on hers I get access denied errors (cannot access file or path). If I right-click and try Run As Administrator, I still get the same thing!
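
    Not from the post, but two quick checks that would narrow this down, comparing the output between the two accounts:

        rem Group membership and integrity level of the current token
        whoami /groups

        rem The ACLs actually applied to the locations that misbehave
        icacls C:\bin
        icacls C:\Windows\System32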

  • Using both domain users and local users for Squid authentication?

    - by Massimo
    I'm working on a Squid proxy which needs to authenticate users against an Active Directory domain; this works fine: Samba was correctly set up and Squid authenticates users via ntlm_auth. The relevant lines in squid.conf:

        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 5
        auth_param ntlm keep_alive on

        acl Authenticated proxy_auth REQUIRED
        http_access allow Authenticated
        http_access deny all

    Now I need a way to allow access to users who don't have a domain account. I know I could create an "internet user" account in the domain, but this would allow access, although limited, to domain resources (file shares, etc.); I need something that allows only Internet access. The ideal solution would be a local account on the proxy server, either a Linux account or a Squid one; I know Squid supports this, but I'm unable to have it use both domain authentication and Squid/local authentication when domain auth is unsuccessful. Can this be done? How?
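
    For what it's worth, Squid can advertise several auth schemes at once and clients fall back to the next scheme they support; whether a given browser retries with basic auth after a failed NTLM exchange varies, so treat this as a sketch to test rather than a confirmed fix. It assumes the classic NCSA helper (its path varies by distro) and a local password file:

        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 5
        auth_param ntlm keep_alive on

        # Fallback: basic auth against a local htpasswd-style file
        auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/localusers
        auth_param basic children 5

        acl Authenticated proxy_auth REQUIRED
        http_access allow Authenticated
        http_access deny all

    Local users would then be created with htpasswd /etc/squid/localusers username (htpasswd -c the first time).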

  • Why use Google Apps Sync for Outlook to sync email?

    - by Howiecamp
    I currently use Outlook 2007 against an Exchange server for my email and will be moving to Google Apps. There are a number of ways to import your existing email and calendar entries into Google Apps Gmail (e.g. the Google Apps Sync for Outlook tool, the Google Email Uploader, and copying messages using an IMAP client), so I'm covered on the import side. I'm trying to understand the use cases for the Google Apps Sync for Outlook tool (http://mail.google.com/support/bin/topic.py?topic=23333) with respect to email and calendar entries. The description says it syncs your Outlook email and calendar items with Google Apps, but doesn't using Outlook as an IMAP client against Google Apps do the same?

  • Problem with script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works. Here is the script:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="gpgkey"

        # exclude files over 100MB
        exclude () {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'  # Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When the script is run I get the error:

        Command line error: Expected 2 args, got 6

    Where am I going wrong?
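
    This looks like the same quoting trap as the earlier Duplicity question: the quotes echoed by exclude are never re-parsed by the shell, so duplicity receives them as literal text split on whitespace. A hedged alternative that sidesteps per-argument quoting entirely is duplicity's --exclude-filelist option, with one pattern per line:

        #!/bin/bash
        # Sketch: write patterns to a temp file; duplicity reads the list
        # itself, so shell word splitting never touches the patterns.
        LIST=$(mktemp)
        find /home/jason -size +100M | while IFS= read -r FILE; do
            printf '**%s\n' "${FILE##/*/}"
        done > "$LIST"

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            --exclude-filelist "$LIST" "$SOURCE" "$DEST"
        rm -f "$LIST"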

  • How to fix Windows 2008 R2 BOOTMGR is missing

    - by cyberkiwi
    BOOTMGR IS MISSING
    PRESS CTRL+ALT+DEL TO RESTART

    Note: This is a VM on a VMware ESX server, but that should not matter. I put in the 2008 R2 x64 install DVD and can get to recovery, but it lists no operating systems. Clicking on Next brings me to:

        System Recovery Options
        Choose a recovery tool
        Operating system: Unknown on (Unknown) Local Disk
        .....
        Command Prompt

    I start the command prompt, go to C:\ and perform a dir /a. Apart from files I put there myself, these are showing:

        $Recycle.Bin
        Documents and Settings [C:\Users]
        Program Files
        Program Files (x86)
        ProgramData
        Recovery
        System Volume Information
        Temp
        Users
        Windows

    Where to go next? Is it like the NTLDR problem with Windows 2003, where I can just drop a file in there and it will be hunky dory again?
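
    Not from the post, but the sequence commonly suggested for a missing BOOTMGR on 2008 R2, run from that recovery Command Prompt (a sketch; /rebuildbcd prompts before adding anything it finds):

        rem Rewrite the MBR and the partition boot sector
        bootrec /fixmbr
        bootrec /fixboot

        rem Scan all disks for Windows installs and rebuild the BCD store
        bootrec /rebuildbcd

        rem If the BCD store is beyond repair, recreate boot files from the install
        bcdboot C:\Windows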

  • Cannot set target directory when extracting an archive using tar

    - by palto
    I'm trying to extract a tar archive to a specific directory. I've tried using the -C flag, but it doesn't work as expected. Here is the command line I'm using:

        tar xvf myarchive.tar -C mydirectory/

    This gives me the following error:

        tar: file -C: not present in archive
        tar: file mydirectory/: not present in archive

    I've also tried setting the -C flag before the archive file, but it just says this:

        tar xvf -C mydirectory/ myarchive.tar
        tar: -C: No such file or directory

    What am I doing wrong?

    EDIT: tar -tf shows that the tar archive does not have full path names:

        tar -tf myarchive.tar
        herareport/
        herareport/bin/
        ...
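
    A hedged reading of those messages: this tar (apparently not GNU tar, which accepts -C anywhere) treats everything after the archive name as member names to extract, and in the second attempt the f in xvf consumes -C as the archive filename. Two ways around it:

        # With a tar that accepts option-style flags, give -C before -f
        tar -C mydirectory/ -xvf myarchive.tar

        # Portable fallback that works with any tar: change directory first
        ( cd mydirectory/ && tar xvf ../myarchive.tar )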

  • GNU coreutils split: verbose output flushed?

    - by 130490868091234
    When using the GNU coreutils split command with verbose mode, how can I make the lines that appear on STDOUT be flushed with respect to the time each file is actually created? For example, running it like this:

        ~/coreutils/bin/split --verbose -d -u -l 10000000 1>out &
        tail -f out
        creating file `x00'
        creating file `x01'
        creating file `x02'
        [...]

    I would have expected the line creating file `x00' to appear in out after that file had been completely written, but instead it seems like nothing is written into out until the whole input has finally been processed. Is there a way to change this behavior?
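
    One assumption worth testing: split writes its verbose messages to stdout, and once stdout is redirected to a regular file the C library switches it from line buffering to block buffering, so the messages sit in memory until exit. coreutils ships stdbuf (since 7.5) for exactly this; a sketch:

        # Force line-buffered stdout despite the redirect; messages then
        # appear in out as each xNN file is opened
        stdbuf -oL ~/coreutils/bin/split --verbose -d -u -l 10000000 1>out &
        tail -f out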

  • Link MySQL with PHP on Apache

    - by Cristy
    THE STORY: I've installed Apache 2.2.17, PHP 5.2.16, MySQL 5.5.8. PHP works great; phpinfo() works on localhost.

    THE PROBLEM: The thing is that in phpinfo() there is no mysql mentioned. I've done the following:

      - moved php.ini to C:\windows
      - removed the ";" in front of the mysql extension line (extension=php_mysql.dll)
      - copied libmysql.dll to the php folder and windows\system32
      - checked the extension path in php.ini to be "C:\php\ext"
      - searched the internet for a solution for about 2 hours...

    ADDITIONAL INFO: In the Apache Environment path I have the following:

        C:\Program Files (x86)\PHP\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\QuickTime\QTSystem\;C:\OJI\MinGWStudio\work\mingw\bin;C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\DTS\Binn\;

    I think the Microsoft SQL path should be replaced with the MySQL one, but I don't know where to change that...
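
    A quick, hedged diagnostic, using the install path from your PATH variable; note that the CLI and Apache can read different php.ini files, and phpinfo() reports the one Apache used as "Loaded Configuration File":

        rem Which php.ini the command-line PHP reads
        "C:\Program Files (x86)\PHP\php.exe" --ini

        rem Modules PHP actually loaded; "mysql" should appear in the list
        "C:\Program Files (x86)\PHP\php.exe" -m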

  • In Django, why do I get a 500 server error when browsing, but "python mysite.fcgi" from SSH works fine?

    - by Jim
    If I browse to my site, I get a 500 "internal server error." However, if I SSH into my server, go to my site's folder and run "python mysite.fcgi", I see the HTML rendered fine. Obviously something is wrong, but I'm not sure what. Here is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteRule ^(media/.*)$ - [L]
        RewriteRule ^(static/.*)$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    Here is my mysite.fcgi file:

        #!/usr/bin/python2.5
        import sys, os
        sys.path.insert(0, "/kunden/homepages/34/[mydir]/htdocs/projects/django")
        sys.path.insert(1, "/kunden/homepages/34/[mydir]/lib/python/site-packages")
        os.chdir("/kunden/homepages/34/[mydir]/htdocs/projects/django/mysite")
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(["method=threaded", "daemonize=false"])

    I'm setting this up on 1and1. It has been a pain, but I think I'm close.
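
    Not from the thread, but the usual FastCGI-on-shared-hosting suspects when the script runs by hand yet the browser gets a 500; a hedged checklist to run over SSH:

        # Apache must be allowed to execute the script
        chmod 755 mysite.fcgi

        # The shebang has to name an interpreter that exists on the server
        head -1 mysite.fcgi
        which python2.5

        # CRLF line endings (from a Windows upload) silently break the shebang
        file mysite.fcgi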

  • Windows Emacs Keybinding

    - by Josh
    I know this is not a Windows site, so my apologies. I use Ubuntu all day, every day, and have finally convinced my buddy to try it. He is on Windows 7, so we installed this: http://www.ourcomments.org/cgi-bin/emacsw32-dl-latest.pl. It seems to be working great, but when he hits C-p (previous line) it tries to print the page for some reason. So, two questions: is there a way to make it stop that, and is there a way to just run it from the command line, without all of the fancy mouse stuff? Essentially as --no-windows? Thanks!
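
    Two hedged pointers: the EmacsW32 build ships optional Windows-style keybindings that may have claimed C-p, so first ask Emacs what the key is actually bound to, then rebind it if needed:

        ;; C-h k C-p  describes whatever command C-p currently runs.
        ;; If it turns out to be a print command, restore the default in ~/.emacs:
        (global-set-key (kbd "C-p") 'previous-line)

    As for running without the window system, emacs -nw from a console is the current spelling of the old --no-windows idea.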

  • Apache 2.4.2 with PHP 5.4.4 crashes as soon as phpinfo() script is opened

    - by Gremo
    As soon as I open a phpinfo() script called version.php, Apache stops working. Here is the error.log file. My configuration (my local development machine):

      - Windows 7 Home Premium x64 SP1 with latest updates
      - Apache 2.4.2 win32 from Apache Lounge
      - PHP 5.4.4 VC9 x86 TS from PHP For Windows
      - php5apache2_4.dll (PHP 5.4.4) taken from Apache Lounge php5apache2_4.dll-php-5.4-win32.zip
      - PATH environment variable is PATH=C:\WAMP\Apache\bin;C:\WAMP\PHP

    The installation folders (unzipping) are:

      - C:/WAMP/Apache
      - C:/WAMP/PHP

    Microsoft Visual C++ 2010 SP1 x86/x64 is installed and updated, so everything looks fine to me. The PHP integration in httpd.conf is:

        # Integrazione PHP
        LoadModule php5_module "C:/WAMP/PHP/php5apache2_4.dll"
        PhpIniDir "C:/WAMP/PHP"
        AddType application/x-httpd-php .php
        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>
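
    A hedged first round of checks from a console, plus one known failure mode worth ruling out: crashes like this are often traced to a php5apache2_4.dll built against a different PHP snapshot or VC runtime than the rest of the stack, so it is worth confirming the exact build pairing Apache Lounge recommends for PHP 5.4.4. Diagnostics:

        :: Syntax-check the config and list the modules that actually load
        C:\WAMP\Apache\bin\httpd.exe -t
        C:\WAMP\Apache\bin\httpd.exe -M

        :: Run a single Apache process in the console so the crash surfaces there
        C:\WAMP\Apache\bin\httpd.exe -X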

  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run on system boot and start the defined server. What should I do to get this done? My findings tell me to put this file in /etc/init.d, and it will execute when the system boots. But I am not able to understand how the first argument on startup will be start. Is it predefined somewhere that start is used as $1? If I want to have a case startall that will start all the servers in the script, what are the options I can manage? My script is like this:

        #!/bin/bash
        case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        *)
            echo "usage: $0 (start|stop|restart)"
            ;;
        esac
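
    For context (standard SysV init behaviour rather than anything from this thread): at boot, init runs the /etc/rc<runlevel>.d/S* symlinks with the single argument start, and the K* links with stop at shutdown; that is where $1 comes from. On Debian/Ubuntu the symlinks are managed for you; a sketch:

        # Install the script and let update-rc.d create the rcN.d links
        sudo cp myserver.sh /etc/init.d/myserver
        sudo chmod +x /etc/init.d/myserver
        sudo update-rc.d myserver defaults

        # What init will effectively run at boot:
        /etc/init.d/myserver start

    A startall case works the same way as start: it is just another word matched by the case statement, invoked as /etc/init.d/myserver startall.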

  • User account shows two Downloads folders

    - by Chris Lieb
    I have my user account on my D drive, junction'd to the C:\users folder. I accidentally moved my profile Downloads folder (C:\users\me\Downloads) and then moved it back to its path on the D drive (D:\me\Downloads). After doing this, the directory tree for my user profile lists two Downloads directories: one at C:\users\me and one at D:\me. I tried deleting the directory from the D drive, then restoring it from the Recycle Bin to the proper location on the C drive (actually the D drive, accessed through the junction), but that gave me the two Downloads directories again. Is there some way to fix this so that the only listing is for C:\users\me\Downloads, as it was to begin with?
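
    A hedged place to look: Windows tracks the Downloads known folder per user by GUID in the registry, and a move can leave it pointing at the D: path, which makes the shell list both. Inspect it before changing anything:

        rem {374DE290-...} is the Downloads known-folder GUID
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v "{374DE290-123F-4565-9164-39C4925E467B}"

    The supported way to move it back is right-click the Downloads folder, Properties, Location tab, Restore Default.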

  • Run a script after killing lxsession (xorg)

    - by user284194
    I am trying to run a program automatically within a bash script after killing the LXDE session. My script consists of:

        #!/bin/sh
        pkill lxsession; sh /home/pi/RetroPie/EmulationStation/emulationstation

    My aim is to log out of the LXDE session and run EmulationStation on my Raspberry Pi with a bash script. I'm using pkill lxsession to bypass lxsession's logout confirmation dialog. As it stands, this script just gets me to the command line from a working LXDE desktop. Thanks for reading.
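
    A guess worth testing, not something confirmed in the thread: if this script is launched from inside the LXDE session, pkill lxsession tears down the script's own process group before the second command runs. Detaching from the session first is a minimal sketch:

        #!/bin/sh
        # Re-exec outside the session so the script survives lxsession's death
        setsid sh -c '
            pkill lxsession
            sleep 2    # give X a moment to shut down
            exec /home/pi/RetroPie/EmulationStation/emulationstation
        ' >/dev/null 2>&1 &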

  • ssh-agent is broken after running Meerkat - can connect to git in terminal but not in Tower - no keychain access

    - by marblegravy
    My Mac, running Snow Leopard 10.6.8, is having trouble handling its ssh keys. I could previously access all my git repos via Tower without an issue. The other day I ran Meerkat to see what it was about, and it looks like it has broken the way ssh works. Terminal doesn't seem to have a problem and can still connect to Git, but it can't access the keychain. Tower doesn't seem to be able to access anything. The Tower support crew have been super helpful, but I wanted to float this here and see if anyone has any ideas on how to fix my problem. The only hints I have are:

        $ which ssh
        /usr/bin/ssh

        $ echo $SSH_AUTH_SOCK
        /tmp/ssh-nBhRYVEg8t/agent.199

    (The second one seems to be wrong, as I think it's supposed to point to a Listener, but I have no idea how to fix it.) Additional: Keychain First Aid finds no problems. The problem seems to be that ssh-agent is not being run properly... but that's just a guess.
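
    On Snow Leopard the system ssh-agent is a launchd agent, and in a healthy session SSH_AUTH_SOCK points at a /tmp/launch-*/Listeners socket; a /tmp/ssh-*/agent.* path suggests a manually started agent (perhaps something Meerkat set up) has overridden it. A hedged check:

        # Is the system agent still registered for this session?
        launchctl list | grep ssh-agent

        # Re-enable it if it was unloaded, then log out and back in
        launchctl load -w /System/Library/LaunchAgents/org.openbsd.ssh-agent.plist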

  • FreeBSD: creating a new port

    - by su55
    Hi, I have a script here that I want to create as a port in FreeBSD and then make into a package, so that I can install it on some machines. The script is below:

        #!/usr/local/bin/bash
        if [ ! -f "/suid.old" ]
        then
            find / -perm -4000 -o -perm -2000 -ls > /suid.old
        else
            find / -perm -4000 -o -perm -2000 -ls > /suid.new
            diff /suid.old /suid.new > /newchanges
        fi

        if [ -s "/newchanges" ]
        then
            mail -s "changes have occurred" [email protected] < /newchanges
        else
            mail -s "No changes have occurred" [email protected]
        fi
        exit 0

    How can I accomplish this?
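
    A minimal ports skeleton for a script-only port, strictly as a sketch (the port name, category and paths are invented; a real port also needs pkg-descr, and recipe lines must be tab-indented):

        # /usr/ports/sysutils/suidwatch/Makefile -- hypothetical name
        PORTNAME=      suidwatch
        PORTVERSION=   1.0
        CATEGORIES=    sysutils
        MASTER_SITES=  # none; the script ships in the port's files/ dir
        DISTFILES=     # none to fetch

        MAINTAINER=    you@example.org
        COMMENT=       Report changes to setuid/setgid binaries by mail

        NO_BUILD=      yes

        do-install:
                ${INSTALL_SCRIPT} ${FILESDIR}/suidwatch.sh ${PREFIX}/bin/suidwatch

        .include <bsd.port.mk>

    Once make install works in that directory, make package produces the package you can install on the other machines.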

  • Startup script on Ubuntu 12.04 not getting executed. Dependencies / load order.

    - by user861181
    I want to create a simple startup script on Ubuntu 12.04, myscript.sh:

        #!/bin/sh
        sudo /etc/init.d/nginx start
        cd ~/app/current
        god -c config/resque.god
        sudo /etc/init.d/redis-server start
        echo "SCRIPT RUN"

    I have it at /etc/init.d/myscript.sh. When I do

        sudo chkconfig --level 2345 myscript.sh

    I get

        myscript.sh 2345

    When I do

        sudo chkconfig --add myscript.sh

    I get

        insserv: warning: script 'K01myscript.sh' missing LSB tags and overrides
        insserv: warning: script 'myscript.sh' missing LSB tags and overrides
        The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
        insserv: warning: script 'dbus' missing LSB tags and overrides
        ....
        myscript.sh 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    EDIT: I checked boot.log, and it turns out that the script is run, but the problem is that god is not loaded yet when the script is executed. Apparently I want to load this script as the very last thing at startup (or somehow check if god is loaded and then start the script).
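
    One hedged way to push a SysV-style script to the very end of the boot sequence on 12.04 is an LSB header that depends on $all, which also quiets the insserv warnings; a sketch to paste at the top of /etc/init.d/myscript.sh, re-running sudo update-rc.d myscript.sh defaults afterwards:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          myscript
        # Required-Start:    $all
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start nginx, resque workers and redis last
        ### END INIT INFO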

  • Trying to create a git repo that does an automatic checkout every time someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo> git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: The repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.
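
    Two details of standard git behaviour usually bite here: inside a post-receive hook GIT_DIR is already set (to the repo being pushed to), and when you set it yourself it must name the .git directory, not the working tree. A sketch of the common pattern, keeping the placeholders from the question:

        #!/bin/sh
        # Drop the hook's inherited GIT_DIR, then work in the checkout itself
        unset GIT_DIR
        cd /home/<user>/<test_repo> || exit 1
        git pull origin master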

  • GDM login screen is not displayed with VNC

    - by niboshi
    Hi, I set up a VNC server with xinetd and configured GDM so that XDMCP is enabled. The VNC connection seems okay, but the GDM login screen is not shown. Instead I can only see the old bare X screen (gray meshed background and X-shaped mouse pointer), which I can't interact with. What can I do to fix the problem? No log is written below /var/log/. Server distribution: Ubuntu Maverick.

    /etc/xinetd.d/vnc looks like this:

        service vnc1024
        {
            disable      = no
            socket_type  = stream
            protocol     = tcp
            wait         = no
            user         = nobody
            server       = /usr/bin/Xvnc
            server_args  = -inetd -query localhost -geometry 1024x768 -depth 24 -once securitytypes=none
            port         = 12345
        }

    /etc/gdm/custom.conf:

        [daemon]
        [security]
        DisallowTCP=false
        [xdmcp]
        Enable=true
        [gui]
        [greeter]
        [chooser]
        [debug]
        [servers]

    /etc/services is also configured. Thanks
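
    A gray mesh with an X cursor means Xvnc started but its -query went unanswered, so two hedged checks: GDM only reads custom.conf at startup, and XDMCP listens on UDP port 177:

        # Restart GDM so it picks up Enable=true (Upstart job on Maverick)
        sudo restart gdm

        # Confirm something is now listening for XDMCP
        sudo netstat -ulnp | grep ':177 '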

  • crontab environment

    - by Adamski
    I have written various scripts to launch Java server applications, which are typically run for 24 hours before being shut down (by invoking the same script with a different parameter). The scripts rely on environment variables defined in a file, ~/<user>.env, which I source from .bashrc. This works fine when invoking a script from the command line, but if I want to add the script as a crontab entry I run into the problem that .bashrc isn't read. My question: what is the best-practice approach to solving this problem? I realise I could define a crontab entry such as:

        * * * * 1-5 /usr/bin/bash -c '. /home/myuser/myuser.env && /home/myuser/scripts/myscript.sh'

    ... but this seems plain ugly. Alternatively I could source myuser.env at the beginning of every script, but this would become a nightmare to maintain. Any help appreciated.
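
    One hedged middle ground, assuming the scripts have a #!/bin/bash shebang: cron (Vixie cron and derivatives) accepts variable assignments at the top of a crontab, and non-interactive bash sources whatever file $BASH_ENV names, so the environment file gets read without touching either the entries or the scripts:

        BASH_ENV=/home/myuser/myuser.env
        * * * * 1-5 /home/myuser/scripts/myscript.sh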

  • Sun Java keytool importing EV certificates into a single keystore

    - by ss0
    At my current job we are using Tomcat; customers have custom web portals set up on their own local machines. EV certs are new to me: they have a two-part intermediary plus a primary certificate. For our product to work, it appears I need to get all three parts installed under a single keystore entry. How can I roll all three parts into a single x.509-compliant file for import? The syntax I am using is as follows:

        /blah/system/j2sdk/bin/keytool -import -alias foo -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts -file certname.pem -trustcacerts

    where foo is the keystore alias and certname.pem is the main cert. I have tried importing the intermediate certs under their own names into the keystore, and I don't know if it's just the product I have to work with (not vanilla Tomcat) or what, but it doesn't see those. I have seen a working system and all three certs were under a single keystore alias. Anyone have any ideas?
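
    A hedged sketch of the usual route: if the alias already holds the private key (i.e. the CSR came from this keystore), keytool accepts the whole chain as one certificate reply, so you concatenate the PEMs leaf-first and import against that alias; the file names below are assumptions:

        # Order matters: server cert first, then the two intermediates
        cat certname.pem intermediate1.pem intermediate2.pem > chain.pem

        /blah/system/j2sdk/bin/keytool -import -alias foo \
            -keystore /zix/system/jdk1.5.0_06/jre/lib/security/cacerts \
            -file chain.pem -trustcacerts

    One caveat: importing into cacerts only creates trusted-certificate entries; a chained entry requires the key pair to live under that alias, normally in the application's own keystore rather than cacerts.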

  • Run Bash Script on Another Server

    - by psce
    I want to run commands one by one to change the names of directories on several servers. When I run the script, the directories are renamed on server 1, but the directories are not found on server 2. What could be the error in the script? The script:

        #!/bin/bash
        mach_directory=/home/user/example
        erase_dir1=cache
        erase_dir2=tmp
        for i in {0..10}
        do
            user=user
            server=$(ssh $user@server$i hostname)
            ssh $user@$server find $mach_directory -type d -name $erase_dir1 ! -path "*Admin/$erase_dir1*" -print0 \
                | while IFS= read -r -d '' file ; do mv "$file" "${file}_$(date +%d%m%Y)"; done
            ssh $user@$server find $mach_directory -type d -name $erase_dir2 ! -path "*Admin/$erase_dir2*" -print0 \
                | while IFS= read -r -d '' file ; do mv "$file" "${file}_$(date +%d%m%Y)"; done
        done
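
    One structural problem is visible in the script itself: find runs on the remote host, but the pipe feeds a while loop running locally, so the mv executes on the machine running the script rather than on server N. A hedged sketch that ships the whole pipeline to the remote side (it assumes bash on the remote hosts; the path is written out because the single-quoted command is deliberately not expanded locally):

        #!/bin/bash
        # Sketch: run the entire find|mv pipeline remotely via one quoted command
        remote_cmd='
            find /home/user/example -type d \( -name cache -o -name tmp \) \
                 ! -path "*Admin/*" -print0 |
            while IFS= read -r -d "" d; do
                mv "$d" "${d}_$(date +%d%m%Y)"
            done
        '
        for i in {0..10}; do
            ssh "user@server$i" "$remote_cmd"
        done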
