Search Results

Search found 12017 results on 481 pages for 'root'.

Page 276/481

  • Using Python Paramiko for SSH: sudo: no tty present and no askpass program specified

    - by misteryes
    I want to use Paramiko to SSH into a bunch of remote nodes and run some commands with root privileges. I have an SSH key in my home directory, so I don't need to enter a password when I SSH into those nodes. But when I run the following script:

        def connect(hostname):
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(hostname, username='niky', pkey=paramiko.RSAKey.from_private_key(open('id_rsa'), 'passwd'), timeout=240.0)
            return ssh

        def run(hostname):
            ssh = connect(hostname)
            (stdin, stdout, stderr) = ssh.exec_command("sudo ls")
            res = stderr.readlines()
            print hostname + ': ' + ''.join(str(elem) for elem in res) + '\n'

        run('remote.nity.com')

    I get the following error:

        remote.nity.com: sudo: no tty present and no askpass program specified

    If I don't add sudo before ls, everything works fine. What are the potential reasons? Thanks!
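
    A hedged sketch of one common way around this: sudo wants a terminal (or a NOPASSWD rule), so asking Paramiko to allocate a pseudo-terminal for the command often clears the "no tty present" error. This assumes a Paramiko version whose exec_command accepts the get_pty argument; the hostname and username are just the ones from the question.

        import paramiko

        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect('remote.nity.com', username='niky',
                    pkey=paramiko.RSAKey.from_private_key(open('id_rsa'), 'passwd'))

        # get_pty=True asks the server for a pseudo-terminal, so sudo sees a tty
        stdin, stdout, stderr = ssh.exec_command("sudo ls", get_pty=True)
        print(stdout.read())

    Alternatively, a sudoers entry such as "niky ALL=(ALL) NOPASSWD: /bin/ls" (or dropping requiretty for that user) lets sudo run with no terminal at all; which route is appropriate depends on the security policy.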


  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use rsync to back up files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that runs rsync. Here is the command I use:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is SSH-able without a password, so that part works. The problem is with the --delete-after option. After all files are synced, I can see that the deletion step was skipped. The log file shows an error like:

        IO error encountered -- skipping file deletion

    Looking further into the log, there were some errors during the file sync:

        rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)

    So my understanding is that because rsync could not read all the files on the sending side, it skips the deletion as a safety measure. Is there any way to make --delete-after work even if there are permission errors? I do not want to force the deletion, as that would be dangerous in some situations.
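
    A hedged sketch of one approach, assuming the goal is to let the deletion pass run despite the read errors: rsync has an --ignore-errors option that tells --delete to proceed even when I/O errors occur. It should be used with care, since unreadable files then no longer hold back deletions on the destination.

        #!/bin/bash
        # Sketch: force the delete pass even when some source files are unreadable.
        rsync -rltvh --partial --stats \
          --exclude=.beagle/ --exclude=.* \
          --delete-after --ignore-errors \
          root@live_server:/home/ /home/live_server_backup/home \
          > /tmp/logfile.log 2>&1

    Fixing the underlying "Permission denied" errors (making those files readable by the rsync user, or excluding them explicitly) is usually the safer route, because it keeps rsync's safety check intact.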


  • Sharing an Apache configuration between testing vs. production

    - by Kevin Reid
    I have a personal web site with a slightly nontrivial Apache configuration. I test changes on my personal machine before uploading them to the server. The path to the files on disk and the root URL of the site are of course different between the test and production setups, and they occur in many places in the configuration (especially <Directory> blocks for special locations that have scripts, or no directory listing, and so on). What is the best way to share the common elements of the configuration, to make sure that my production environment matches my test environment as closely as possible? What I've thought of is to use SetEnv to store the paths for the current machine in environment variables, then Include a common configuration file with ${} everywhere there's something machine-specific. Are there any hazards to this method?
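
    A hedged sketch of the idea, assuming Apache 2.4 or later: the Define directive sets a variable that ${...} interpolation can use anywhere in the configuration (SetEnv, by contrast, only affects the environment handed to CGI scripts and does not feed config-file interpolation). The file names and paths below are made up for illustration.

        # per-machine file on the test box
        Define SITE_ROOT /home/kevin/www/mysite
        Define SITE_HOST test.local
        Include /etc/apache2/mysite-common.conf

        # per-machine file on the production server
        Define SITE_ROOT /var/www/mysite
        Define SITE_HOST www.example.org
        Include /etc/apache2/mysite-common.conf

        # /etc/apache2/mysite-common.conf -- shared by both machines
        <VirtualHost *:80>
            ServerName ${SITE_HOST}
            DocumentRoot ${SITE_ROOT}
            <Directory ${SITE_ROOT}/scripts>
                Options +ExecCGI
            </Directory>
        </VirtualHost>

    Older 2.2 builds vary in how much ${} substitution they support, so this sketch assumes 2.4; the main hazard either way is forgetting to define a variable on one machine, which shows up as a warning when you run apachectl configtest.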


  • Can't connect to Asterisk internal database [on hold]

    - by Bilbo
    I'm trying to get a PHP script to connect to Asterisk's internal MySQL database. I tried the standard method, for example:

        $con = mysqli_connect("192.168.1.126", "root", "mysql", "asterisk");

    However, when I log into the Asterisk server to access the MySQL database, all I need to do is type "mysql" and I'm logged in. I'm wondering whether it is possible for my PHP script to connect to the Asterisk internal database. The following error is shown:

        Warning: mysqli_connect(): (HY000/2003): Can't connect to MySQL server on '192.168.1.126' (111) in /var/www/html/project/sipSubScript.php on line 6
        Failed to connect to MySQL: Can't connect to MySQL server on '192.168.1.126' (111)
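
    A hedged diagnostic sketch: error 2003 with (111) means the TCP connection was refused, which usually points to MySQL listening only on localhost (bind-address = 127.0.0.1 or skip-networking) or to a firewall, rather than to bad credentials. The commands below assume shell access on the Asterisk box; config paths differ between distributions, and the user and network in the GRANT line are made-up examples.

        # Is MySQL listening on the network, or only on 127.0.0.1 / a socket?
        netstat -tlnp | grep 3306        # or: ss -tlnp | grep 3306

        # Look for bind-address / skip-networking in the server config
        grep -E 'bind-address|skip-networking' /etc/my.cnf /etc/mysql/my.cnf 2>/dev/null

        # If remote access is really wanted: allow it, grant a user, restart MySQL,
        # and open port 3306 in the firewall for the web server only.
        mysql -u root -e "GRANT SELECT ON asterisk.* TO 'webapp'@'192.168.1.%' IDENTIFIED BY 'choose-a-password';"

    If the PHP script can run on the Asterisk box itself, connecting to "localhost" (which uses the local socket) avoids opening MySQL to the network at all.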


  • SVN multiple repositories in subfolders

    - by fampinheiro
    I'm using Apache + SVN. Apache config file:

        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        <Location /code>
            DAV svn
            SVNParentPath "c:/repositories"
        </Location>

    Imagine I have this file structure (every t? directory is one SVN repository):

        c:/repositories
            uc1
                0809v
                    t1
                    t2
                    t3
                0809i
                    t1
                    t2
            uc2
                t1
                t2
            t1

    I can access the repositories using:

        svn://domain.com/code/uc1/0809v/t1
        svn://domain.com/code/uc1/0809v/t2
        svn://domain.com/code/uc1/0809v/t3

    I want to access them using the URLs:

        http://domain.com/code/uc1/0809v/t1
        http://domain.com/code/uc1/0809v/t2
        http://domain.com/code/uc1/0809v/t3

    and see the content of the repository in the browser. If I create a repository at the root of the SVN folder, I can see it (http://domain.com/code/t1), but when I try the other URLs I get the error:

        Could not open the requested SVN filesystem

    My question is: is it possible to make it search all subfolders for SVN repositories?
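
    A hedged sketch of the usual workaround: SVNParentPath only looks one directory level below the configured path and does not recurse, so nested layouts are normally exposed with one <Location> block per directory whose immediate children are repositories (the paths below mirror the layout in the question):

        <Location /code/uc1/0809v>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809v"
            SVNListParentPath on    # lets the browser list the repositories
        </Location>

        <Location /code/uc1/0809i>
            DAV svn
            SVNParentPath "c:/repositories/uc1/0809i"
            SVNListParentPath on
        </Location>

        <Location /code/uc2>
            DAV svn
            SVNParentPath "c:/repositories/uc2"
            SVNListParentPath on
        </Location>

    The alternative is to flatten the layout (for example uc1-0809v-t1) so a single SVNParentPath covers everything; there is no built-in option that makes SVNParentPath scan arbitrarily deep.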


  • How to set up a different DocumentRoot for IP-based requests than for domain-based requests

    - by Carlos
    My problem is simple: I have a domain, let's say example.com, and my server's IP address is e.g. 192.168.0.1. I want to set up two different virtual hosts, so that when a user enters the IP address (192.168.0.1) in the browser, they see content from /var/www/staging, but if they type example.com, they see content from /var/www. I think it's possible, but I was playing around with it and couldn't make it work. Also, I don't want a simple redirect. I need both of my apps (live and staging) working at the root of the same server. I can't buy a second domain, and I can't add a new IP address.
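
    A minimal sketch of one way to do this with name-based virtual hosts: requests that arrive with the bare IP in the Host header match the vhost whose ServerName is that IP, while requests for example.com match the named vhost. The paths and IP are the ones from the question; details such as NameVirtualHost differ between Apache 2.2 and 2.4.

        # Requests addressed to the raw IP -> staging
        <VirtualHost *:80>
            ServerName 192.168.0.1
            DocumentRoot /var/www/staging
        </VirtualHost>

        # Requests for the domain name -> live site
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www
        </VirtualHost>

    Order matters: when no ServerName matches, Apache falls back to the first vhost defined, so putting the staging vhost first also makes it the default for any other hostname that happens to point at the box, which may or may not be what you want.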


  • Setting up Samba shares on a Linux VPS

    - by 101265052760541259879
    Hi, I'm trying to set up a folder on my Linux VPS (which hosts our company's website) that Windows clients can access over the net. I know a little bit about Linux, and have used Samba before to browse Windows shares from a Linux laptop. I'm guessing it's possible to do the reverse - to share a folder from Linux to a Windows client. I have root SSH access to the VPS. Would anyone know what steps I need to take to set up the share, and how I can secure it, ideally with a simple username/password so the Windows clients can connect easily? Many thanks, Jack
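
    A hedged sketch of a minimal password-protected share, assuming a Debian/Ubuntu-style VPS (package names, service names and paths differ slightly on other distributions); the share name, path and user are placeholders.

        # Install Samba, create the directory and a dedicated account
        apt-get install samba
        mkdir -p /srv/share
        useradd --no-create-home --shell /usr/sbin/nologin shareuser
        smbpasswd -a shareuser        # the password Windows clients will use
        chown shareuser /srv/share

        # Add a simple authenticated share to the end of /etc/samba/smb.conf:
        #
        #   [company]
        #      path = /srv/share
        #      valid users = shareuser
        #      read only = no
        #      browseable = yes

        service smbd restart          # "service samba restart" on older releases

    Because the VPS is on the public internet, it is worth firewalling ports 139/445 to known client addresses (or tunnelling SMB over a VPN or SSH) rather than exposing the share to the whole world.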


  • Setting Password for phpMyAdmin

    - by anitha
    I am using RHEL 5 and PHP 5 with MySQL 5. My server is already configured and running all applications smoothly. I access MySQL as root and the password is 'anitha123', but when I access phpMyAdmin through the browser, it does not ask for a password. Could somebody please tell me how to make it prompt for a username and password? Since I am not familiar with PHP and MySQL, please tell me how to do it in a simple way. Thanks and regards, anitha
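
    A hedged sketch of the usual fix: when phpMyAdmin logs you in silently, its config.inc.php is typically set to auth_type 'config' with stored credentials; switching to cookie authentication makes it show a login form. The file is often /etc/phpMyAdmin/config.inc.php on RHEL-style systems, but the path varies by install.

        <?php
        // Prompt for credentials instead of using stored ones
        $cfg['Servers'][$i]['auth_type'] = 'cookie';   // was probably 'config'

        // Remove or comment out any stored credentials so they are not used:
        // $cfg['Servers'][$i]['user']     = 'root';
        // $cfg['Servers'][$i]['password'] = 'anitha123';

        // Cookie auth needs a random secret for encrypting the login cookie
        $cfg['blowfish_secret'] = 'put-a-long-random-string-here';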


  • Extract cert and private key from JKS keystore to use it in Apache2 httpd

    - by momo
    I tried to find this but had no luck. I created a JKS keystore and generated a CSR, then imported the signed cert and the intermediate and root CA certs. I used this keystore on Tomcat without problems. Now I want to use the same cert for the Apache2 HTTP server on the same machine. I actually want to set up mod_jk to redirect /*.jsp and servlet paths to Tomcat and serve the static content and PHP from Apache2. I tried to convert the JKS to PKCS12 with keytool so I could handle it afterwards with openssl, with a command like this:

        keytool -importkeystore -srckeystore foo.jks \
                -destkeystore foo.p12 \
                -srcstoretype jks \
                -deststoretype pkcs12

    The problem is that only the cert is exported, but not the rest of the chain. I actually used this keystore on Apache and it complained about the key and cert not matching (not sure if that is related to the chain or not). Can anyone point me in the right direction? I am not a server guy and I am kind of lost with all these things :-(
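
    A hedged sketch of the usual extraction steps, assuming the PKCS12 conversion above succeeded and foo.p12 holds the private key plus the certificate chain (file names follow the question; output paths are placeholders):

        # Pull the pieces out of the PKCS12 file with openssl
        openssl pkcs12 -in foo.p12 -nocerts -nodes   -out server.key   # private key
        openssl pkcs12 -in foo.p12 -nokeys  -clcerts -out server.crt   # server cert only
        openssl pkcs12 -in foo.p12 -nokeys  -cacerts -out chain.crt    # intermediate/root CAs

        # Then point mod_ssl at them:
        #   SSLCertificateFile      /etc/apache2/ssl/server.crt
        #   SSLCertificateKeyFile   /etc/apache2/ssl/server.key
        #   SSLCertificateChainFile /etc/apache2/ssl/chain.crt

    A "key and certificate do not match" error often means the exported cert is actually one of the CA certificates rather than the end-entity cert; comparing "openssl x509 -noout -modulus -in server.crt" with "openssl rsa -noout -modulus -in server.key" shows quickly whether the pair really belongs together.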


  • Bash Script Exits su or ssh Session Rather than Script

    - by Russ
    I am using CentOS 5.4. I created a bash script that does some checking before running any commands. If the check fails, it simply does exit 0. The problem I am having is that on our server, the script exits the su or ssh session when the exit 0 is called.

        #!/bin/bash
        # check if directory has contents and exit if none
        if [ -z "`ls /ebs_raid/import/*.txt 2>/dev/null`" ]; then
            echo "ok"
            exit 0
        fi

    Here is the output:

        [root@ip-10-251-86-31 ebs_raid]# . test.sh
        ok
        [russ@ip-10-251-86-31 ebs_raid]$

    As you can see, I was dropped out of my su session; if I hadn't been in the su session, it would have logged me out of my ssh session. I am not sure what I am doing wrong here or where to start.
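
    A hedged note on what is probably happening, with a small sketch: ". test.sh" sources the script into the current shell, so its "exit 0" exits that shell (the su login). Running the script as its own process, or using "return" in files that are meant to be sourced, avoids taking the session down.

        # Option 1: run the script in a child shell instead of sourcing it
        bash test.sh          # or: chmod +x test.sh && ./test.sh

        # Option 2: if the file really must be sourced, bail out with "return"
        if [ -z "$(ls /ebs_raid/import/*.txt 2>/dev/null)" ]; then
            echo "ok"
            return 0 2>/dev/null || exit 0   # return when sourced, exit when executed
        fi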


  • Can not su to normal user

    - by Summer Nguyen
    I have a CentOS 5.8 box with gitolite installed. It worked fine until yesterday, when gitolite stopped working (fatal: the remote end hung up unexpectedly). I logged into the box using the root account and then tried to su to the git user, but I can't. I tested again by creating a new user, and I cannot su to that user either. Any ideas? Thank you very much. P.S.: I installed postfix the day before, but I'm not sure whether postfix caused the problem.
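
    A hedged diagnostic sketch for narrowing this down: su failing for every non-root user often comes down to a full filesystem, a missing or broken login shell, exhausted process limits, or a damaged PAM/passwd entry, and each of those is quick to check.

        su - git                    # note the exact error message, if any
        df -h && df -i              # a full / or /tmp, or no free inodes, breaks logins
        getent passwd git           # does the git user still have a valid shell?
        ls -l $(getent passwd git | cut -d: -f7)   # does that shell exist?
        tail -n 50 /var/log/secure  # su/PAM errors are logged here on CentOS
        ulimit -u                   # max user processes; a runaway mail queue can exhaust forks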


  • Run Logstalgia on Remote Global Apache Log On a WHM System

    - by macinjosh
    I work for a small web development shop. We have a dedicated Linux server running WHM. For fun we want to run Logstalgia on a machine in our office, and we'd really like it to display information about all the traffic on our server. Logstalgia uses Apache's access logs to generate its visuals; the problem I have is that by default WHM does not keep an access log for all sites combined. How can I safely configure our server to output a combined/global Apache access log in a place accessible by a non-root SSH user? I am also concerned that this file could get quite large, so I'd also need to know how to have it automatically shed old information. To make things more interesting, I'm a programmer, not a sysadmin, so not everything is immediately obvious to me.
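
    A hedged sketch of one route that sidesteps the combined-file (and log-growth) problem entirely: Logstalgia can read a log stream from standard input, so the per-domain logs can be streamed over SSH instead of being merged into one global file on the server. This assumes the usual cPanel per-domain log directory /usr/local/apache/domlogs and a non-root user that has been given read access to it.

        # Run on the office machine: stream every per-domain access log from the
        # server and pipe it straight into Logstalgia.
        ssh logviewer@yourserver 'tail -q -F /usr/local/apache/domlogs/*' | logstalgia --sync

    Nothing accumulates on either machine this way; if a combined file on the server is still wanted, a cron job that concatenates the domlogs into one file plus a logrotate rule to expire it would be the more traditional approach.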


  • Freenas 8 email setup

    - by atrueresistance
    I'm struggling with setting up email reporting in FreeNAS. My build is FreeNAS-8.0.4-RELEASE-x64 (10351). I have my IPv4 default gateway set to 192.168.2.1 (my router) and Nameserver 1 set to 8.8.8.8 (Google's public DNS). Under my email tab I have:

        From email:            ***@gmail.com
        Outgoing mail server:  smtp.google.com
        Port to connect to:    465
        TLS/SSL:               SSL
        Use SMTP auth:         checked
        Username:              ***@gmail.com
        Password:              ****

    I then went into accounts and changed the root email to ***@gmail.com. When I try to send a test email, I get:

        Your test email could not be sent: timed out

    So what am I doing wrong?
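
    A hedged guess at the likely culprit, shown as the settings that normally work for Gmail: the hostname above is smtp.google.com, while Gmail's SMTP submission host is smtp.gmail.com, and a wrong or unreachable hostname tends to surface as exactly this kind of timeout.

        Outgoing mail server:  smtp.gmail.com
        Port to connect to:    465 with SSL   (or 587 with TLS)
        Use SMTP auth:         checked
        Username:              the full Gmail address
        Password:              the account (or app-specific) password

    It is also worth confirming from the FreeNAS shell that the box really has outbound connectivity, e.g. "ping 8.8.8.8" and "telnet smtp.gmail.com 465".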


  • How to add nvidia drivers after previous failure with linux mint?

    - by LessThanMe
    Before today, I had perfectly good NVIDIA drivers on my Linux Mint (15) box. I decided to update them because my performance in TF2 is less than stellar, and then things went south. I used Synaptic to install nvidia-331 and then rebooted, but when I selected Mint in GRUB I waited... and waited... and waited. Nothing happened, but the display stayed on (a completely black screen was being output). So I went into recovery mode from GRUB, went to root access, and apt-get remove --purge nvidia*'d my way out of that mess, then installed nvidia-common. Now my performance in graphics-intensive stuff (read: games, Blender) sucks, so I've been through the same thing a few times trying to re-install nvidia-current. I just want to get it back how it was. Thanks for any help! NVIDIA GTX 560
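
    A hedged sketch of a clean reinstall path on Mint/Ubuntu-style systems (the package names are the ones already mentioned in the question; on Mint 15 the Driver Manager GUI is the other way to pick a driver):

        # Remove every trace of earlier NVIDIA driver installs
        sudo apt-get remove --purge 'nvidia*'
        sudo apt-get autoremove

        # Reinstall the packaged driver that worked before
        sudo apt-get update
        sudo apt-get install nvidia-current nvidia-settings

        # Rebuild the initramfs so the nouveau blacklist takes effect, then reboot
        sudo update-initramfs -u
        sudo reboot

    If the black screen returns with the newer nvidia-331 package, /var/log/Xorg.0.log from the failed boot usually says why; a kernel/DKMS module mismatch is a common cause.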


  • Setting alias for DynDNS domain

    - by metalball
    Hey all, I've created a DynDNS domain for testing my local sites, and I'm having trouble pointing the root domain at it. At my registrar (GoDaddy) I've created a CNAME for www that points to example.dyndns.com, so going to www.example.com reaches my site. But if I go to example.com, I reach the IP of the A record instead. I can't set the A record to my IP because my IP is dynamic and changes constantly, and I can't point an A record at a domain name, only at an IP. When I try to create a CNAME record for @ pointing to example.dyndns.com, I get the error "A record of a different type exists for the hostname @, could not create CNAME". The only records using the '@' host are NS records, which I can't delete, and when I tried to add another NS record for @ pointing to example.dyndns.com, I lost connection to my site :) So what can I do to get the example.com URL to reach my site? Thanks!


  • Auto-Attach EBS-volume to a New Spot Instance?

    - by Jeff
    I am experimenting with EC2 spot instances and need some data to be retained between terminations. As I understand it, when the current price goes above my maximum bid, the instance is automatically terminated. I assume any init scripts I have will run on shutdown, so I can push data off to the EBS volume before unmounting. My question is: how can I automatically mount the same EBS volume on the new spot instance once the price goes back down, since the new instance won't have any of the init scripts I loaded onto the root volume the first time? Do I have to create a custom AMI, or is there some other way to achieve this?
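
    A hedged sketch of the user-data route, which avoids baking a custom AMI: a spot request can carry a user-data script that runs on first boot, attaches the known volume, and mounts it. This assumes the AWS CLI on the instance, a volume in the same availability zone, credentials (or an instance role) allowed to call ec2:AttachVolume, and placeholder IDs and paths throughout.

        #!/bin/bash
        # User-data sketch: attach and mount a pre-existing EBS volume at boot.
        VOLUME_ID="vol-0123456789abcdef0"     # placeholder
        DEVICE="/dev/xvdf"
        MOUNTPOINT="/data"

        # Ask the instance metadata service who and where we are
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')

        # Attach the volume, wait for the block device, then mount it
        aws ec2 attach-volume --region "$REGION" \
            --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" --device "$DEVICE"
        while [ ! -b "$DEVICE" ]; do sleep 2; done
        mkdir -p "$MOUNTPOINT"
        mount "$DEVICE" "$MOUNTPOINT"

    One caveat: a spot termination may not give shutdown scripts time to finish, so it is safer to treat the volume as something that is synced to continuously rather than only at shutdown.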


  • speeding up cvs pserver over WAN

    - by Aleksandar Ivanisevic
    I have a remote location with 5 developers who are using a CVS (pserver) repository in the main office over the WAN, but the bandwidth I have is only a couple of hundred kbps, so CVS operations are fairly slow. Is there any way to speed this up? I can rsync a local copy of the CVS root(s) over every couple of minutes, so that could handle updates, but obviously not commits. Is there any way to tell CVS to update from one server but push commits to another one? Any other ideas?
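
    A couple of hedged, low-effort things to try before splitting read and write servers (stock CVS has no real multi-server support): compressing the protocol goes a long way on a few hundred kbps, either with CVS's own -z option or by running CVS over a compressed ssh channel instead of pserver.

        # Per-command compression
        cvs -z9 update

        # Make it the default for everyone by adding a global-options line to ~/.cvsrc
        echo 'cvs -z9' >> ~/.cvsrc

        # Or run CVS over ssh (with ssh compression) instead of pserver
        export CVS_RSH=ssh
        cvs -d :ext:user@cvs.mainoffice.example:/var/cvsroot -z9 update

    The update-from-one-server, commit-to-another arrangement the question asks about is not something CVS supports natively; at that point the usual answers are a read-only rsync mirror used for updates only, or moving to a version control system designed for distributed work.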


  • Active directory authentication for Ubuntu Linux login and cifs mounting home directories...

    - by Jamie
    I've configured my Ubuntu 10.04 Server LTS Beta 2, which sits on a Windows network, to authenticate logins using Active Directory and then mount a Windows share to serve as the users' home directories. Here is what I did, starting from the initial installation of Ubuntu.

    Download and install Ubuntu Server 10.04 LTS Beta 2.

    Get updates:

        # sudo apt-get update && sudo apt-get upgrade

    Install an SSH server (sshd):

        # sudo apt-get install openssh-server

    Some would argue that you should "lock sshd down" by disabling root logins. I figure if you're smart enough to hack an ssh session for a root password, you're probably not going to be thwarted by the addition of PermitRootLogin no in the /etc/ssh/sshd_config file. If you're paranoid, or simply not convinced, then edit the file or give the following a spin:

        # (grep PermitRootLogin /etc/ssh/sshd_config && sudo sed -ri 's/(PermitRootLogin ).+/\1no/' /etc/ssh/sshd_config) || echo "PermitRootLogin not found. Add it manually."

    Install required packages:

        # sudo apt-get install winbind samba smbfs smbclient ntp krb5-user

    Do some basic networking housecleaning in preparation for the specific package configurations to come. Determine your Windows domain name, DNS server name, and the IP address of the Active Directory server (for Samba). For convenience I set environment variables for the Windows domain and DNS server. For me it was (my AD IP address was 192.168.20.11):

        # WINDOMAIN=mydomain.local && WINDNS=srv1.$WINDOMAIN

    If you want to figure out what your domain and DNS server are (I was a contractor and didn't know the network), check out this helpful reference.

    The authentication and file sharing processes on the Windows and Linux boxes need their clocks to agree. Do this with an NTP service; on the server version of Ubuntu the NTP service comes installed and preconfigured. The network I was joining had the DNS server providing the NTP service too.

        # sudo sed -ri "s/^(server[ \t]).+/\1$WINDNS/" /etc/ntp.conf

    Restart the NTP daemon:

        # sudo /etc/init.d/ntp restart

    We need to christen the Linux box on the new network; this is done by editing the hosts file (using the FQDN built from the Windows domain):

        # sudo sed -ri "s/^(127\.0\.0\.1[ \t]).*/\1$(hostname).$WINDOMAIN localhost $(hostname)/" /etc/hosts

    Kerberos configuration. The instructions that follow here aren't to be taken literally: the values for MYDOMAIN.LOCAL and srv1.mydomain.local need to be replaced with what's appropriate for your network when you edit the files. Edit the (previously installed) /etc/krb5.conf file. Find the [libdefaults] section and change (or add) the key value pair (and it is in UPPERCASE WHERE IT NEEDS TO BE):

        [libdefaults]
            default_realm = MYDOMAIN.LOCAL

    Add the following to the [realms] section of the file:

        MYDOMAIN.LOCAL = {
            kdc = srv1.mydomain.local
            admin_server = srv1.mydomain.local
            default_domain = MYDOMAIN.LOCAL
        }

    Add the following to the [domain_realm] section of the file:

        .mydomain.local = MYDOMAIN.LOCAL
        mydomain.local = MYDOMAIN.LOCAL

    Configure Samba. When it's all said and done, I don't know where Samba fits in ... I used cifs to mount the Windows shares ... regardless, my system works and this is how I did it.

    Replace /etc/samba/smb.conf (remember I was working from a clean install of Ubuntu, so I wasn't worried about breaking anything):

        [global]
            security = ads
            realm = MYDOMAIN.LOCAL
            password server = 192.168.20.11
            workgroup = MYDOMAIN
            idmap uid = 10000-20000
            idmap gid = 10000-20000
            winbind enum users = yes
            winbind enum groups = yes
            template homedir = /home/%D/%U
            template shell = /bin/bash
            client use spnego = yes
            client ntlmv2 auth = yes
            encrypt passwords = yes
            winbind use default domain = yes
            restrict anonymous = 2

    Start and stop the relevant services:

        # sudo /etc/init.d/winbind stop
        # sudo service smbd restart
        # sudo /etc/init.d/winbind start

    Set up the authentication. Edit /etc/nsswitch.conf. Here are the contents of mine:

        passwd:     compat winbind
        group:      compat winbind
        shadow:     compat winbind
        hosts:      files dns
        networks:   files
        protocols:  db files
        services:   db files
        ethers:     db files
        rpc:        db files

    Start and stop the relevant services again:

        # sudo /etc/init.d/winbind stop
        # sudo service smbd restart
        # sudo /etc/init.d/winbind start

    At this point I could log in; home directories didn't exist, but I could log in. Later I'll come back and add how I got the cifs automounting to work.

    Numerous resources were considered so I could figure this out. Here is a short list (a number of these links point to my own questions on the topic):

        Samba / Kerberos / Active Directory / WinBind
        Mounting Linux user home directories on CIFS server
        Authenticating OpenBSD against Active Directory
        How to use Active Directory to authenticate linux users
        Mounting windows shares with Active Directory permissions
        Using Active Directory authentication with Samba on Ubuntu 9.10 server 64bit
        How practical is to authenticate a Linux server against AD?
        Auto-mounting a windows share on Linux AD login
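
    The write-up stops before the promised cifs automounting step. As a hedged sketch of one common way to finish it (this is an assumption about a typical approach, not necessarily what the author ended up doing), the libpam-mount package can mount a per-user CIFS share at login using the credentials the user just typed; the server name and share path below are placeholders.

        # sudo apt-get install libpam-mount

        <!-- In /etc/security/pam_mount.conf.xml, inside <pam_mount> -->
        <volume user="*" fstype="cifs"
                server="srv1.mydomain.local" path="users/%(USER)"
                mountpoint="/home/MYDOMAIN/%(USER)"
                options="nosuid,nodev" />

    On Ubuntu the package normally hooks itself into the PAM stack automatically; the mountpoint pattern should line up with the template homedir configured in smb.conf above.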


  • SSH keys fail for one user

    - by Eli
    I just set up a new Debian server. I disabled root SSH and password authentication, so you've got to use a key file. For my primary user, everything works exactly as expected. I used ssh-keygen -t dsa and got myself a public and private key, put one in authorized_keys, and put the other in a pem file locally. I wanted to create a user that I can deploy things with, so I did basically the same process: I adduser'd it, made a .ssh folder, ran ssh-keygen -t dsa (I also tried RSA), and put the keys in their appropriate locations. No luck. I'm getting a Permission denied (publickey) error. When I use the exact same keys as the account that works, I get the same error. When I enable password authentication, I can log in via SSH with the password. How do I debug this?
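
    A hedged checklist in command form: when keys work for one account but not for another that was set up by hand (often while logged in as root), the usual culprits are ownership and permissions on the new user's home directory, .ssh, and authorized_keys, and sshd's log says exactly what it disliked. The user name is a placeholder.

        # On the server, as root: fix ownership and permissions for the deploy user
        chown -R deploy:deploy /home/deploy/.ssh
        chmod 755 /home/deploy                    # home must not be group/world writable
        chmod 700 /home/deploy/.ssh
        chmod 600 /home/deploy/.ssh/authorized_keys

        # Watch sshd's side of the story while attempting to log in
        tail -f /var/log/auth.log

        # From the client, verbose output shows exactly which keys were offered
        ssh -vvv -i deploy_key.pem deploy@server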


  • Child Folder inheriting a permission that parent folder does not have (NTFS)

    - by just.another.programmer
    I'm reconfiguring roaming profiles on my network to use proper NTFS security settings according to this article. I have reset the following permissions on the roaming profile parent folder:

        CREATOR OWNER: Full Control, Subfolders and files only
        User group with profiles: List folder / Create folders, This folder only
        SYSTEM: Full Control, This folder, subfolders, and files

    Then I select one of the actual roaming profile folders and follow these steps to fix the NTFS settings:

        1. Click Security, Advanced
        2. Uncheck "Allow inheritable permissions..."
        3. Choose "Remove..."
        4. Recheck "Allow inheritable permissions..."
        5. Click "Apply"

    After I choose Apply, I get the following permissions listed on the roaming profile folder:

        Administrators (MYDOMAIN\Administrators): Full Control, This folder only
        CREATOR OWNER: Full Control, Subfolders and files only
        SYSTEM: Full Control, This folder, subfolders, and files

    Where is the Administrators entry coming from!? There is an entry on the root of the drive for Administrators to have full control, but the roaming profile parent folder is not set to inherit any permissions, and it does not have the Administrators permission.
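
    A hedged way to see where that entry comes from, since the GUI does not always make clear whether an ACE is inherited or explicit: icacls flags inherited entries with (I), so an Administrators grant without that flag was written directly onto the folder. The path is a placeholder.

        :: List the ACL; inherited entries carry (I), explicit ones do not
        icacls "D:\Profiles\someuser"

        :: If the Administrators grant turns out to be an explicit ACE, remove it
        icacls "D:\Profiles\someuser" /remove:g Administrators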


  • Apache disabled virtual host domains resolve an enabled virtual host

    - by littleK
    I have three virtual hosts defined in Apache on my Ubuntu server for three different domains. If I disable two of the virtual hosts (a2dissite) and try those two URLs in the browser, the requests are served by the one remaining enabled site. How can I configure Apache so that the domains of the disabled virtual hosts are not served at all? This is how all three virtual hosts are configured (info is masked):

        # domain: myfirstdomain.com
        # public: /home/me/public/myfirstdomain.com/
        <VirtualHost *:80>
            # Admin email, Server Name (domain name), and any aliases
            ServerAdmin [email protected]
            ServerName www.myfirstdomain.com
            ServerAlias myfirstdomain.com

            # Index file and Document Root (where the public files are located)
            DirectoryIndex index.html index.php
            DocumentRoot /home/me/public/myfirstdomain.com/public

            # Log file locations
            LogLevel warn
            ErrorLog /home/me/public/myfirstdomain.com/log/error.log
            CustomLog /home/me/public/myfirstdomain.com/log/access.log combined
        </VirtualHost>
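
    A hedged sketch of the usual answer: with name-based virtual hosting, any request whose Host header matches no enabled vhost falls through to the first vhost Apache loads, so the fix is a catch-all default vhost (named so it sorts first, e.g. 000-default on Debian/Ubuntu) that refuses unknown hosts. The paths and names below are placeholders.

        # /etc/apache2/sites-available/000-default  (enable with a2ensite)
        <VirtualHost *:80>
            # Catches any Host header that no other vhost claims
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Directory /var/www/empty>
                Require all denied      # "Order deny,allow" + "Deny from all" on Apache 2.2
            </Directory>
        </VirtualHost>

    Requests for the disabled domains then get a 403 (or whatever the default vhost chooses to return) instead of silently landing on the remaining site; making those names fail to resolve at all would have to happen in DNS, not in Apache.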


  • Postfix configuration problem

    - by dhanya
    Can anyone help me by sharing your Postfix configuration file as a reference so that I can find my mistakes? I'm working on SUSE Linux Enterprise Server. My goal is to set up a mail server on a campus network. Postfix shows that it is running, but no mail is delivered to /var/spool/mail when I send mail using the mail command at the terminal. Here is my main.cf file; please help me find a solution:

        readme_directory = /usr/share/doc/packages/postfix-doc/README_FILES
        inet_protocols = all
        biff = no
        mail_spool_directory = /var/mail
        canonical_maps = hash:/etc/postfix/canonical
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_alias_domains = hash:/etc/postfix/virtual
        relocated_maps = hash:/etc/postfix/relocated
        transport_maps = hash:/etc/postfix/transport
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        masquerade_exceptions = root
        masquerade_classes = envelope_sender, header_sender, header_recipient
        myhostname = cmail.cetmail
        delay_warning_time = 1h
        message_strip_characters = \0
        program_directory = /usr/lib/postfix
        inet_interfaces = all
        #inet_interfaces = 127.0.0.1
        masquerade_domains = cetmail
        mydestination = cmail.cetmail, localhost.cetmail, cetmail
        defer_transports =
        mynetworks_style = subnet
        disable_dns_lookups = no
        relayhost = postfix
        mailbox_command = cyrus
        mailbox_transport =
        strict_8bitmime = no
        disable_mime_output_conversion = no
        smtpd_sender_restrictions = hash:/etc/postfix/access
        smtpd_client_restrictions =
        smtpd_helo_required = no
        smtpd_helo_restrictions =
        strict_rfc821_envelopes = no
        smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination
        smtp_sasl_auth_enable = no
        smtpd_sasl_auth_enable = no
        smtpd_use_tls = no
        smtp_use_tls = no
        alias_maps = hash:/etc/aliases
        mailbox_size_limit = 0
        message_size_limit = 10240000
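
    A hedged debugging sketch for the "Postfix runs but nothing reaches the mailbox" symptom; the mail log and queue usually say exactly where each message went. Two settings in the file above also look worth a second glance: relayhost = postfix only makes sense if a host literally named "postfix" exists, and mailbox_command = cyrus hands local delivery to a "cyrus" command that has to be installed for delivery to succeed.

        echo "test body" | mail -s "test" someuser@localhost   # send a test message
        postqueue -p                  # is anything stuck in the queue, and why?
        tail -n 50 /var/log/mail      # delivery attempts and errors (SLES logs mail here)
        postconf -n                   # the non-default settings Postfix is really using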


  • fedora 11 server won't boot from SATA disk, won't boot from CD, BIOS configuration problems

    - by Tom
    Hi all, yesterday our FC11 file/print server didn't boot; it stopped on the BIOS page with a configuration problem. (With a distinct lack of foresight) I reset the BIOS settings to default without recording the message and booted the server. The server ran until it was rebooted this morning, when it failed to mount the root partition from the SATA disk. It also failed to boot from a known-good diagnostics CD. After a few more tries, it now fails partway through the Phoenix - AwardBIOS screen where it lists the SATA/IDE devices, and it shows garbage for the identity of one of the disks, which should actually be "none". It looks like the motherboard has gone kaput. The motherboard is an EVGA NF790i. Are there any diagnostic tools I can use to determine this? (I would prefer not to send the motherboard back, only to discover that it is the RAM or the CPU.) P.S. I can't get it to boot from the memtest disk, so I can't run that diagnostic. Thanks!


  • Switch HTTP Virtual Host to HTTPS Virtual Host

    - by Kumar
    We are using the Apache web server on RHEL 6.4. We currently have name-based virtual hosts configured on the server: "NameVirtualHost *:80" is enabled and we created /etc/httpd/conf.d/example-com.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/vhost/example.com/public_html
            ServerName www.example.com
            ServerAlias example.com
            ErrorLog /var/log/httpd/vhosts/example-com-error_log
            CustomLog /var/log/httpd/vhosts/example-com-access_log common
        </VirtualHost>

    My SSL certificates are referenced in /etc/httpd/conf.d/ssl.conf. Given this setup, how can I move the site from port 80 to 443, and also redirect all port 80 requests to 443?
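
    A hedged sketch of one way to do it on RHEL 6 with mod_ssl installed; the certificate paths are placeholders and would normally match whatever ssl.conf already points at.

        # Port 80: keep the vhost, but send everything to HTTPS
        <VirtualHost *:80>
            ServerName www.example.com
            ServerAlias example.com
            Redirect permanent / https://www.example.com/
        </VirtualHost>

        # Port 443: the real site
        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName www.example.com
            ServerAlias example.com
            DocumentRoot /var/www/vhost/example.com/public_html

            SSLEngine on
            SSLCertificateFile      /etc/pki/tls/certs/example.com.crt
            SSLCertificateKeyFile   /etc/pki/tls/private/example.com.key
            SSLCertificateChainFile /etc/pki/tls/certs/example.com-chain.crt

            ErrorLog  /var/log/httpd/vhosts/example-com-ssl-error_log
            CustomLog /var/log/httpd/vhosts/example-com-ssl-access_log common
        </VirtualHost>

    Since ssl.conf on RHEL ships its own default <VirtualHost _default_:443> block with certificate directives, it is worth reusing that block or disabling it so the two do not both claim port 443.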


  • If sudo is broken, what should be used instead?

    - by ivant
    I found the following answer to a FAQ question about a security problem in OProfile:

        This "problem" only occurs if you actively, and mistakenly, configure access to OProfile via sudo. OProfile uses shell scripts which have not been audited (nor is it likely to happen) for use through the broken sudo facility (anything that lets you alter root's path arbitrarily counts as horribly broken). Do not use sudo!

    As I read it, the author of the answer suggests that sudo is broken and should not be used, not only with OProfile but for other purposes as well. Are there better alternatives to sudo on Linux?

