Search Results

Search found 6397 results on 256 pages for 'ssh agent'.

Page 62/256

  • chef clients behind firewall

    - by tec
    I am currently learning about Chef. What I understood so far: I have to install chef-server on a server of my own or use Hosted Chef; I have to install chef-client on the servers that I want to manage, aka nodes (manually or using knife bootstrap); and I have installed several Chef tools on my own PC that I can use to manage the nodes, e.g. knife. Now in my case the special thing is that the nodes are behind a firewall/load balancer/proxy. The nodes can access servers on the outside via NAT (HTTP works, and I can configure the Chef-specific hosts to work as well). However, they can only be contacted from the outside via an SSH tunnel. There is a lot of documentation about Chef available, but I did not find an answer to these questions:

    - When using knife, is it enough if I set up an SSH tunnel manually on my own PC, or does the Chef server need to contact the nodes?
    - When using knife, can I configure it to set up an SSH tunnel automatically?
    - When using the Chef server web UI, can I configure it to connect to the nodes via an SSH tunnel, or do I need a setup where I maintain the tunnel myself, e.g. using monit? Is this even possible with Hosted Chef?
    - Instead of using knife or the web UI: can I issue the same management commands directly on the nodes using chef-client?
    - What solution would you recommend?

    Thanks a lot for taking the time to help and for answering one or more of these related questions.
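
    For what it's worth, in the standard Chef model the server does not initiate connections to nodes; nodes pull their configuration by running chef-client, and it is knife on the workstation that needs an SSH path to them. A minimal sketch of the manual-tunnel approach (hostnames and ports here are hypothetical): forward a local port to the node's SSH port through a reachable jump host, then point SSH-based tooling at localhost.

      # open a background tunnel to the node through a jump host (hypothetical names)
      ssh -f -N -L 2222:node1.internal:22 user@jumphost.example.com

      # the node is now reachable on the forwarded port
      ssh -p 2222 user@localhost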

    Read the article

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/ When I run the command

      realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

      [nss]
      filter_groups = root
      filter_users = root
      reconnection_retries = 3

      [pam]
      reconnection_retries = 3

      [sssd]
      domains = DOMAIN.COMPANY.COM
      config_file_version = 2
      services = nss, pam

      [domain/DOMAIN.COMPANY.COM]
      ad_domain = DOMAIN.COMPANY.COM
      krb5_realm = DOMAIN.COMPANY.COM
      realmd_tags = manages-system joined-with-adcli
      cache_credentials = True
      id_provider = ad
      krb5_store_password_if_offline = True
      default_shell = /bin/bash
      ldap_id_mapping = True
      use_fully_qualified_names = True
      fallback_homedir = /home/%d/%u
      access_provider = ad

    My /etc/pam.d/common-auth looks like this:

      auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000
      auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass
      auth [success=1 default=ignore] pam_sss.so use_first_pass
      # here's the fallback if no module succeeds
      auth requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      auth required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      auth optional pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

      Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
      Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
      Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
      Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
      Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
      Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
      Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
      Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
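
    A small diagnostic sketch, assuming the names above: with use_fully_qualified_names = True, sssd only knows the account in its fully qualified form, so checking the lookup directly helps separate an sssd problem from a PAM one.

      # does sssd resolve the account at all? (fully qualified, per use_fully_qualified_names)
      getent passwd nwalke@domain.company.com
      id nwalke@domain.company.com

      # watch sssd's domain log while retrying the ssh login
      sudo tail -f /var/log/sssd/sssd_DOMAIN.COMPANY.COM.log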

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using samba, ldap or ssh) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost too? And if I can, are there reasons why I shouldn't? I haven't made a Kerberos principal for localhost, and I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or ssh), but if each machine needs some custom configuration, that'd work too. e.g.

      $ ssh -v localhost
      ...
      debug1: Unspecified GSS failure.  Minor code may provide more information
      Server host/[email protected] not found in Kerberos database
      ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x addresses, something like:

      127.0.0.1   localhost
      127.0.1.1   hostname

    For no good reason, I'd changed mine a long time ago to (note the change in the third octet):

      127.0.0.1   localhost
      127.0.0.1   hostname.example.com hostname

    This seemed to work fine with everything until I tried out ssh with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)

    Read the article

  • rhel configure: limit root direct login to systems except through system consoles

    - by zhaojing
    I have to configure the system to limit direct root logins to the system console only. That is, root logins via telnet, FTP and SSH are all prohibited; root can only log in on the console. I understand that requires me to configure /etc/securetty: comment out all the tty entries and keep only "console". But from Google I found many people saying that configuring /etc/securetty will not limit SSH logins, and my own experiment confirms it (configuring /etc/securetty alone won't limit SSH login). So I added one line to /etc/pam.d/system-auth:

      auth required pam_securetty.so

    and it seems root SSH login can then be prohibited. But I can't find the reason: what is the difference between configuring pam_securetty and /etc/securetty? Can anyone help me with this? Would configuring only /etc/securetty work, or do I have to configure pam_securetty at the same time? Thanks a lot!
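
    The short reason, for what it's worth: /etc/securetty is only read by pam_securetty, which RHEL includes in /etc/pam.d/login (console and tty logins) but not in the PAM stack sshd uses, so editing the file alone never affects SSH. A hedged sketch of the more usual way to block root over SSH specifically, without touching PAM:

      # /etc/ssh/sshd_config
      PermitRootLogin no

      # then reload sshd
      service sshd restart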

    Read the article

  • Using Windows as a gateway to the internet

    - by James Wright
    My customer currently blocks outbound RDP and SSH, which means that none of their employees can get access to external Windows and Linux boxes (at the console level). However, a need has recently arisen to give access to an assortment of RDP and SSH endpoints scattered throughout the internet. The endpoint IP addresses are a moving target, and an access list exists to define what those IP addresses are. So now my customer wants to have a single Windows server that they control as the sole outbound point for RDP/SSH to the internet. Consider it a jump box to the internet. If one of our admins has access to this Windows box, they can log on and from there bounce around to RDP/SSH endpoints on the internet. Is a standard Windows 2008 box going to work as a jump box? For example, I seem to recall that Win2k8 limits the number of users that can log on simultaneously, which means that the jump box may not be accessible if lots of users are on it. Any advice as to how to make this work?

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as myself, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for "sudo rsync local root@remote:" nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is copying my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

      sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formally formulate the question: it all began with the question of how to rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:

    - The root account is locked on both systems, i.e. there are no root passwords. The only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
    - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
    - The normal unprivileged user has entered their ssh pass phrase into the ssh agent.

    Thanks
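
    One common approach, shown only as a sketch and assuming you are willing to allow passwordless sudo for the single rsync binary on the remote side (not for sudo in general), is to have rsync invoke "sudo rsync" on the remote end:

      # on the remote system, via visudo: let "me" run only rsync as root without a password
      me ALL=(root) NOPASSWD: /usr/bin/rsync

      # from the local system: local sudo still prompts as usual, only the remote rsync is exempted
      sudo rsync -av -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/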

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse ssh tunnel alive using autossh, similar to "Using Upstart to Manage AutoSSH Reverse Tunnel". This works fine, except that after a manual power-off I can no longer connect to the machine through the "central server" using the tunnel: I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client, and I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem?

    Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an ssh tunnel:

      /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver

    Machine B, which we'll call the "central server", sits in the cloud and is the host; it is "centralserver" in the command above. When Machine A is hard powered off and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A:

      ssh -p 2098 user@localhost

    Again, after a reboot of the client (A) this works fine; it is only after a hard power down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
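
    A hedged guess at a direction, assuming a stale server-side listener is the culprit: after a hard power-off, the old forwarded port can linger on the central server until its sshd notices the dead connection, and the restarted client then fails to rebind it. Making the client treat that as fatal (so autossh keeps retrying) and having the server reap dead sessions is the usual combination; roughly:

      # client side: extra options often added to an autossh reverse tunnel (sketch)
      /usr/bin/autossh -fN -i /keyfile \
          -o StrictHostKeyChecking=no \
          -o ExitOnForwardFailure=yes \
          -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
          -R 20098:localhost:22 user@centralserver

      # central server side, in /etc/ssh/sshd_config: drop dead connections promptly
      ClientAliveInterval 30
      ClientAliveCountMax 3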

    Read the article

  • dynamically schedule a lead sales agent

    - by Josh
    I have a website that I'm trying to migrate from classic ASP to ASP.NET. It had a lead schedule, where each sales agent would be featured for the current day, or part of the day. The next day a new agent would be scheduled. It was driven off a database table that had a row for each day in it, so figuring out whether a sales agent should show on a given day was easy: just find today's date in the table. The problem was it ran out of rows, and you had to run a script to update the lead days six months at a time. Plus, if there was ever any change to the schedule, you had to delete all the rows and re-run the script. So I'm trying to code it so that SQL Server figures that out for me and no script has to be run. I have a table like so:

      CREATE TABLE [dbo].[LeadSchedule](
          [leadid] [int] IDENTITY(1,1) NOT NULL,
          [userid] [int] NOT NULL,
          [sunday] [bit] NOT NULL,
          [monday] [bit] NOT NULL,
          [tuesday] [bit] NOT NULL,
          [wednesday] [bit] NOT NULL,
          [thursday] [bit] NOT NULL,
          [friday] [bit] NOT NULL,
          [saturday] [bit] NOT NULL,
          [StartDate] [smalldatetime] NULL,
          [EndDate] [smalldatetime] NULL,
          [StartTime] [time](0) NULL,
          [EndTime] [time](0) NULL,
          [order] [int] NULL,

    So the user can schedule a sales agent depending on their work schedule. Also, if they wanted to, they could split certain days, or sales agents, by time: from midnight to 4 it was one agent, from 4 to midnight it was another. So far I've tried using a numbers table, row numbers, and goofy date math, and I'm at a loss. Any suggestions on how to handle this purely from SQL code? If it helps, the table should always be small: fewer than 20 rows, never over 100.

    Update: After a few hours, all I've managed to come up with is the below. It doesn't handle filling in days not available, or times; it just rotates through all the sales agents.

      with leadTable as (
          select leadid, userid, [order], StartDate,
              case DATEPART(dw, getdate())
                  when 1 then sunday
                  when 2 then monday
                  when 3 then tuesday
                  when 4 then wednesday
                  when 5 then thursday
                  when 6 then friday
                  when 7 then saturday
              end as DayAvailable,
              ROW_NUMBER() OVER (ORDER BY [order] ASC) AS ROWID
          from LeadSchedule
          where GETDATE() >= StartDate
              and (CONVERT(time(0), GETDATE()) >= StartTime or StartTime is null)
              and (CONVERT(time(0), GETDATE()) <= EndTime or EndTime is null)
      )
      select userid, DATEADD(d, (number + ROWID - 2) * totalUsers, startdate) leadday
      from (
          select *, (select COUNT(1) from leadTable) totalUsers
          from leadTable
          inner join Numbers on 1=1
          where DayAvailable = 1
      ) tb1
      order by leadday asc

    Read the article

  • X11 from ssh on Mac OSX to Linux server doesn't work --- Gtk-WARNING **: cannot open display

    - by Cal
    Hello, I installed the program wireshark on my remote Linux box and I'm trying to run it over X11 from my Mac using SSH. Here's my terminal:

      macosx$ echo $DISPLAY
      /tmp/launch-f4w6k6/:0
      macosx$ ssh -X [email protected]
      [email protected]'s password:
      remoteubuntu:~# echo $DISPLAY

      remoteubuntu:~# wireshark
      (wireshark:18927): Gtk-WARNING **: cannot open display:

    Here are a few lines from /etc/ssh/sshd_config:

      X11Forwarding yes
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no

    Thanks for the help!
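
    A hedged diagnostic sketch, assuming the empty $DISPLAY on the remote side is the thing to chase: run the client verbosely and see whether X11 forwarding is ever negotiated, and check that xauth exists on the server (a missing xauth silently disables forwarding):

      # from the Mac: look for the X11-related debug lines
      ssh -v -X [email protected] true 2>&1 | grep -i x11

      # on the remote box: sshd needs xauth for X11 forwarding
      which xauth || sudo apt-get install xauth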

    Read the article

  • Vagrant ssh fails with VirtualBox

    - by lukewm
    vagrant up fails when it gets to the ssh part:

      myterminal$ vagrant up
      [default] VM already created. Booting if its not already running...
      [default] Running any VM customizations...
      [default] Clearing any previously set forwarded ports...
      [default] Forwarding ports...
      [default] -- ssh: 22 => 2222 (adapter 1)
      [default] -- db2: 30003 => 30003 (adapter 1)
      [default] Cleaning previously set shared folders...
      [default] Creating shared folders metadata...
      [default] Booting VM...
      [default] Waiting for VM to boot. This can take a few minutes.
      [default] Failed to connect to VM!
      Failed to connect to VM via SSH. Please verify the VM successfully booted by looking at the VirtualBox GUI.

    Then when I subsequently try to connect using vagrant ssh or vagrant reload or similar, I get this:

      myterminal$ vagrant reload
      [default] Attempting graceful shutdown of linux...
      SSH connection was refused! This usually happens if the VM failed to boot
      properly. Some steps to try to fix this: First, try reloading your VM with
      `vagrant reload`, since a simple restart sometimes fixes things. If that
      doesn't work, destroy your VM and recreate it with a `vagrant destroy`
      followed by a `vagrant up`. If that doesn't work, contact a Vagrant
      maintainer (support channels listed on the website) for more assistance.

    Please help! I'm really stumped. Kind regards, Luke

    Read the article

  • SSH broken after hostname change on EC2-hosted Ubuntu

    - by dimadima
    I changed my instance's hostname using the hostname utility and then set it in /etc/hostname so that the new name survives reboot. My main motivation was differentiating between instances at the prompt, using the \h format in PS1.

    EDIT: I also changed permissions on my home directory: I made my home directory group-writable. END EDIT

    Now I can no longer SSH into the machine. The short of it is the error Permission denied (publickey). Running ssh -v, the more verbose output is:

      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Offering RSA public key: /Users/dmitry/.ssh/id_rsa
      debug1: Authentications that can continue: publickey
      debug1: Trying private key: /Users/dmitry/.ssh/ec2key.pem
      debug1: read PEM private key done: type RSA
      debug1: Authentications that can continue: publickey
      debug1: No more authentication methods to try.
      Permission denied (publickey).

    Should I have done something after changing the hostname? Now I can't get into the instance! :(
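
    A hedged recovery sketch, assuming the group-writable home directory is what broke key authentication (sshd's StrictModes refuses public keys when the home directory, ~/.ssh or authorized_keys are writable by anyone but the owner). Paths and the user name are the usual Ubuntu-on-EC2 defaults and may differ; the commands need to run as root, e.g. from another account on the instance or after attaching the volume to a rescue instance:

      chmod 755 /home/ubuntu
      chmod 700 /home/ubuntu/.ssh
      chmod 600 /home/ubuntu/.ssh/authorized_keys
      chown -R ubuntu:ubuntu /home/ubuntu/.ssh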

    Read the article

  • Ericsson W35 ssh administration

    - by jblaster
    I picked up an Ericsson W35 at a pawn shop the other day, and when I log in to the administration section at 192.168.1.1 I get an error message about connecting to the database. It apparently supports SSH administration, and I get a password prompt when attempting to ssh [email protected], but no password I try works and there's no documentation for it. Has anyone had success with SSH on the Ericsson W35, and is this issue fixable? Thanks.

    Read the article

  • how to escape the ' in ssh?

    - by Dean Hiller
    I need to escape the ' in this command for ssh exec:

      grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print $2}'

    How do I escape that? I currently have this, which does not work:

      ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print $2}''

    nor does this:

      ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= \'{print $2}\''

    thanks, Dean
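
    Two forms that are commonly used for this, shown against the same command: either put the remote command in double quotes and protect the $ from the local shell, or close the single-quoted string, emit an escaped quote, and reopen it ('\''):

      # double quotes around the remote command, with $2 escaped so the local shell leaves it alone
      ssh host "grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '{print \$2}'"

      # or: end the single quotes, insert \' and start them again
      ssh host 'grep IPADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | awk -F= '\''{print $2}'\'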

    Read the article

  • Problem opening XWindows programs with xming and SSH Secure Shell

    - by Brian
    I've installed SSH Secure Shell and Xming on my laptop running Windows 7 (64-bit). I'm having trouble starting X Window applications from the SSH console, although I've been able to do it in the past. I've pretty much determined that it's not a server issue, because I've tried it on two different servers (both running RHEL 5). Running "echo $DISPLAY" on either server gave me "localhost:10.0". My XLaunch configuration settings are: Multiple Windows, 10 (display number), and Start no client. Once Xming has launched, I'll try to execute something like "firefox" and I get this back:

      The application 'firefox' lost its connection to the display localhost:10.0;
      most likely the X server was shut down or you killed/destroyed the application.

    I've already checked to make sure that the X server is running, and it is:

      root 12579 2689 0 Feb14 tty7 00:04:23 /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7

    Additionally, X11 tunneling has been enabled in SSH, as well as SSH 2 connections.

    Read the article

  • How to set up heartbeat for IP failover on SSH failure

    - by Tony
    I wonder if anyone can help me. I am trying to set up heartbeat on a Red Hat 5 box to fail over an IP address when ssh stops responding on a server. So basically you ssh to a VIP and then get put through to whichever server has the floating IP:

              192.168.0.100
               |         |
      /------------------------\      /------------------------\
      | Server 01              |      | Server 02              |
      | eth0   - 192.168.0.1   |------| eth0   - 192.168.0.2   |
      | eth0:0 - 192.168.0.100 |      | eth0:0 - down          |
      \------------------------/      \------------------------/

    If ssh stops responding, I want eth0:0 to be brought up on the second machine so that ssh connections carry on being served. I have tried to follow some documents I found online, so here is my current configuration:

    ha.cf:

      bcast eth0
      keepalive 2
      warntime 10
      deadtime 30
      initdead 120
      udpport 694
      auto_failback off
      node vm-bal01
      node vm-bal02
      debugfile /var/log/ha-debug
      logfile /var/log/ha-log

    authkeys:

      auth 1
      1 sha1 sshhhsecret1234

    haresources:

      server01 192.168.0.100/24/eth0:0/192.168.0.255

    Hope someone can help, as this is driving me nuts...
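
    One rough direction, offered only as a sketch and not a complete HA design (the script name is hypothetical): heartbeat by itself only watches node liveness, not sshd, so a small watchdog on each node can stop heartbeat when sshd stops answering, which releases the floating IP to the peer.

      #!/bin/sh
      # /usr/local/sbin/sshd-watchdog.sh (hypothetical) - run from cron every minute
      # if sshd stops answering on port 22, stop heartbeat so the VIP fails over
      if ! nc -z -w 5 127.0.0.1 22; then
          logger "sshd not answering, stopping heartbeat to trigger failover"
          service heartbeat stop
      fi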

    Read the article

  • Should I block bots from my site and why?

    - by Frank E
    My logs are full of bot visitors, often from Eastern Europe and China. The bots are identified as Ahrefs, Seznam, LSSRocketCrawler, Yandex, Sogou and so on. Should I block these bots from my site and why? Which ones have a legitimate purpose in increasing traffic to my site? Many of them are SEO. I have to say I see less traffic if anything since the bots have arrived in large numbers. It would not be too hard to block these since they all admit in their User Agent that they are bots.
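
    If you do decide to block some of them, the usual first step, shown here as a sketch with the user-agent tokens these crawlers publish, is robots.txt; crawlers such as AhrefsBot, SeznamBot and Yandex generally honor it, while anything that ignores it has to be blocked at the web-server level by User-Agent or IP instead.

      # robots.txt - turn away specific crawlers by their User-Agent token
      User-agent: AhrefsBot
      Disallow: /

      User-agent: SeznamBot
      Disallow: /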

    Read the article

  • Telnet or SSH to Embedded Linux based NAS?

    - by ef2011
    Do you happen to know whether it is possible to telnet or SSH to LG N1A1DD1 NAS? It seems to have lots of features that I don't need (including FTP) but I couldn't find any mention of the ability to telnet or SSH to it. If telneting or SSH-ing to it isn't possible, do you know whether it is possible to configure via its web interface a script that can periodically back it up via its USB port? (while still functioning as NAS, of course)

    Read the article

  • Securing a persistent reverse SSH connection for management

    - by bVector
    I am deploying demo Ubuntu 10.04 LTS servers in environments I do not control and would like an easy and secure way to administer these machines without having the destination firewall forward port 22 for SSH access. I've found a few guides to do this with a reverse port forward (e.g. the HowtoForge reverse ssh tunneling guide), but I'm concerned about the security of the stored ssh credentials required for the tunnel to be opened automatically. If the machine is compromised (my primary concern is that physical access to the machine is out of my control), how can I stop someone from using the stored credentials to poke around in the reverse ssh tunnel's target machine? Is it possible to secure this setup, or would you suggest an alternate method?
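
    One common hardening sketch, assuming the stored key is only ever used to open the reverse tunnel: restrict it in authorized_keys on the machine the demo servers dial back to, so that even with possession of the key nobody gets a shell or arbitrary forwarding out of it. Pair this with a dedicated unprivileged account for the tunnels; newer OpenSSH versions can narrow forwarding further with permitopen/permitlisten.

      # one line in ~/.ssh/authorized_keys of the dedicated tunnel account (key shortened)
      command="/bin/false",no-pty,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... tunnel-only-key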

    Read the article

  • cygwin rsync over ssh very slow

    - by Waleed Hamra
    I have two machines running Windows XP SP3. I have Cygwin 1.7 installed on both, with rsync and ssh installed and configured using the default settings as per the ssh-host-config and ssh-user-config programs provided. I moved the public keys into their respective locations, and basically ssh is working fine. I began an rsync operation, using:

      rsync -av --delete --hard-links local_dir username@other_machine:/some_dir

    Well... on both machines the processor is near idle, with no heavy usage. I checked IO using Process Explorer on both machines, and that too is at normal levels (1~2 MB/s), so I can't see where the bottleneck is, because network performance is awful. I'm not going over 1 MB/s, when a normal file copy using Windows sharing achieves some ~10 MB/s. What could be wrong?
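
    A couple of hedged things to try, assuming the SSH transport (encryption plus Cygwin's pipe overhead) is the bottleneck rather than rsync itself; the cipher name is only an example, check what your OpenSSH build offers:

      # cheaper cipher, no ssh compression, and whole-file transfers (delta computation rarely pays off on a LAN)
      rsync -avW --delete --hard-links -e "ssh -c aes128-ctr -o Compression=no" local_dir username@other_machine:/some_dir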

    Read the article

  • install avisynth under linux via ssh

    - by immabe
    I have a Linux server (Ubuntu) to which I have access via ssh, and I wish to install AviSynth on it. I know Windows apps can be installed with the help of Wine, but the problem is how to install the app (AviSynth) through ssh, that is, without a graphical interface. Can Wine somehow be configured to handle such a case? What should I do to install AviSynth via ssh? (I am not interested in other apps.)
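
    One hedged route, assuming the installer only needs some X display to exist: give Wine a virtual framebuffer with Xvfb, or forward X back to a desktop over ssh and click through the installer. The installer file name and the /S silent switch below are guesses (many NSIS-based installers accept /S); check the AviSynth installer's own documentation.

      # option 1: throwaway X display, unattended install
      sudo apt-get install xvfb wine
      xvfb-run wine AviSynth_258.exe /S

      # option 2: forward X back to your desktop and run the installer interactively
      ssh -X user@server
      wine AviSynth_258.exe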

    Read the article

  • Print ssh and su chain

    - by user1824885
    Is there a way to show the complete ssh and su chain in bash? For example, on server A as user aa:

      su - ab
      ssh ba@B
      su - bb

    Thus, I would like a command that prints something like this:

      1 bash aa in A
      2 su ab in A
      3 ssh ba in B
      4 su bb in B

    I tried pstree, but it does not print the users and only works with the processes of the last ssh'ed server:

      $ pstree | grep -C 5 pstree
      serversshd---sshd---sshd---bash---su---bash-+-grep
                                                  `-pstree

    Thanks and regards.
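
    Within a single host, pstree can at least show the user switches; a small sketch (the -a/-s/-u flags are from the psmisc pstree, and availability varies by version). Crossing the ssh hop still means running the same thing on each machine, since every server only sees its own processes.

      # ancestry of the current shell, with arguments and uid transitions
      pstree -a -s -u $$

      # rougher equivalent with ps: walk up the parent chain printing user and command
      pid=$$; while [ "$pid" -gt 1 ]; do ps -o pid=,user=,comm= -p "$pid"; pid=$(ps -o ppid= -p "$pid" | tr -d ' '); done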

    Read the article

  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user and add the SSH key, all of which I've automated into a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load hit on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds and each server has 5 different checks performed. I assume SSH requires more resources for each check, but is there a huge difference? (i.e. enough of a difference to warrant the switch to NRPE). If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
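
    If you stay on SSH, one hedged way to shrink the per-check cost is connection multiplexing, so the five checks every 30 seconds reuse one established connection instead of paying a full handshake each time. A sketch for the nagios user's ssh config on the Nagios server (ControlPersist needs a reasonably recent OpenSSH):

      # ~/.ssh/config for the nagios user
      Host *
          ControlMaster auto
          ControlPath ~/.ssh/cm-%r@%h:%p
          ControlPersist 10m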

    Read the article

  • Slow data transfer using SSH

    - by Floste
    The server is an Ubuntu 11.04 server running sshd. SSH works fine for console programs, but data transfer is slow, which is very annoying when transferring large files. I tried two different client programs and changed the port, but the speed is always the same. I know the server can transfer data a lot faster over SSL, which AFAIK also uses AES. I configured my SSH client to use AES too, but it had no effect. Why is SSH several times slower than SSL, and is there a way to improve SSH transfer speed?
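
    A hedged way to narrow it down, assuming the transport layer is the suspect: measure raw SSH throughput with nothing but cat on the far end, then repeat with a different cipher and with compression toggled. The option values are examples, not recommendations.

      # raw throughput through the ssh channel itself (no disk involved on the receiving side)
      dd if=/dev/zero bs=1M count=200 | ssh user@server "cat > /dev/null"

      # same test with a cheaper cipher and compression off
      dd if=/dev/zero bs=1M count=200 | ssh -c aes128-ctr -o Compression=no user@server "cat > /dev/null"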

    Read the article
