Search Results

Search found 15798 results on 632 pages for 'authentication required'.

Page 399/632 | < Previous Page | 395 396 397 398 399 400 401 402 403 404 405 406  | Next Page >

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment, I am specifying the identity file for every host manually, using the IdentityFile and the IdentitiesOnly directives, so that SSH will only try one key file, which works. Unfortunately, this stops working as soon as the original keys aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this: it stops working as soon as I unplug the flash drive holding the keys, and it renders agent forwarding useless because the key files aren't available on the remote host. Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. This doesn't look like a desirable solution, though. What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
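
    A minimal sketch of the public-key variant mentioned above (host and path names are hypothetical): keep only the .pub files on the machine and point IdentityFile at them; with IdentitiesOnly set, OpenSSH will offer just the matching key held by ssh-agent.

        Host web01
            HostName web01.example.com
            IdentitiesOnly yes
            # public key only; the corresponding private key stays in ssh-agent
            IdentityFile ~/.ssh/pubkeys/web01.pub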

    Read the article

  • Bluehost Emails Getting Blocked

    - by colithium
    A site for my client has the run-of-the-mill "website with users" email pattern. Create an account, get an activation email. Get an email when a subscription is expiring, etc. The site is hosted on Bluehost and currently it uses php's mail() function. There isn't much configuration that is allowed (as far as I know). The trouble is, about a third of these emails disappear into the void. They aren't in spam or junk folders, there's no bounce message, they just cease to exist. I've read about Bluehost email troubles but I can't figure out what my options are for fixing it. These aren't marketing emails, ie they have user-specific information contained within them. I suppose if a solution offers a good templating system that would be fine. What are my options? Excerpt of headers when delivered to a Gmail address: Received-SPF: neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) client-ip=00.000.000.000; DomainKey-Status: good Authentication-Results: mx.google.com; spf=neutral (google.com: 00.000.000.000 is neither permitted nor denied by best guess record for domain of domain@box###.bluehost.com) smtp.mail=domain@box###.bluehost.com; domainkeys=pass [email protected]
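
    One commonly suggested tweak for the deliverability side (an assumption, not a guaranteed fix): have mail() send with a From header and an envelope sender in your own domain instead of the default box###.bluehost.com address, so SPF/DKIM for your domain can apply. Addresses below are placeholders.

        <?php
        $to      = '[email protected]';
        $subject = 'Activate your account';
        $body    = 'Hello ...';
        $headers = "From: [email protected]\r\n" .
                   "Reply-To: [email protected]\r\n";
        // the 5th argument is passed to sendmail; -f sets the envelope (Return-Path) sender
        mail($to, $subject, $body, $headers, '[email protected]');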

    Read the article

  • What command should be used to connect via SSH through squid proxy?

    - by Raul Cardoso
    I have set up a Squid HTTP proxy (on CentOS) and intended to use it for ssh connections as well. I managed to configure PuTTY (on a Windows client) to use this proxy while connecting by ssh, and confirmed on the "target host" that the ssh connection was coming from the proxy server IP instead of the Windows client IP. I used targethost:22 for ssh and proxyserv:3128 for the proxy (along with proxy credentials). I'm now having problems connecting to the "target host" using Ubuntu and the same proxy server. I have tried the following:
        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22
        Enter proxy authentication password for test@proxyserv:
        SSH-2.0-OpenSSH_6.2p2 Ubuntu-6
    It hangs on the last line, expecting some input. All attempts resulted in a "Protocol mismatch." error. PuTTY successfully connects to the HTTP proxy and sends credentials, showing me the ssh login right away. How do I do (with commands in Ubuntu) the same thing PuTTY does? Is there any other way than the connect-proxy command to do this? Edit: Also tried the following with the same result ("Protocol mismatch"):
        me@mycomp:~$ connect-proxy -H test@proxyserv:3128 targethost 22 ssh -l myshel_login
    Thanks in advance. Edit: Solution details (thanks to NickW for pointing the right way): installed corkscrew and added this to ssh_config:
        Host targethost
        ProxyCommand corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass
    created the proxypass file (login:password), restarted ssh and used a simple ssh command:
        ssh mylogin@targethost
    (the ssh password was asked for as usual)
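
    The same corkscrew setup can also be used for a one-off connection without touching ssh_config; a sketch using the host names from the question (the proxypass file holds login:password):

        ssh -o ProxyCommand='corkscrew proxyserv 3128 %h %p /etc/ssh/proxypass' mylogin@targethost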

    Read the article

  • Strange Apache Webdav situation (OSX Will connect, Ubuntu will not)

    - by mewrei
    So basically my situation is that I have an Apache 2.2 webserver running on Linux on another box, and I have it configured to serve up WebDAV. Now here's the weird part: I can access the server just fine on my Mac using the "Connect to Server" dialog (even moved like 5GB of files over the connection). On my Ubuntu desktop cadaver will connect as well and allow me to browse. However, when I try to use Xmarks (BYOS Edition) or the GNOME "Connect to Server" dialog, it gives me a 403 Forbidden error. My server does digest authentication, if that makes any difference. Here's part of my apache2.conf file:
        <VirtualHost *:80>
          DocumentRoot "/path"
          <Directory "/path">
            Dav on
            AuthType Digest
            AuthName iTools
            AuthDigestDomain "/"
            AuthUserFile /path/to/WebDavUsers
            Options None
            AllowOverride None
            <LimitExcept GET HEAD OPTIONS>
              require valid-user
            </LimitExcept>
            Order allow,deny
            Allow from All
          </Directory>
          <Directory "/path/*/Public">
            Options +Indexes
          </Directory>
          <Directory "/path/user">
            <LimitExcept GET HEAD OPTIONS>
              require user user
            </LimitExcept>
          </Directory>
        </VirtualHost>
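
    A quick way to see what the failing clients are tripping over (assuming curl is available; substitute real paths and credentials) is to replay the WebDAV requests by hand and compare them with a request the Mac client makes successfully:

        # -v shows the full exchange; --digest negotiates Digest auth like the server expects
        curl -v --digest -u someuser:somepass -X OPTIONS http://server/path/
        curl -v --digest -u someuser:somepass -X PROPFIND -H "Depth: 1" http://server/path/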

    Read the article

  • Filezilla client unable to get directory listing from Filezilla Server (Windows)

    - by sestocker
    I've set up a self-signed certificate in FileZilla Server and enabled FTP over SSL/TLS. When I connect from the FileZilla client, I am able to authenticate but cannot get a directory listing:
        Status:   Connecting to MY_SERVER_IP:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220-FileZilla Server version 0.9.39 beta
        Response: 220-written by Tim Kosse ([email protected])
        Response: 220 Please visit http://sourceforge.net/projects/filezilla/
        Command:  AUTH TLS
        Response: 234 Using authentication type TLS
        Status:   Initializing TLS...
        Status:   Verifying certificate...
        Command:  USER MYUSER
        Status:   TLS/SSL connection established.
        Response: 331 Password required for MYUSER
        Command:  PASS ********
        Response: 230 Logged on
        Command:  PBSZ 0
        Response: 200 PBSZ=0
        Command:  PROT P
        Response: 200 Protection level set to P
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/" is current directory.
        Command:  TYPE I
        Response: 200 Type set to I
        Command:  PORT 10,10,25,85,219,172
        Response: 200 Port command successful
        Command:  MLSD
        Response: 150 Opening data channel for directory list.
        Response: 425 Can't open data connection.
        Error:    Failed to retrieve directory listing
    I have ports 21 and 50001 through 50005 open on the firewall. We are migrating servers - the 50001-50005 range is one of the things that helped get FTPS working on the old server. I'm not sure this installation would use the same ports? What else could be the problem?
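
    Judging from the log, the client issued PORT (active mode) and the server then failed to open the data connection back to it, which TLS-protected FTP makes even harder for firewalls to inspect. A sketch of the usual arrangement, offered as an assumption rather than a confirmed fix: configure 50001-50005 as the passive port range in FileZilla Server's passive mode settings, switch the client to passive mode, and make sure the range is allowed inbound, e.g. with Windows Firewall:

        netsh advfirewall firewall add rule name="FTPS passive data" dir=in action=allow protocol=TCP localport=50001-50005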

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification please make sure it is not already covered below. I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:
        ssh -X Z xclock
        ssh -X Z ssh -X Z xclock
        ssh -X Z ssh -X A xclock
        ssh -X N xclock
        ssh -X N ssh -X N xclock
    But this does not:
        ssh -X N ssh -X M xclock
        Error: Can't open display:
    The $DISPLAY is clearly not set when logging in to M. The question is why? Z and A share the same NFS home directory. N and M share the same NFS home directory. N's sshd runs on a non-standard port.
        $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
        ForwardX11 yes
        # ForwardX11Trusted yes
        $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
        ForwardX11 yes
        # ForwardX11Trusted yes
    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups). If I forward M's ssh port to my local machine it still does not work:
        terminal1$ ssh -L 8888:M:22 N
        terminal2$ ssh -X -p 8888 localhost xclock
        Error: Can't open display:
    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed as /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say:
        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 0: request x11-req confirm 0
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.
    I have a feeling the problem may be related to M missing from M:.Xauthority (caused by xauth not being run), or that $DISPLAY is somehow being unset by a login script, but I cannot figure out what is wrong.
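
    A debugging step that often narrows this down (an assumption-level sketch, not from the question): run a second sshd on M in debug mode and watch what it does with the X11 request, and check what the shell on M actually sees.

        # on M, on a spare port
        sudo /usr/sbin/sshd -d -p 2222
        # from N
        ssh -vvv -X -p 2222 M 'echo DISPLAY=$DISPLAY; which xauth'
    Any complaint about xauth, X11 forwarding being refused, or the display socket should show up in one of the two outputs; if $DISPLAY is set for the remote command but empty in an interactive shell, a login script on M is the likely culprit.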

    Read the article

  • OpenBSD logins via SSH seem to be ignoring my configured radius server

    - by Steve Kemp
    I've installed and configured a RADIUS server on localhost - it delegates auth to a remote LDAP server. Initially things look good: I can test via the console:
        # export user=skemp
        # export pass=xxx
        # radtest $user $pass localhost 1812 $secret
        Sending Access-Request of id 185 to 127.0.0.1 port 1812
            User-Name = "skemp"
            User-Password = "xxx"
            NAS-IP-Address = 192.168.1.168
            NAS-Port = 1812
        rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185,
    Similarly I can use the login tool to do the same thing:
        bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius
        Password: $pass
        authorize
    However remote logins via SSH are failing, and so are invocations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure, which I do see when using either of the previous tools. Instead sshd is just logging:
        sshd[23938]: Failed publickey for skemp from 192.168.1.9
        sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2
        sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2
    In /etc/login.conf I have this:
        # Default allowed authentication styles
        auth-defaults:auth=radius:
        ...
        radius:\
            :auth=radius:\
            :radius-server=localhost:\
            :radius-port=1812:\
            :radius-timeout=1:\
            :radius-retries=5:
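
    A debugging step that usually localizes this kind of failure (assumptions: the binary name below matches your RADIUS package; FreeRADIUS-style servers use radiusd -X): run the RADIUS daemon in the foreground while attempting an SSH login, so it is obvious whether sshd's attempt ever reaches it at all, and confirm sshd is willing to do an auth method that goes through the BSD auth framework.

        # watch for an Access-Request arriving during the SSH login attempt
        sudo radiusd -X
        # password/keyboard-interactive auth must be enabled for login.conf styles to matter
        grep -iE 'passwordauthentication|challengeresponseauthentication|kbdinteractive' /etc/ssh/sshd_config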

    Read the article

  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and I got the following error message. "TF30063: You are not authorized to access <serverName>\DefaultCollection. -The remote server returned an error: (401) Unauthorized." I remoted into the server, <serverName>, and opened the TFS Console. The logs mentioned a connection string: ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True While remoted in I open SQL Server 2008 Management Studio opening the (local) server with Windows Authentication. It shows the connection to be (local)(SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong in connecting to this instance of the database (i.e. Is the log file the wrong place to find the connection string)? Is the database so corrupted that SQL Manager Studio cannot see it anymore, although TFS could? Should I be logging into Management Studio as user SS2008? btw I don't know of any such credentials.
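
    Worth noting (an observation from the quoted strings, not a confirmed diagnosis): the connection string names the instance <serverName>\SS2008, while Management Studio was opened against the default (local) instance, which reports itself as SQL Server 9.x (2005) - that would be a different instance entirely. A quick check against the named instance, run on the server (sqlcmd ships with SQL Server; -E uses Windows authentication):

        sqlcmd -S serverName\SS2008 -E -Q "SELECT name FROM sys.databases"
    If Tfs_DefaultCollection shows up there, the database is fine and only the 401 on the web side needs chasing.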

    Read the article

  • IIS 7.5 Siteminder is not protecting ASP.net MVC requests

    - by HariM
    We are trying to use ASP.NET MVC with SiteMinder for single sign-on. This is on Windows Server 2008 R2 with IIS 7.5, SiteMinder agent version 6QMR6. Problem: SiteMinder protects physical files that exist, but it is not protecting the folder when we try to access a file that does not exist. It must redirect to the login page even if the file doesn't exist when the user is accessing a protected folder. How do I configure IIS 7.5 so that it does not verify that a file exists before SiteMinder authentication runs? SiteMinderWebAgent is a handler (wildcard script map) we created using ISAPI6WebAgent.dll. How do I protect an ASP.NET MVC request with SiteMinder? (Added this as my previous question did not solve the problem.) The MVC request shows up in the IIS log but not in the SiteMinder log. Update: Microsoft Support says that IIS 7.5 (and earlier versions) doesn't support wildcard mappings on two ISAPI handlers that both use a bare * wildcard. Currently in my case SiteMinder has a * wildcard and ASP.NET MVC (whose handler is aspnet_isapi) has a * wildcard to handle the requests, and ordered priority doesn't work in the wildcard mapping case with just *. I'm not convinced by the answer, but will wait until tomorrow for them to get back.
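
    If the immediate goal is just to stop IIS checking for a physical file before the agent handler runs, the handler mapping's resource-type restriction is the relevant setting. A hypothetical web.config sketch (the handler name and DLL path are assumptions, not the vendor's documented configuration):

        <system.webServer>
          <handlers>
            <!-- resourceType="Unspecified" tells IIS to invoke the handler even when no matching file or folder exists -->
            <add name="SiteMinderWebAgent" path="*" verb="*" modules="IsapiModule"
                 scriptProcessor="C:\smwebagent\bin\ISAPI6WebAgent.dll"
                 resourceType="Unspecified" />
          </handlers>
        </system.webServer>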

    Read the article

  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs; however, I'm looking for advice on how to best set this all up. Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server (which I'd rather avoid since it's a lot of work and a lot of money). That leaves the option of creating user accounts for these 15 Mac users on the Mac server only. The problem that this creates, though, is how do I share files between Mac users and Windows users, since they would now be using different systems for network authentication? Any advice (short of upgrading to SBS Standard) is highly appreciated. Thanks, Harry P.S. We don't run Exchange or anything else on our server ... it's mainly used for file sharing and enforcing security via group policies.

    Read the article

  • SQL server agent job to execute SSIS package fails, package succeds if run manually

    - by growse
    I've got a SSIS package installed on a SQL server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it into a local table. The remote connection string is using SQL server authentication, while the local connection is using Windows auth. The remote connection password is protected, and the package was imported setting the protection level to Rely on server storage and roles for access control. If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL server agent is running under, and then run the package using dtexec, it works. If I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following: Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.". If I try to add the password in the connection string manually under data sources in the job step dialog, it refuses to save it, always seeming to remove the 'password' bit of the connection string. I thought that SQL server agent jobs always ran under the context of the account which the SQL server agent is running under. This account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when trying to run as an agent job?
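
    One arrangement that is often suggested for the "runs fine as me, fails as a job" class of problem (a sketch under assumptions; account names and the password are placeholders, and it does not by itself address the package protection level): run the job step under an explicit SQL Agent proxy instead of the agent service account.

        USE msdb;
        CREATE CREDENTIAL SsisRunAsCred WITH IDENTITY = N'DOMAIN\svc_ssis', SECRET = N'placeholder-password';
        EXEC dbo.sp_add_proxy @proxy_name = N'SsisProxy', @credential_name = N'SsisRunAsCred', @enabled = 1;
        EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SsisProxy', @subsystem_name = N'SSIS';
        -- then select SsisProxy in the job step's "Run as" list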

    Read the article

  • Is Subversion(SVN) supported on Ubuntu 10.04 LTS 32bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04, but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on svn checkout, as if there is an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:
        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam
        # Setup the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        # replace file with the following.
        <Location /svn>
          DAV svn
          SVNParentPath /svn/
          AuthType Basic
          AuthName "Subversion Server"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
          AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>
        sudo touch /etc/apache2/svn_acl
        # replace file with the following.
        [groups]
        dba_group = tom, jerry
        report_group = tom
        [DataTeam:/]
        @dba_group = rw
        [ReportingTeam:/]
        @report_group = rw
        # Start/Stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo touch svnserve                                            # (was "touch subversion"; the lines below write to "svnserve")
        sudo sh -c 'echo "svnserve -d -r /svn" > svnserve'             # (echo, not cat: cat expects a file name)
        sudo sh -c 'echo "/etc/init.d/apache2 restart" >> svnserve'
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults
        # Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry
        # Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam
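
    If the prompt keeps coming back, Apache's error log usually says why the credentials are being rejected (Ubuntu's default log path assumed); watching it while repeating the checkout is a quick way to separate an auth problem from an authz (svn_acl) problem:

        sudo tail -f /var/log/apache2/error.log
        # in another terminal
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam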

    Read the article

  • Some clients cannot connect to Server 2008 R2 VPN

    - by Robl
    Hi all, I have a Server 2008 R2 machine set up as a VPN server. We have created a Windows group called vpn-users to control access to the VPN. Clients are all Windows 7 Pro. This all seems to work fine, except some users cannot connect to the VPN! For example, I try to log on to the VPN from a client and get an error saying the server refused the connection due to a policy in place, specifically the authentication type. Fine, I think. So I drop that user into the vpn-users group created for this and try again, and hey presto, the user can now log on! Great. Now I try this with another user, but this time I get the same error even though I have dropped them into the vpn-users group!! So does anyone have any idea why this works for some users and not for others?? I have tried moving the user from certain OUs in AD to others, copying the account, taking the user out of the vpn-users group and then back in, but get the same error each time. Any thoughts anyone?

    Read the article

  • FTP Server on Centos 5.8 - Transfer fails randomly

    - by Diego
    Hi, I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a transfer failure on a random file. It's never the same one that fails, it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:
        COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
        [27/11/2012 11:43:53] 500 Invalid command: try being more creative
        ERROR:> [27/11/2012 11:43:53] Syntax error: command unrecognized.
    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:
        ServerName "ProFTPD"
        #ServerType standalone
        ServerType inetd
        DefaultServer on
        <Global>
          DefaultRoot ~ psacln
          AllowOverwrite on
        </Global>
        DefaultTransferMode binary
        UseFtpUsers on
        TimesGMT off
        SetEnv TZ :/etc/localtime
        Port 21
        Umask 022
        MaxInstances 30
        ScoreboardFile /var/run/proftpd/scoreboard
        TransferLog /usr/local/psa/var/log/xferlog
        # Change default group for new files and directories in vhosts dir to psacln
        <Directory /var/www/vhosts>
          GroupOwner psacln
        </Directory>
        # Enable PAM authentication
        AuthPAM on
        AuthPAMConfig proftpd
        IdentLookups off
        UseReverseDNS off
        AuthGroupFile /etc/group
        Include /etc/proftpd.include
    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help. Update: the issue occurs with both CuteFTP and FileZilla. Update: replaced ProFTPD with Pure-FTPd, and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
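
    Random "500 Invalid command"/"command unrecognized" errors on otherwise valid commands, surviving a switch of FTP daemons, do point toward something on the network path (an FTP-aware firewall or ALG) interfering with the control connection rather than a daemon bug - but that is a guess. Two low-risk things to try, offered as assumptions: disable any FTP ALG/"FTP helper" on the router or firewall in front of the server, and pin a passive port range so passive transfers can be allowed through explicitly:

        # proftpd.conf - restrict passive data connections to a known range that the firewall permits
        PassivePorts 49152 50000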

    Read the article

  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then Powershell remoting works fine from my workstation to the laptop. However, the LAN is wireless, and so sometimes I will connect on a wire to my workstation. It has two ethernet ports so I have the secondary wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it: new-pssession -computername 192.168.137.161 [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException + FullyQualifiedErrorId : PSSessionOpenFailed I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (don't think this is a good idea on the laptop). I have no idea where to go from here, would appreciate any help.
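
    The error text itself points at the standard workaround when remoting to an address that can't be Kerberos-authenticated: add the ICS address to the client's TrustedHosts list and pass explicit credentials. A sketch using the IP from the question (run in an elevated PowerShell on the workstation, i.e. the remoting client):

        Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.137.161' -Concatenate -Force
        $cred = Get-Credential
        New-PSSession -ComputerName 192.168.137.161 -Credential $cred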

    Read the article

  • Windows Explorer slow to open networked computer, fast to navigate once opened

    - by Scott Noyes
    I open Windows Explorer and enter an IP for a computer on my home network (\\192.168.1.101). It takes 30 seconds or more to present a list of the shared folders. It does not appear to be an initial handshaking/authentication thing; even if I allow the view to load and then immediately load the same again, it is always slow. Once they appear, navigating through folders and opening files is fast. Also, navigating directly to a folder (\\192.168.1.101\My Music) is fast, even if it's the first connection since a restart. Using \\computerName instead of the IP address gives exactly the same results. Pings return in 1ms. net view \\computerName (or \ipAddress) returns the list of shared folders fast. This makes me suspect an Explorer issue rather than a network issue. Suspecting that the remote computer was being automatically indexed or something, I went into Tools-Folder Options-View and unchecked "Automatically search for network folders and printers," but that made no difference. De-selecting the "Folders" icon near the address bar makes no difference. Adding the IP address and computer name to the hosts file makes no difference. Both computers involved are laptops running Windows XP. Both have WiFi and cable adapters. Mine is not connected via cable. The result is the same whether the target is plugged in to the cable or not (although the IP address changes - 192.168.1.101 over cable, 192.168.1.103 over WiFi.) We are using DHCP assigned by the router.

    Read the article

  • Postfix not working

    - by user1488723
    A while ago I installed the Postfix mail server on my Ubuntu 10.04 VPS. At the time it was working well, but now it has just stopped working. I was trying to enable SASL authentication and somewhere it must have gone really wrong. I've studied the Postfix main.cf and done everything in an orderly fashion to ensure that nothing is wrong. I also have Dovecot installed and configured dovecot.conf to work with Postfix. If I try to do telnet localhost 25 while logged in on the server I just get:
        Connection closed by foreign host.
    If I try to do telnet mail.example.com 25 "from the outside" I get:
        telnet: Unable to connect to remote host: No route to host
    And when I check the server log after the failed attempts I see this:
        Jun 28 15:49:31 msv postfix/smtpd[11839]: initializing the server-side TLS engine
        Jun 28 15:49:31 msv postfix/smtpd[11839]: connect from localhost.localdomain[127.0.0.1]
        Jun 28 15:49:31 msv postfix/smtpd[11839]: warning: SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused
        Jun 28 15:49:31 msv postfix/smtpd[11839]: fatal: no SASL authentication mechanisms
        Jun 28 15:49:32 msv postfix/master[11598]: warning: process /usr/lib/postfix/smtpd pid 11839 exit status 1
        Jun 28 15:49:32 msv postfix/master[11598]: warning: /usr/lib/postfix/smtpd: bad command startup -- throttling
    The main.cf file looks like this:
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        delay_warning_time = 4h
        myhostname = mail.example.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydomain = example.com
        myorigin = $mydomain
        mydestination = $mydomain
        relayhost =
        mynetworks = 127.0.0.1
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_use_tls = yes
        smtpd_tls_loglevel = 2
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_sasl_auth_enable = yes
        smtpd_client_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination
        smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks
        smtpd_recipient_restrictions = permit_sasl_authenticated, reject_unauth_destination
        broken_sasl_auth_clients = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = /var/spool/postfix/private/auth
        smtpd_sasl_security_options = noanonymous
    The dovecot.conf file looks like this:
        protocols = imap imaps
        disable_plaintext_auth = no
        log_timestamp = "%b %d %H:%M:%S "
        ssl = yes
        ssl_cert_file = /etc/postfix/ssl/smtpd.crt
        ssl_key_file = /etc/postfix/ssl/smtpd.key
        mail_location = maildir:~/mail
        mail_access_groups = mail
        auth_username_chars = abcdefghijklmnopqrstuvwxyz
        protocol imap {
          imap_client_workarounds = delay-newmail tb-extra-mailbox-sep
        }
        auth default {
          mechanisms = plain login
          passdb pam {
          }
          userdb passwd {
          }
          socket listen {
            client {
              path = /var/spool/postfix/private/auth
              user = postfix
              group = postfix
              mode = 0660
            }
          }
        }
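
    The log line "SASL: Connect to /var/spool/postfix/private/auth failed: Connection refused" means Postfix could not reach Dovecot's authentication socket, which makes smtpd bail out and explains the immediately closed connection on port 25. The first things to check, using the paths configured above:

        sudo /etc/init.d/dovecot restart            # is Dovecot actually running?
        ls -l /var/spool/postfix/private/auth       # did it create the socket Postfix expects?
        sudo /etc/init.d/postfix restart
    (The "No route to host" from outside is a separate, network/firewall-level issue.)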

    Read the article

  • Securely executing system commands as sudo from PHP

    - by Aydin Hassan
    Is it possible? I have written a command-line tool in PHP for creating new environments for our company. It creates system users, directories, databases, vhosts and restarts Apache, amongst other things. These commands require sudo privileges. I thought it might be a nice idea to have a web interface for it, to make it easier for other non-developers to use. The web app would be behind authentication. When running from the command line I just run sudo tool.php; obviously I can't do this from a web app. How could I do this securely? Giving the apache user sudo access seems silly, as this would mean all sites hosted on the box (e.g. all our environments) would have sudo access. Is it possible to make this tool run under a different user? This user could have sudo privileges for only the commands I need. How do things like Plesk and cPanel do this? Any thoughts?
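
    A common pattern for this (a sketch under assumptions; the user, pool and script names are hypothetical): run the web interface under its own dedicated system user, for example via a separate php-fpm pool, and grant only that user password-less sudo for the single command it needs, so the other vhosts on the box gain nothing.

        # /etc/sudoers.d/envtool  (edit with: visudo -f /etc/sudoers.d/envtool)
        # envtool-web may run exactly this script as root, nothing else
        envtool-web ALL=(root) NOPASSWD: /usr/local/bin/envtool.php
    The script itself should be root-owned and not writable by envtool-web, otherwise the restriction is meaningless.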

    Read the article

  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be SL) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice: once for Apache if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people. Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes that I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. Sorry not to hyperlink the second site; apparently I need 10 reputation points. Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.

    Read the article

  • Adding new SPNs to existing service ids

    - by jmh
    We have a tomcat server using spring-security kerberos to authenticate users to the webpage against active directory. There are around 25 domain controllers. The site has two CNAME based DNS aliases. The site currently has one Service ID with SPNs registered for the DNS A record as well as each of the CNAMEs. While everything is working right now, I don't know how to reliably change this configuration without possible downtime. The reason is that clients cache kerberos tickets: http://www.juniper.net/techpubs/en_US/uac4.2/topics/concept/user-role-active-directory-about.html The 'kerbtray.exe' program is helpful for viewing and deleting Kerberos tickets on the endpoint. Old tickets must be purged from the endpoint if SPNs are updated or passwords are changed (assuming the endpoint still has a cached copy of the ticket from a prior SPNEGO request to the MAG Series device. During testing, you should purge tickets before each authentication request. Description of "klist" program used to inspect/delete cached tickets: http://technet.microsoft.com/en-us/library/hh134826.aspx So if each of the clients (users running windows) who connect to my web server have kerberos tickets that become invalid as soon as I update the SPNs or passwords, how do I ensure changes are seamless? Are there any operations that can be done safely? I can't just ask all of the users to install klist and delete their old tickets.
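
    For completeness, a hedged sketch of the usual mechanics (service-account and host names are placeholders, and this does not remove the cached-ticket concern the question is really about): register the additional CNAME-based SPNs on the existing account, verify the list, and purge tickets on a test client rather than waiting for them to expire.

        setspn -S HTTP/alias.example.com DOMAIN\svc-tomcat
        setspn -L DOMAIN\svc-tomcat
        rem on a test client, flush cached Kerberos tickets and retry the site
        klist purge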

    Read the article

  • SFTP ChRoot result in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:
        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento
    As you can see, I've chowned the magento folder to root:root; I wanted to jail the user in (and everything else, for that matter). Also, inside the magento folder I chowned everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file:
        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
          ChrootDirectory /usr/share/nginx/www/magento
          ForceCommand internal-sftp
          AllowTCPForwarding no
          X11Forwarding no
          PasswordAuthentication yes
        #UsePAM yes
    And the result is... well, I can enter my login and password, and it all ends with a "broken pipe" error.
        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed
    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp with my personal user, everything works fine.
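
    Worth knowing when debugging this: sshd is strict about ChrootDirectory - it and every directory above it must be owned by root and must not be group- or world-writable, and with mode 700 the jailed user has no execute permission on the directory at all, so the session dies right after authentication. A sketch of the usual layout, plus where sshd logs its exact complaint (Ubuntu's default path assumed):

        sudo chown root:root /usr/share/nginx/www/magento
        sudo chmod 755 /usr/share/nginx/www/magento
        # writable content then lives in a subdirectory owned by the jailed user, e.g. .../magento/htdocs
        sudo tail -f /var/log/auth.log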

    Read the article

  • Access denied error 3221225578 with file sharing to Windows server

    - by Ian Boyd
    I'm trying to access the shares on a server. The credential box appears, I enter a correct username and password, and I get access denied. The silly thing is that I can Remote Desktop to the server (using the same credentials), and I can check the Security event log for the access denied errors:
        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Account Logon
        Event ID:       681
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:    The logon to account: Administrator by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 from workstation: HARPAX failed. The error code was: 3221225578
    and
        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Logon/Logoff
        Event ID:       529
        Date:           3/19/2011
        Time:           11:54:39 PM
        User:           NT AUTHORITY\SYSTEM
        Computer:       STALWART
        Description:    Logon Failure:
            Reason:                 Unknown user name or bad password
            User Name:              Administrator
            Domain:                 stalwart
            Logon Type:             3
            Logon Process:          NtLmSsp
            Authentication Package: NTLM
            Workstation Name:       HARPAX
    Looking up the error code (3221225578), I get an article on TechNet, "Audit Account Logon Events" by Randy Franklin Smith, whose Table 1 (error codes for event ID 681) gives the reason for 3221225578 as "The username is correct, but the password is wrong." Which would seem to indicate that the username is correct, but the password is wrong. I've tried the password many times, uppercase, lowercase, on different user accounts, with and without prefixing the username with servername\username. What gives that I cannot access the server over file sharing, but I can access it over RDP?
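
    A quick way to take Explorer out of the picture and confirm exactly which credentials the server is rejecting (names as they appear in the events; run from the client HARPAX - the trailing * prompts for the password):

        net use \\STALWART\IPC$ /user:STALWART\Administrator *
    If that succeeds while Explorer still fails, the problem is cached or implicit credentials on the client rather than the account itself.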

    Read the article

  • Slow boot for OS and external devices

    - by Derek Van Cuyk
    I have been having this problem intermittently, but as of yesterday it has become more consistent. It originally started when I rebooted my PC at home and the OS (Windows 8) sat in a loop, appearing to do nothing while loading. I figured that since this was a new installation, something may have just become corrupted, and I decided to reinstall. So I tried to boot off of the thumb drive which had the installation ISO and encountered pretty much the same issue. Same with the DVD drive. So I rebooted once again and left it to load the entire night just to see if it ever would, and sure enough, this morning Windows had finally loaded. Authentication had the same problem, albeit not quite as long (it took about 5 minutes to authenticate). However, once I was in, everything appeared to be working fine and as quickly as normal, with the exception of scanning the C drive for errors, which ran unbearably slowly (45 minutes before I left for work, and it still had not finished scanning a 64GB SSD). I should mention that I have had this issue before, but never when loading the OS: it previously occurred when trying to install Windows 7 from a different DVD drive than the one I have now. It took me about 3 hours to do the install since I had to wait sometimes 30+ minutes for each step to finish processing. Does anyone have an idea as to what can cause this? I am assuming it is the motherboard, since it is responsible for communication with all the devices I'm having issues with, but I cannot find anyone else who has had a problem like this and don't want to drop more money on a motherboard if it isn't the problem. Hardware: Motherboard: Asus M4A78T-E Socket AM3 / AMD 790GX / Hybrid CrossFireX. Hard drive: Kingston SSDNow V+180 64GB Micro SATA II 3Gb/s 1.8-inch solid state drive, SVP180S2/64G. Optical drive: Samsung Blu-ray combo internal 12x readable and DVD-writable drive with LightScribe, SH-B123L/BSBP. Thanks, Derek

    Read the article

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we pay to access their websites and asked if we could authenticate our own users. Some of them said yes and sent me specs on how to do so. (One of the sites called such a page a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this and is readily extended, surely. In my searching, I've come across: SimpleSAMLphp, JOSSO, RubyCAS-Server, Shibboleth, Pubcookie, and OpenID. [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Services is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.] Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server (which, I believe, == SAML) ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. It looks like I need to pick up Ruby at some point. Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).

    Read the article

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might. The configuration of my Database Mail profile and account looks identical to the one on my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (the domain DB engine account). All messages sent via sp_send_dbmail, and those sent via the 'test email' option, are visible in the sysmail_allitems queue and remain there as 'unsent'; the send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain/myuser' and 'activation successful'. Selecting from the ExternalMailQueue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, the entire instance, and moving the other functioning instance to the other node in the cluster. Any thoughts? Thx.
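
    When items sit in sysmail_allitems as 'unsent' and then flip to 'failed', Database Mail's own msdb logs usually say why; a few standard queries to run on the upgraded instance (these objects ship with SQL Server 2008):

        EXEC msdb.dbo.sysmail_help_status_sp;      -- is the mail queue STARTED?
        EXEC msdb.dbo.sysmail_start_sp;            -- start it if it is not
        SELECT TOP (20) * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;
        SELECT TOP (20) mailitem_id, sent_status, send_request_date
        FROM msdb.dbo.sysmail_allitems ORDER BY send_request_date DESC;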

    Read the article
