Search Results

Search found 3260 results on 131 pages for 'debian squeeze'.

  • LDAP change user pass on client

    - by Sean
    I am trying to allow LDAP users to change their passwords on client machines. I have tried PAM every which way I can think of, and /etc/ldap.conf and /etc/pam_ldap.conf as well. At this point I'm stuck. Client: Ubuntu 11.04. Server: Debian 6.0.

    The current output is this:

      sobrien4@T-E700F-1:~$ passwd
      passwd: Authentication service cannot retrieve authentication info
      passwd: password unchanged

    /var/log/auth.log gives this during the command:

      May 9 10:49:06 T-E700F-1 passwd[18515]: pam_unix(passwd:chauthtok): user "sobrien4" does not exist in /etc/passwd
      May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server
      May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: reconnecting to LDAP server...
      May 9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server

    getent passwd | grep sobrien4 (kept short since I'm testing with that account; it actually outputs all LDAP users):

      sobrien4:Ffm1oHzwnLz0U:10000:12001:Sean O'Brien:/home/sobrien4:/bin/bash

    getent group shows all LDAP groups.

    /etc/pam.d/common-password (note this is just the most current; I have tried a lot of different options):

      password required pam_cracklib.so retry=3 minlen=8 difok=3
      password [success=1 default=ignore] pam_unix.so use_authtok md5
      password required pam_ldap.so use_authtok
      password required pam_permit.so

    Popped open Wireshark as well; the server and client are talking. I have password changing working on the server, i.e. on the server that runs slapd I can log in as an LDAP user and change passwords. I tried copying the working configs from the server initially, and no dice. I also tried cloning it and just changing the IP and host, and no go. My guess is that the client is not authorized by IP or hostname to change a password. Pertaining to the slapd config, I saw this in a guide and tried it:

      access to attrs=loginShell,gecos
        by dn="cn=admin,dc=cengineering,dc=etb" write
        by self write
        by * read
      access to *
        by dn="cn=admin,dc=cengineering,dc=etb" write
        by self write
        by * read

    So LDAP seems to be working okay; I just can't change the password.
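
    For comparison, guides that cover password changes usually pair rules like those with an explicit ACL on the userPassword attribute, since the catch-all "access to *" stanza grants world read rather than the auth/write access a password change needs. A minimal sketch of that commonly recommended stanza, not taken from the question, placed before the other rules:

      access to attrs=userPassword
        by dn="cn=admin,dc=cengineering,dc=etb" write
        by self write
        by anonymous auth
        by * none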

  • PXE boot and DHCP server configuration Failing Auto Installation

    - by Harihara Vinayakaram
    I have an ISC DHCP server installed on Ubuntu 9.10. I have managed to successfully boot a PXE client, obtain a DHCP address and load the initrd.gz file. But I am facing a vague problem when the Debian installer starts up and tries to get a DHCP lease.

    The client sends a DHCP request, and I verified that it is the same MAC address. But I get a DHCPDECLINE (the client declines the address). The server then offers every address in the pool, and finally there is a DHCPNAK (no more free leases). I tried using the option no-ping, and also option one-client-one-lease, but it does not help. If I set the client to use a fixed-address then the above problem is not there and the installation proceeds smoothly.

    Can you give me any clues on what the DHCP server configuration should be? My dhcpd.conf looks like this:

      ddns-update-style none;
      option domain-name "hadoop-myorg.org";
      option domain-name-servers 192.168.3.5;
      default-lease-time 600;
      max-lease-time 7200;

      group {
        filename "pxelinux.0";
        next-server 192.168.13.184;
        host hadoop1 {
          hardware ethernet 90:e6:ba:d5:53:f8;
        }
      }

      subnet 192.168.13.0 netmask 255.255.255.0 {
        option routers 10.0.0.254;
        pool {
          option domain-name-servers 192.168.3.5;
          max-lease-time 3000;
          range 192.168.13.55 192.168.13.65;
          deny unknown-clients;
        }
      }
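
    A DHCPDECLINE normally means the client's ARP check found the offered address already in use, so it can help to see what the server believes about its leases while this happens. A hedged diagnostic sketch; the leases path is an assumption for the dhcp3-server package of that era:

      # check whether stale leases exist for this MAC or the pool
      grep -B2 -A6 "90:e6:ba:d5:53:f8" /var/lib/dhcp3/dhcpd.leases
      # watch the DISCOVER/OFFER/DECLINE exchange live
      tail -f /var/log/syslog | grep -i dhcp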

  • Sun-JRE on CentOS-4.8 RPM error: post-install scriptlet failed, exit status 5

    - by Emyr
    I have a server with CentOS 4.8 installed. The provider is rubbish, but there's only a few months left, and they're busy being sued by Chase bank, so I doubt I can get CentOS 5. I wiped the server clean using Virtuozzo, and found that the default image is VERY empty. I even had to install yum myself.

    I've reached the point where I want to install Tomcat. I downloaded the Sun JRE as a .rpm.bin file, did chmod a+x and ran it. That produced a .rpm file, which I tried installing:

      [root@host java]# rpm -Uvh jre-6u20-linux-i586.rpm
      Preparing...    ########################################### [100%]
         1:jre        ########################################### [100%]
      Unpacking JAR files...
              rt.jar...
              jsse.jar...
              charsets.jar...
              localedata.jar...
              plugin.jar...
              javaws.jar...
              deploy.jar...
      error: %post(jre-1.6.0_20-fcs.i586) scriptlet failed, exit status 5

      [root@host java]# rpm -evv jre-6u20-linux-i586.rpm
      D: opening db environment /var/lib/rpm/Packages joinenv
      D: opening db index /var/lib/rpm/Packages rdonly mode=0x0
      D: locked db index /var/lib/rpm/Packages
      D: opening db index /var/lib/rpm/Name rdonly mode=0x0
      error: package jre-6u20-linux-i586.rpm is not installed
      D: closed db index /var/lib/rpm/Name
      D: closed db index /var/lib/rpm/Packages
      D: closed db environment /var/lib/rpm/Packages

      [root@host java]# rpm -qi --scripts jre-6u20-linux-i586.rpm
      package jre-6u20-linux-i586.rpm is not installed
      [root@host java]#

    I couldn't find any results on Google for any parts of that error message, and I have very little experience with rpm (I usually use Debian). Is this a broken package, or am I missing something or some setting?
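
    Incidentally, the last two commands query the installed-package database by file name, which is why they report "not installed"; to inspect the failing scriptlet inside the package file itself one would normally add -p. A hedged sketch:

      # query the package *file* (-p), not the rpm database, and show its scriptlets
      rpm -qp --scripts jre-6u20-linux-i586.rpm
      # the %post section it prints can then be re-run by hand to see what exits with status 5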

  • Login box not shown on Ubuntu

    - by Alexandre
    I've installed Ubuntu 10.04 (64-bit) as a guest OS in VirtualBox, using Windows 7 Professional (64-bit) as the host. After the Ubuntu install, I installed Xfce4 (sudo apt-get install xfce4). I logged in using an Xfce session, and when I logged out, I couldn't see the login box anymore, only the regular GNOME background from the login screen. Then I restarted the virtual machine, and now I'm not able to see the login box at all, only the GNOME background. Does someone know how to solve this? Thanks in advance.

    Update: I've tried Xubuntu, which comes with Xfce, and I'm facing the same problem. As a common denominator between the two cases, I now see the problem arises after I've updated the system and then installed curl (via apt-get), zlib (make process), and git (make process). But I'm not sure this can be the cause...

    Update: I isolated the problem. The bad guy is zlib, but I don't have a clue why. In fact, it crashes not only Ubuntu and Xubuntu, but also Debian (yes, I've tested all three). I'll keep this question in case someone has the same problem, but I think my initial question is not valid anymore.

  • VMWare Converter - Remote Linux Install P2V

    - by Zed Said
    I am trying to get what I thought was an easy process working. Here is my situation: I have a remote Linux server (running Debian) that I would like to turn into a VM so that I can do testing on it rather than on the production server.

    I downloaded VMware Converter, went to the "convert machine" wizard, chose "Powered-on machine", and entered my remote machine's details. Clicking next shows that it correctly connects to my remote Linux box. Now, on the next screen, for destination type, it enters "VMware Infrastructure virtual machine", and I can't change this. This is where I am stuck. Why do I need another server to convert with? Is this where my VM gets sent to? And why can't I just convert the VM and save it on the computer that I am running the converter program on? If I do have to use a VMware Infrastructure destination, I am confused about what ESX is, and why and how I use it. Any help with this process would be more than appreciated!

  • Apache2 config problem

    - by Hellnar
    To use my Debian VPS for multiple domains, I did the following: removed the default site from sites-enabled/ and sites-available/ (the config and the symbolic link), and added this under sites-available/www.mysite.com:

      <VirtualHost MYIP:80>
        ServerName mysite.com
        ServerAlias www.mysite.com

        Alias /media/ /home/myuser/mysite/media/
        Alias /admin_media/ /home/myuser/django/Django-1.2/django/contrib/admin/media/

        WSGIScriptAlias / /home/myuser/mysite/wsgi.py

        ErrorLog /home/myuser/mysite/logs/error.log
        CustomLog /home/myuser/mysite/logs/access.log combined
      </VirtualHost>

    And I have changed my ports.conf to:

      NameVirtualHost MYIP:80
      Listen 80

      <IfModule mod_ssl.c>
        # SSL name based virtual hosts are not yet supported, therefore no
        # NameVirtualHost statement here
        Listen 443
      </IfModule>

    Lastly I enabled the new domain via the command a2ensite www.mysite.com. After a restart I get this error:

      myuser:~# /etc/init.d/apache2 restart
      Restarting web server: apache2apache2: Syntax error on line 281 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/sites-enabled/www.birertek.com: /etc/apache2/sites-enabled/www.birertek.com:1: <VirtualHost> was not closed.
       failed!

    Please help this poor soul.
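
    Two hedged checks that are often useful with a "<VirtualHost> was not closed" complaint at line 1: confirm the error survives a plain config test, and make sure the file that is actually enabled has no stray carriage returns or truncation. Both commands are standard Debian tooling, not taken from the question:

      apache2ctl configtest
      # ^M at line ends would reveal CRLF line endings in the vhost file
      cat -A /etc/apache2/sites-enabled/www.birertek.com | head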

  • An MCM exam, Rob? Really?

    - by Rob Farley
    I took the SQL 2008 MCM Knowledge exam while in Seattle for the PASS Summit ten days ago. I wasn’t planning to do it, but I got persuaded to try. I was meaning to write this post to explain myself before the result came out, but it seems I didn’t get typing quickly enough.

    Those of you who know me will know I’m a big fan of certification, to a point. I’ve been involved with Microsoft Learning to help create exams. I’ve kept my certifications current since I first took an exam back in 1998, sitting many in beta, across quite a variety of topics. I’ve probably become quite good at them – I know I’ve definitely passed some that I really should’ve failed. I’ve also written that I don’t think exams are worth studying for. (That’s probably not entirely true, but it depends on your motivation. If you’re doing learning, I would encourage you to focus on what you need to know to do your job better. That will help you pass an exam – but the two skills are very different. I can coach someone on how to pass an exam, but that’s a different kind of teaching when compared to coaching someone about how to do a job. For example, the real world includes a lot of “it depends”, where you develop a feel for what the influencing factors might be. In an exam, it’s better to know the “Don’t use this technology if XYZ is true” concepts.)

    As for the Microsoft Certified Master certification… I’m not opposed to the idea of having the MCM (or in the future, MCSM) cert. But the barrier to entry feels quite high for me. When it was first introduced, the nearest testing centres to me were in Kuala Lumpur and Manila. Now there’s one in Perth, but that’s still a big effort. I know there are options in the US – such as one about an hour’s drive away from downtown Seattle – but it all just seems too hard. Plus, these exams are more expensive, and all up I wasn’t sure I wanted to try them, particularly given that I don’t like to study.

    I used to study for exams. It would drive my wife crazy. I’d have some exam scheduled for some time in the future (like the time I had two booked for two consecutive days at TechEd Australia 2005), and I’d make sure I was ready. Every waking moment would be spent poring over exam material, and it wasn’t healthy. I got shaken out of that, though, when I ended up taking four exams in those two days in 2005 and passed them all. I also worked out that if I had a Second Shot available, then failing wasn’t a bad thing at all. Even without Second Shot, I’m much more okay about failing.

    But even just trying an MCM exam is a big effort. I wouldn’t want to fail one of them. Plus there’s the illusion to maintain. People have told me for a long time that I should just take the MCM exams – that I’d pass no problem. I’ve never been so sure. It was almost becoming a pride-point. Perhaps I should fail just to demonstrate that I can fail these things.

    Anyway – boB Taylor (@sqlboBT) persuaded me to try the SQL 2008 MCM Knowledge exam at the PASS Summit. They set up a testing centre in one of the rooms there, so it wasn’t out of my way at all. I had to squeeze it in between other commitments, and I certainly didn’t have time to even see what was on the syllabus, let alone study. In fact, I was so exhausted from the week that I fell asleep at least once (just for a moment though) during the actual exam. Perhaps the questions need more jokes, I’m not sure.

    I knew if I failed, then I might disappoint some people, but that I wouldn’t’ve spent a great deal of effort in trying to pass. On the other hand, if I did pass I’d then be under pressure to investigate the MCM Lab exam, which can be taken remotely (therefore, a much smaller amount of effort to make happen). In some ways, passing could end up just putting a bunch more pressure on me.

    Oh, and I did.

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short, my application is worldwide, and each database has its own time zone set within the application (not the server) in the form "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously this is unacceptable for MySQL at first per connection; I read that it handles data better if you use UTC offsets.

    Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger, but UNIX_TIMESTAMP() follows the global time zone of the server, which obviously bothers me a lot :|

    So I went searching for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. Although, to my amusement, when I ran the command I got the following:

      bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to "map" the time zone names to UTC offsets just so I can use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool?

    Thanks in advance for any help on the issue!

    P.S: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf
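
    For reference (this is standard MySQL tooling rather than anything from the question): mysql_tzinfo_to_sql ships with the MySQL server distribution and simply emits SQL that loads the system zoneinfo database into the mysql.time_zone* tables, after which named zones work per connection. A sketch of the usual invocation:

      # generate INSERTs for the mysql.time_zone* tables from the OS zoneinfo files
      mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

      # afterwards, named zones resolve per connection, e.g. in SQL:
      #   SET time_zone = 'Europe/Berlin';
      #   SELECT CONVERT_TZ(NOW(), 'UTC', 'America/Sao_Paulo');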

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2. The host is Debian Lenny. I have 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf, by adding lines like the following:

      80 = 192.168.100.100:80

    This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes, I need to add port redirections to this NAT configuration, so I add a line to the configuration file.

    Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded. However, this is not an option on this server, so how should one do this without restarting the whole host? Thanks for any ideas!

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

      iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

      - If this has any significant performance impact compared to binding directly to port 80
      - If I could make a similar setup also work for HTTPS (or if that must run on 443) – see the sketch at the end of this question
      - If there's a way to prevent other users from binding to port 8080 (in case my server crashes) – maybe block that port permanently to other users somehow?

    ...or if I should use authbind/privbind instead?

    Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

      exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

      exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

      [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough – what do you think? Thanks, Chris
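
    Regarding the HTTPS item in the list above: REDIRECT operates purely at the TCP layer, so the same pattern should work for 443 as long as Glassfish has an HTTPS listener on an unprivileged target port. A hedged sketch; 8181 is Glassfish's conventional HTTPS listener port, assumed here rather than taken from the question:

      # forward the privileged HTTPS port to an unprivileged listener, analogous to the 80->8080 rule
      iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181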

  • Can a wifi AP act as a client, and a server at the same time?

    - by nbolton
    I feel this is SF-worthy (as opposed to SU) as I go into a bit of detail on gateways/routing.

    Here's my ideal setup (if possible): there is a wifi network (let's call it bob's) which I want access to, but I have a few other computers on my network which I want to keep behind a firewall. So I was thinking of buying a wireless access point, so that I could connect to bob's network from the AP, and then from my server, connect to the AP via ethernet. So that's the first bit.

    The second part is that I want to have my own private wifi network off the back of this: can I then tell the AP to serve a new network called foobar? When I say private network, I mean that my server is actually a Debian Linux install with routing configured (and I also do some QoS stuff on it, etc.). So ideally, I'd like all the clients on the private network to be behind the server in terms of routing. However, if the private clients connect to the server via wifi, then aren't they exposed to the "public" network? That is, if someone is savvy enough to scan for my IP range.

    Also, to do routing I'd need to connect two ethernet cables between the server and the AP (because you can't do routing/QoS on virtual devices), which isn't a problem really, but I'm not sure whether the AP will allow me to separate the public and private LANs. Or, as well as the AP, am I better off getting a wifi-to-ethernet adapter for the server? I could use a wifi USB stick, but that can be tricky to set up on headless Linux; plus the signal strength is a bit lousy.

    If this question is a bit vague/spurious in places, please comment and I will explain in more detail.

  • bind9 "error sending response: host unreachable"

    - by wolfgangsz
    I have a number of DNS servers, all running BIND 9 (9.5.1, to be specific) under Fedora. Four of them are slaves, fed by a common master for our public DNS. These are all located on the public gateways of our various offices. One of them has tons of messages in its log files similar to these:

      Jul 21 17:26:18 gateway named[3487]: client 10.171.3.8#52500: view internal: error sending response: host unreachable

    I wonder where that comes from. The firewall is open on port 53 between the two machines (10.171.3.8 is an internal DNS server located on a Windows domain controller). The internal domains do NOT list the gateway as a name server (so there should not be any attempts at replicating the domains), and the gateway does not handle any internal DNS. The clients in these messages vary between the two domain controllers on the internal network and a third internal name server (running BIND 9 on Debian in a different segment of the network). Any pointers are highly welcome.

    In response to the first reply: the issue with this really is that tcpdump doesn't show any problems. Here is an extract from "tcpdump -i any port 53":

      09:13:38.283308 IP valine.aminocom.com.61815 > ns-pri.ripe.net.domain: 14075 PTR? 166.225.58.95.in-addr.arpa. (44)
      09:13:42.007410 IP gateway-eng.aminocom.com.37047 > alanine.aminocom.com.domain: 35410+ PTR? 12.3.172.10.in-addr.arpa. (42)

    At the same time, the DNS log shows:

      Jul 22 09:13:38 gateway named[3487]: client 10.171.3.6#61300: view internal: error sending response: host unreachable
      Jul 22 09:13:40 gateway named[3487]: client 10.172.3.12#56230: view internal: error sending response: host unreachable
      Jul 22 09:13:40 gateway named[3487]: client 10.171.3.8#55221: view internal: error sending response: host unreachable
      Jul 22 09:13:49 gateway named[3487]: client 10.171.3.8#51342: view internal: error sending response: host unreachable

    So clearly at 09:13:40 there were two unsuccessful attempts to connect to internal machines (10.172.3.12 and 10.171.3.8, both DNS servers), but nothing in the tcpdump output.
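
    One hedged observation on the capture: "host unreachable" is typically the kernel reflecting an ICMP error or a failed route/ARP lookup back to named, and a capture filtered on port 53 will never show those ICMP messages. Widening the filter might reveal them; a sketch:

      # capture DNS traffic *and* any ICMP errors involving one of the affected hosts
      tcpdump -ni any '(port 53 or icmp) and host 10.171.3.8'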

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install, I used /var/cache/bind/, but only because I was guided to do so; however, I just spotted a pid file in there on this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea.

    It seems that many admins use this so they don't have to type the full path when declaring a new zone. For example, instead of:

      file "/etc/bind/zones/db.foobar.com";

    the short form:

      file "db.foobar.com";

    is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones:

      options {
          // directory "/var/cache/bind";
          directory "/etc/bind/zones";
      };

    ...but something tells me this isn't good practice, since the pid file would be created there, I assume (unless it's just in /var/cache/bind by coincidence). I took a look at the manpage, but it didn't seem to say what the directory option is for. Any ideas exactly what it was designed for?

  • Problem while Installing mysql cluster

    - by Champion
    I downloaded the latest 64-bit MySQL Cluster tarball (mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23) from the MySQL site and ran the following command to install the MySQL server:

      /home/mysql_cluster/mysqlc/scripts/mysql_install_db --no-defaults --basedir=/home/mysql_cluster/mysqlc --datadir=/home/mysql_cluster/my_cluster/mysqld_data

    It's giving the following issues:

      /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
      Neither host 'db' nor 'localhost' could be looked up with
      /home/mysql_cluster/mysqlc/bin/resolveip
      Please configure the 'hostname' command to return a correct
      hostname. If you want to solve this at a later stage, restart this script
      with the --force option

    My host is pingable and the hostname command gives the correct hostname. I also tried with the --force option, but it still fails, with the following:

      /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
      Installing MySQL system tables...
      /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 393: /home/mysql_cluster/mysqlc/bin/mysqld: No such file or directory
      Installation of system tables failed!  Examine the logs in
      /home/mysql_cluster/my_cluster/mysqld_data for more information.

    It says the above files don't exist, but the files are present (checked with ls). We are using a Xen VM with 64-bit Debian Lenny. How can this issue be resolved?
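
    "No such file or directory" for a binary that ls shows usually means the kernel could not load the binary's interpreter (the dynamic loader) rather than the file itself being absent, which is common when a prebuilt tarball doesn't match the system's architecture or glibc. A couple of hedged checks:

      # what architecture/loader does the binary expect?
      file /home/mysql_cluster/mysqlc/bin/my_print_defaults
      # which shared libraries (and which loader) actually resolve on this system?
      ldd /home/mysql_cluster/mysqlc/bin/my_print_defaults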

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ.

    The issue we're having is that when data is POSTed to the server, it seems to be cut short for some users. It's reproducible for some users on the same box, but when the same user sends the same data on the same LAN from another PC, it works. The data gets cut to around 1140 bytes, I'm told.

    Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro.

    EDIT: The customer has temporarily set the ethernet frame size on the server to 500 bytes. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
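
    The fact that shrinking the frame size helps, combined with PPPoE clients (whose link MTU is 1492 rather than 1500), points at a path-MTU discovery black hole somewhere between those clients and the server. On a Linux gateway the usual workaround is to clamp the TCP MSS; a hedged sketch (whether this belongs on the Debian server or on the Astaro box is a separate question):

      # clamp the TCP MSS on forwarded connections to the discovered path MTU
      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu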

  • Unable to receive any emails using postfix, dovecot, mysql, and virtual domain/mailboxes

    - by stkdev248
    I have been working on configuring my mail server for the last couple of weeks using postfix, dovecot, and mysql. I have one virtual domain and a few virtual mailboxes. Using squirrelmail I have been able to log into my accounts and send emails out (e.g. I can send to googlemail just fine), however I am not able to receive any emails, not from the outside world nor from within my own network. I am able to telnet in using localhost, my private IP, and my public IP on port 25 without any problems (I've tried it from the server itself and from another computer on my network). This is what I get in my logs when I send an email from my googlemail account to my mail server:

    mail.log:

      Apr 14 07:36:06 server1 postfix/qmgr[1721]: BE01B520538: from=, size=733, nrcpt=1 (queue active)
      Apr 14 07:36:06 server1 postfix/pipe[3371]: 78BC0520510: to=, relay=dovecot, delay=45421, delays=45421/0/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied)
      Apr 14 07:36:06 server1 postfix/pipe[3391]: 8261B520534: to=, relay=dovecot, delay=38036, delays=38036/0.06/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3378]: 63927520532: to=, relay=dovecot, delay=38105, delays=38105/0.02/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3375]: 07F65520522: to=, relay=dovecot, delay=39467, delays=39467/0.01/0/0.17, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3381]: EEDE9520527: to=, relay=dovecot, delay=38361, delays=38360/0.04/0/0.15, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3379]: 67DFF520517: to=, relay=dovecot, delay=40475, delays=40475/0.03/0/0.16, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3387]: 3C7A052052E: to=, relay=dovecot, delay=38259, delays=38259/0.05/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:06 server1 postfix/pipe[3394]: BE01B520538: to=, relay=dovecot, delay=37682, delays=37682/0.07/0/0.11, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:36:07 server1 postfix/pipe[3384]: 3C7A052052E: to=, relay=dovecot, delay=38261, delays=38259/0.04/0/1.3, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection rate 1/60s for (smtp:209.85.213.169) at Apr 14 07:35:32
      Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max connection count 1 for (smtp:209.85.213.169) at Apr 14 07:35:32
      Apr 14 07:39:23 server1 postfix/anvil[3368]: statistics: max cache size 1 at Apr 14 07:35:32
      Apr 14 07:41:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active)
      Apr 14 07:41:06 server1 postfix/pipe[4594]: ED6005203B7: to=, relay=dovecot, delay=334, delays=334/0.01/0/0.13, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )
      Apr 14 07:51:06 server1 postfix/qmgr[1721]: ED6005203B7: from=, size=1463, nrcpt=1 (queue active)
      Apr 14 07:51:06 server1 postfix/pipe[4604]: ED6005203B7: to=, relay=dovecot, delay=933, delays=933/0.02/0/0.12, dsn=4.3.0, status=deferred (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied )

    mail-dovecot-log (the log I set for debugging):

      Apr 14 07:28:26 auth: Info: mysql(127.0.0.1): Connected to database postfixadmin
      Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): query: SELECT password FROM mailbox WHERE username = '[email protected]'
      Apr 14 07:28:26 auth: Debug: client out: OK 1 [email protected]
      Apr 14 07:28:26 auth: Debug: master in: REQUEST 1809973249 3356 1 7cfb822db820fc5da67d0776b107cb3f
      Apr 14 07:28:26 auth: Debug: sql([email protected],127.0.0.1): SELECT '/home/vmail/mydomain.com/some.user1' as home, 5000 AS uid, 5000 AS gid FROM mailbox WHERE username = '[email protected]'
      Apr 14 07:28:26 auth: Debug: master out: USER 1809973249 [email protected] home=/home/vmail/mydomain.com/some.user1 uid=5000 gid=5000
      Apr 14 07:28:26 imap-login: Info: Login: user=, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=3360, secured
      Apr 14 07:28:26 imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/home/vmail/mydomain.com/some.user1
      Apr 14 07:28:26 imap([email protected]): Debug: maildir++: root=/home/vmail/mydomain.com/some.user1/Maildir, index=/home/vmail/mydomain.com/some.user1/Maildir/indexes, control=, inbox=/home/vmail/mydomain.com/some.user1/Maildir
      Apr 14 07:48:31 imap([email protected]): Info: Disconnected: Logged out bytes=85/681

    From the output above I'm pretty sure that my problems all stem from (temporary failure. Command output: Can't open log file /var/log/mail-dovecot.log: Permission denied ), but I have no idea why I'm getting that error. I have the permissions on that log set just like the other mail logs:

      root@server1:~# ls -l /var/log/mail*
      -rw-r----- 1 syslog adm 196653 2012-04-14 07:58 /var/log/mail-dovecot.log
      -rw-r----- 1 syslog adm  62778 2012-04-13 21:04 /var/log/mail.err
      -rw-r----- 1 syslog adm 497767 2012-04-14 08:01 /var/log/mail.log

    Does anyone have any idea what I may be doing wrong? Here are my main.cf and master.cf files:

    main.cf:

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version

      # Debian specific:  Specifying a file name will cause the first
      # line of that file to be used as the name.  The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname

      smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
      biff = no

      # appending .domain is the MUA's job.
      append_dot_mydomain = no

      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h

      readme_directory = no

      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

      # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
      # information on enabling SSL in the smtp client.

      myhostname = server1.mydomain.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination =
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all

      # Virtual Configs
      virtual_uid_maps = static:5000
      virtual_gid_maps = static:5000
      virtual_mailbox_base = /home/vmail
      virtual_mailbox_domains = mysql:/etc/postfix/mysql_virtual_mailbox_domains.cf
      virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
      virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cf
      relay_domains = mysql:/etc/postfix/mysql_relay_domains.cf
      smtpd_recipient_restrictions = permit_mynetworks,
          permit_sasl_authenticated,
          reject_non_fqdn_hostname,
          reject_non_fqdn_sender,
          reject_non_fqdn_recipient,
          reject_unauth_destination,
          reject_unauth_pipelining,
          reject_invalid_hostname
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      virtual_transport=dovecot
      dovecot_destination_recipient_limit = 1

    master.cf:

      #
      # Postfix master process configuration file.  For details on the format
      # of the file, see the master(5) manual page (command: "man 5 master").
      #
      # Do not forget to execute "postfix reload" after editing this file.
      #
      # ==========================================================================
      # service type  private unpriv  chroot  wakeup  maxproc command + args
      #               (yes)   (yes)   (yes)   (never) (100)
      # ==========================================================================
      smtp      inet  n       -       -       -       -       smtpd
      #smtp     inet  n       -       -       -       1       postscreen
      #smtpd    pass  -       -       -       -       -       smtpd
      #dnsblog  unix  -       -       -       -       0       dnsblog
      #tlsproxy unix  -       -       -       -       0       tlsproxy
      #submission inet n      -       -       -       -       smtpd
      #  -o smtpd_tls_security_level=encrypt
      #  -o smtpd_sasl_auth_enable=yes
      #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
      #  -o milter_macro_daemon_name=ORIGINATING
      #smtps    inet  n       -       -       -       -       smtpd
      #  -o smtpd_tls_wrappermode=yes
      #  -o smtpd_sasl_auth_enable=yes
      #  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
      #  -o milter_macro_daemon_name=ORIGINATING
      #628      inet  n       -       -       -       -       qmqpd
      pickup    fifo  n       -       -       60      1       pickup
      cleanup   unix  n       -       -       -       0       cleanup
      qmgr      fifo  n       -       n       300     1       qmgr
      #qmgr     fifo  n       -       -       300     1       oqmgr
      tlsmgr    unix  -       -       -       1000?   1       tlsmgr
      rewrite   unix  -       -       -       -       -       trivial-rewrite
      bounce    unix  -       -       -       -       0       bounce
      defer     unix  -       -       -       -       0       bounce
      trace     unix  -       -       -       -       0       bounce
      verify    unix  -       -       -       -       1       verify
      flush     unix  n       -       -       1000?   0       flush
      proxymap  unix  -       -       n       -       -       proxymap
      proxywrite unix -       -       n       -       1       proxymap
      smtp      unix  -       -       -       -       -       smtp
      # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
      relay     unix  -       -       -       -       -       smtp
              -o smtp_fallback_relay=
      #       -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
      showq     unix  n       -       -       -       -       showq
      error     unix  -       -       -       -       -       error
      retry     unix  -       -       -       -       -       error
      discard   unix  -       -       -       -       -       discard
      local     unix  -       n       n       -       -       local
      virtual   unix  -       n       n       -       -       virtual
      lmtp      unix  -       -       -       -       -       lmtp
      anvil     unix  -       -       -       -       1       anvil
      scache    unix  -       -       -       -       1       scache
      #
      # ====================================================================
      # Interfaces to non-Postfix software. Be sure to examine the manual
      # pages of the non-Postfix software to find out what options it wants.
      #
      # Many of the following services use the Postfix pipe(8) delivery
      # agent.  See the pipe(8) man page for information about ${recipient}
      # and other message envelope options.
      # ====================================================================
      #
      # maildrop. See the Postfix MAILDROP_README file for details.
      # Also specify in main.cf: maildrop_destination_recipient_limit=1
      #
      maildrop  unix  -       n       n       -       -       pipe
        flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
      #
      # ====================================================================
      #
      # Recent Cyrus versions can use the existing "lmtp" master.cf entry.
      #
      # Specify in cyrus.conf:
      #   lmtp    cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
      #
      # Specify in main.cf one or more of the following:
      #  mailbox_transport = lmtp:inet:localhost
      #  virtual_transport = lmtp:inet:localhost
      #
      # ====================================================================
      #
      # Cyrus 2.1.5 (Amos Gouaux)
      # Also specify in main.cf: cyrus_destination_recipient_limit=1
      #
      #cyrus     unix  -       n       n       -       -       pipe
      #  user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
      #
      # ====================================================================
      # Old example of delivery via Cyrus.
      #
      #old-cyrus unix  -       n       n       -       -       pipe
      #  flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
      #
      # ====================================================================
      #
      # See the Postfix UUCP_README file for configuration details.
      #
      uucp      unix  -       n       n       -       -       pipe
        flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
      #
      # Other external delivery methods.
      #
      ifmail    unix  -       n       n       -       -       pipe
        flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
      bsmtp     unix  -       n       n       -       -       pipe
        flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
      scalemail-backend unix - n      n       -       2       pipe
        flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
      mailman   unix  -       n       n       -       -       pipe
        flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
      dovecot   unix  -       n       n       -       -       pipe
        flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -d ${recipient}

  • How do I install the pdo_mysql driver on Red Hat Enterprise Linux 6.1?

    - by Will Martin
    I have a RHEL box running PHP 5.3.3, which was installed using the binary packages provided by yum. I have installed the php-pdo package:

      # yum info php-pdo
      Loaded plugins: product-id, rhnplugin, subscription-manager
      Updating Red Hat repositories.
      Installed Packages
      Name        : php-pdo
      Arch        : x86_64
      Version     : 5.3.3
      Release     : 3.el6_1.3
      Size        : 168 k
      Repo        : installed
      From repo   : rhel-x86_64-server-6
      Summary     : A database access abstraction module for PHP applications
      URL         : http://www.php.net/
      License     : PHP
      Description : The php-pdo package contains a dynamic shared object that will add
                  : a database access abstraction layer to PHP. This module provides
                  : a common interface for accessing MySQL, PostgreSQL or other
                  : databases.

    It appears to be working correctly for SQLite databases, but not MySQL. There's no file including pdo_mysql.so in /etc/php.d, and there is no copy of pdo_mysql.so in /usr/lib64/php/modules. I'm pretty sure I just need the driver file and a line in the PHP configuration. A yum search pdo mysql didn't turn up any useful packages, and Google has failed me.

    If I were on Ubuntu or Debian, I'd apt-get install php5-mysql and be done with it. So ... where in Red Hat land do I get a copy of pdo_mysql.so, and how do I install it properly?
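
    For what it's worth, on RHEL/CentOS the PDO MySQL driver is not packaged under a pdo name at all; it ships inside the php-mysql package (which also provides the classic mysql and mysqli extensions), which is why the yum search comes up empty. A hedged sketch of the Red Hat equivalent of that apt-get line:

      yum install php-mysql
      # verify after restarting httpd; php-mysql should drop its config into /etc/php.d
      php -m | grep -i pdo_mysql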

  • Vserver: secure mails from a hacked webservice

    - by lukas
    I plan to rent and set up a vServer with Debian xor CentOS. I know from my host that the vServers are virtualized with linux-vserver. Assume there is a lighttpd and some mail transfer agent running, and we have to ensure that if the lighttpd is hacked, the stored e-mails are not easily readable. To me this sounds impossible, but maybe I missed something, or at least you guys can validate the impossibility... :)

    I think there are basically three obvious approaches. The first is to encrypt all the data. Nevertheless, the server would have to store the key somewhere, so an attacker (w|c)ould figure that out. Secondly, one could isolate critical services like lighttpd. Since I am not allowed to do 'mknod' or remount /dev in a linux-vserver, it is not possible to set up a nested vServer with lxc or similar techniques. The last approach would be a chroot, but I am not sure it would provide enough security. Further, I have not yet tried whether I am even able to chroot in a linux-vserver...?

    Thanks in advance!

  • DRBD on a disk with existing file system that takes all the place

    - by Karolis T.
    I'm currently trying to simulate the environment via Xen. I have installed two Debian systems with this filesystem layout:

      cltest1:/etc# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvda2            6.0G  417M  5.2G   8% /
      tmpfs                 257M     0  257M   0% /lib/init/rw
      udev                   10M   16K   10M   1% /dev
      tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

      global { minor-count 1; }

      resource mysql {
        protocol C;
        syncer {
          rate 10M; # 10 Megabytes
        }
        on cltest1 {
          device    /dev/drbd0;
          disk      /dev/xvda2;
          address   192.168.1.186:7789;
          meta-disk internal;
        }
        on cltest2 {
          device    /dev/drbd0;
          disk      /dev/xvda2;
          address   192.168.1.187:7789;
          meta-disk internal;
        }
      }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

      Starting DRBD resources: [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
      [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

      cltest1:/etc# drbdadm create-md mysql
      md_offset 6442446848
      al_offset 6442414080
      bm_offset 6442217472

      Found ext3 filesystem which uses 6291456 kB
      current configuration leaves usable 6291228 kB

      Device size would be truncated, which
      would corrupt data and result in
      'access beyond end of device' errors.
      You need to either
       * use external meta data (recommended)
       * shrink that filesystem first
       * zero out the device (destroy the filesystem)

      Operation refused.

      Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
      drbdadm aborting

    As I understand it, all of my problems stem from not having unallocated disk space on xvda2. What are my options besides shrinking the FS and connecting a separate physical disk? Can't the meta-data be stored in a file on the local filesystem?
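
    On the last point: DRBD 8.x wants its external metadata on a block device rather than a plain file, but the device can be small (on the order of 128 MB per metadata index), e.g. another little Xen volume attached to each guest. A hedged sketch of what the per-host section could look like with external metadata; the /dev/xvdb1 device is an assumption:

      on cltest1 {
        device    /dev/drbd0;
        disk      /dev/xvda2;
        address   192.168.1.186:7789;
        meta-disk /dev/xvdb1[0];   # external metadata, index 0, on a small dedicated volume
      }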

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this?

    My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, this is useful for exactly the opposite situation to the one I have: it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:

      tc qdisc add dev eth0.113 root handle 13: htb default 100
      tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
      tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
      tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
      tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2

    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
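
    A few hedged observations on that script: a root qdisc on eth0.113 only shapes traffic the router transmits out of that VLAN (the subnet's downloads); the subnet's uploads leave via eth2, so the "src 192.168.13.0/24" filter on eth0.113 will rarely see a packet. Also, "default 100" points at class 13:100, which is never created, so unclassified traffic bypasses shaping entirely; and in tc's unit syntax "3mbps" means 3 megabytes/s, while 3 megabits/s is written "3mbit". Whether the filters match anything at all shows up in the byte counters:

      # watch the per-class and per-filter counters while generating traffic
      tc -s class show dev eth0.113
      tc -s filter show dev eth0.113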

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator PAM module to provide two-step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below:

      # PAM configuration for the Secure Shell service

      # Read environment variables from /etc/environment and
      # /etc/security/pam_env.conf.
      auth required pam_env.so # [1]
      # In Debian 4.0 (etch), locale-related environment variables were moved to
      # /etc/default/locale, so read that as well.
      auth required pam_env.so envfile=/etc/default/locale

      # Standard Un*x authentication.
      @include common-auth

      # Disallow non-root logins when /etc/nologin exists.
      account required pam_nologin.so

      # Uncomment and edit /etc/security/access.conf if you need to set complex
      # access limits that are hard to express in sshd_config.
      # account required pam_access.so

      # Standard Un*x authorization.
      @include common-account

      # Standard Un*x session setup and teardown.
      @include common-session

      # Print the message of the day upon successful login.
      session optional pam_motd.so # [1]

      # Print the status of the user's mailbox upon successful login.
      session optional pam_mail.so standard noenv # [1]

      # Set up user limits from /etc/security/limits.conf.
      session required pam_limits.so

      # Set up SELinux capabilities (need modified pam)
      # session required pam_selinux.so multiple

      # Standard Un*x password updating.
      @include common-password
      auth required pam_google_authenticator.so

    I've already tried adding an "auth sufficient pam_exec.so /etc/pam.d/ip.sh" line above the google-authenticator line, but I can't understand how to check an IP address in the bash script.
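
    A hedged alternative to pam_exec: PAM already ships pam_access, and its success can be used to skip the next module, which is a common pattern for whitelisting an IP past google-authenticator. A sketch; the file name access-local.conf and the 203.0.113.10 address are placeholders, not from the question:

      # in /etc/pam.d/sshd, replacing the last auth line:
      auth [success=1 default=ignore] pam_access.so accessfile=/etc/security/access-local.conf
      auth required pam_google_authenticator.so

      # /etc/security/access-local.conf: matched IPs skip the verification code
      + : ALL : 203.0.113.10
      - : ALL : ALL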

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run Solaris. After switching the traffic over, I see two things:

      a) the web site loads more slowly (5-10 seconds with images, compared to 2-3 seconds on Linux)
      b) sometimes haproxy fails to perform a "lifecheck" (get a special web page and analyze the http response code) due to a socket timeout.

    After switching traffic back to Linux everything is okay. I've tried to tune every parameter I found in /dev/tcp, but no progress. I believe the problem is in some open-socket limitation. If someone can point me to the answer, I would greatly appreciate it.

    P.S. haproxy runs under a Xen DomU on Linux (kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).
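
    For concreteness, Solaris exposes those TCP tunables through ndd against /dev/tcp; a hedged sketch of inspecting and raising the listen-backlog parameters mentioned above (the values are illustrative only, and ndd settings are lost at reboot unless scripted):

      # current listen-backlog limits (full and half-open queues)
      ndd -get /dev/tcp tcp_conn_req_max_q
      ndd -get /dev/tcp tcp_conn_req_max_q0
      # raise them; takes effect immediately
      ndd -set /dev/tcp tcp_conn_req_max_q 4096
      ndd -set /dev/tcp tcp_conn_req_max_q0 4096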

  • What does this ssh error mean?

    - by kevin
    This is my last resort; I've been trying to figure out the problem here for hours. Here's the deal: I have copied my private key from machine #1 onto machine #2. Machine #1 is able to connect via ssh to a server with my public key just fine, but machine #2 gives the following output when trying to connect to the server:

      $ ssh -vvv -i /home/kevin/.ssh/kev_rsa [email protected] -p 22312
      OpenSSH_5.3p1 Debian-3ubuntu6, OpenSSL 0.9.8k 25 Mar 2009
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 192.168.1.244 [192.168.1.244] port 22312.
      debug1: Connection established.
      debug3: Not a RSA1 key file /home/kevin/.ssh/kev_rsa.
      debug2: key_type_from_name: unknown key type '-----BEGIN'
      debug3: key_read: missing keytype
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      debug3: key_read: missing whitespace
      ...
      Permission denied (publickey).

    There is obviously more debug output that I have omitted, and I can provide it upon request. I am convinced, however, that it doesn't like my private key file. I also had a suspicion that it has to do with how I copied it from machine #1 to machine #2: I copy/pasted the text from the private key onto a flash drive. This might be the problem; however, when I duplicated this method on another working private key file and did a diff of the original against the copy/pasted one, they were identical.

    I've been struggling with this. If I could just get a little more information on why it doesn't like my key, I could fix it, I'm sure. Anyone have any ideas on this? Is there some meta-data somewhere that tells ssh that a file is in fact an RSA key?
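
    A hedged aside: those debug2/debug3 lines are what OpenSSH prints while probing the file against each key format, and they appear even for keys that go on to authenticate successfully, so the key file is worth validating independently:

      # does OpenSSH accept the private key file at all? prints the fingerprint if so
      ssh-keygen -l -f /home/kevin/.ssh/kev_rsa
      # ssh also refuses private keys that are group/world readable
      chmod 600 /home/kevin/.ssh/kev_rsa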

  • Why is mkfs overwriting the LUKS encryption header on LVM on RAID partitions on Ubuntu 12.04?

    - by Starchy
    I'm trying to set up a couple of LUKS-encrypted partitions to be mounted after boot on a new Ubuntu server which was installed with LVM on top of software RAID. After running cryptsetup luksFormat, the LUKS header is clearly visible on the volume. After running any flavor of mkfs, the header is overwritten (which does not happen on other systems that were set up without LVM), and cryptsetup will no longer recognize the device as a LUKS device.

      # cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/dm-1

      WARNING!
      ========
      This will overwrite data on /dev/dm-1 irrevocably.

      Are you sure? (Type uppercase yes): YES
      Enter LUKS passphrase:
      Verify passphrase:

      # hexdump -C /dev/dm-1|head -n5
      00000000  4c 55 4b 53 ba be 00 01  61 65 73 00 00 00 00 00  |LUKS....aes.....|
      00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      00000020  00 00 00 00 00 00 00 00  63 62 63 2d 65 73 73 69  |........cbc-essi|
      00000030  76 3a 73 68 61 32 35 36  00 00 00 00 00 00 00 00  |v:sha256........|
      00000040  00 00 00 00 00 00 00 00  73 68 61 31 00 00 00 00  |........sha1....|

      # cryptsetup luksOpen /dev/dm-1 web2-var
      # mkfs.ext4 /dev/mapper/web2-var
      [..snip..]
      Creating journal (32768 blocks): done
      Writing superblocks and filesystem accounting information: done

      # cryptsetup luksClose /dev/mapper/web2-var
      # hexdump -C /dev/dm-1|head -n5
      00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      *
      00000400  00 40 5d 00 00 88 74 01  66 a0 12 00 17 f2 6d 01  |.@]...t.f.....m.|
      00000410  f5 3f 5d 00 00 00 00 00  02 00 00 00 02 00 00 00  |.?].............|
      00000420  00 80 00 00 00 80 00 00  00 20 00 00 00 00 00 00  |......... ......|

      # cryptsetup luksOpen /dev/dm-1 web2-var
      Device /dev/dm-1 is not a valid LUKS device.

    I have also tried mkfs.ext2 with the same result. Based on setups I've done successfully on Debian and Ubuntu (but not LVM or Ubuntu 12.04), it's hard to see why this is failing.
