Search Results


  • Postfix SMTP auth not working with virtual mailboxes + SASL + Courier userdb

    - by Greg K
    So I've read a variety of tutorials and how-tos and I'm struggling to make sense of how to get SMTP auth working with virtual mailboxes in Postfix. I used this Ubuntu tutorial to get set up. I'm using Courier-IMAP and POP3 for reading mail, which seems to be working without issue. However, the credentials used to read a mailbox are not working for SMTP. I can see from /var/log/auth.log that PAM is being used; does this require a UNIX user account to work? I'm using virtual mailboxes to avoid creating user accounts.

    /var/log/auth.log:

        li305-246 saslauthd[22856]: DEBUG: auth_pam: pam_authenticate failed: Authentication failure
        li305-246 saslauthd[22856]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]

    /var/log/mail.log:

        li305-246 postfix/smtpd[27091]: setting up TLS connection from mail-pb0-f43.google.com[209.85.160.43]
        li305-246 postfix/smtpd[27091]: Anonymous TLS connection established from mail-pb0-f43.google.com[209.85.160.43]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
        li305-246 postfix/smtpd[27091]: warning: SASL authentication failure: Password verification failed
        li305-246 postfix/smtpd[27091]: warning: mail-pb0-f43.google.com[209.85.160.43]: SASL PLAIN authentication failed: authentication failure

    I've created accounts in userdb as per this tutorial. Does Postfix also use authuserdb? What debug information is needed to help diagnose my issue?

    main.cf:

        # TLS parameters
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # SMTP parameters
        smtpd_sasl_local_domain =
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtp_tls_security_level = may
        smtpd_tls_security_level = may
        smtpd_tls_auth_only = no
        smtp_tls_note_starttls_offer = yes
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom

    /etc/postfix/sasl/smtpd.conf:

        pwcheck_method: saslauthd
        mech_list: plain login

    /etc/default/saslauthd:

        START=yes
        PWDIR="/var/spool/postfix/var/run/saslauthd"
        PARAMS="-m ${PWDIR}"
        PIDFILE="${PWDIR}/saslauthd.pid"
        DESC="SASL Authentication Daemon"
        NAME="saslauthd"
        MECHANISMS="pam"
        MECH_OPTIONS=""
        THREADS=5
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"

    /etc/courier/authdaemonrc:

        authmodulelist="authuserdb"

    I've only modified one line in authdaemonrc and restarted the service as per this tutorial. I've added accounts to /etc/courier/userdb via userdb and userdbpw and run makeuserdb as per the tutorial.

    SOLVED: Thanks to Jenny D for suggesting use of rimap to auth against the localhost IMAP server (which reads userdb credentials). I updated /etc/default/saslauthd to start saslauthd correctly (this page was useful):

        MECHANISMS="rimap"
        MECH_OPTIONS="localhost"
        THREADS=0
        OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd -r"

    After doing this I got the following error in /var/log/auth.log:

        li305-246 saslauthd[28093]: auth_rimap: unexpected response to auth request: * BYE [ALERT] Fatal error: Account's mailbox directory is not owned by the correct uid or gid:
        li305-246 saslauthd[28093]: do_auth : auth failure: [user=fred] [service=smtp] [realm=] [mech=rimap] [reason=[ALERT] Unexpected response from remote authentication server]

    This blog post detailed a solution: set IMAP_MAILBOX_SANITY_CHECK=0 in /etc/courier/imapd, then restart the Courier and saslauthd daemons for the config changes to take effect:

        sudo /etc/init.d/courier-imap restart
        sudo /etc/init.d/courier-authdaemon restart
        sudo /etc/init.d/saslauthd restart

    Watch /var/log/auth.log while trying to send email. Hopefully you're good!
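
    As a side note for anyone debugging a similar setup, saslauthd can be tested directly, without Postfix in the loop. A hedged sketch — the username, password and socket path below are placeholders (the socket path assumes the PWDIR used above):

        # Ask saslauthd to verify a virtual-mailbox user against its configured
        # mechanism (PAM or rimap), using the mux socket in PWDIR
        testsaslauthd -u fred -p 'secret' -s smtp \
            -f /var/spool/postfix/var/run/saslauthd/mux

    A reply of 0: OK "Success." means saslauthd accepts the credentials, so any remaining failure can be isolated to the Postfix/Cyrus-SASL side.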

    Read the article

  • I'm confused about these 2 statements about the performance of 32-bit applications on 64-bit Windows 7

    - by metal gear solid
    "Running some 32 bit applications on a 64 bit OS could actually be slower. The additional overheads in running 32 bit software in 64 bit mode could cause a slight degradation in performance. It will take some time for 64 bit software to become the norm."
    Source: http://www.w7forums.com/windows-7-64-bit-vs-32-bit-t484.html

    "That depends. If you're working with large files or running applications that consume a great deal of memory, then 64-bit Windows will typically give you a slight performance advantage over 32-bit Windows running on identical hardware. This is true even when using 32-bit applications."
    Source: http://www.infoworld.com/d/windows/32-bit-windows-7-or-64-bit-windows-7-145?page=0,3

    Which is true? If I go for 64-bit Windows 7, will I see better performance (compared to 32-bit Windows 7) from the copy of Adobe Photoshop I purchased three years ago (I think it is a 32-bit application) and some of my other 32-bit applications?

    Read the article

  • HAProxy - Redirect issue - URI variables?

    - by Justin
    I'm using haproxy 1.5dev3 and I was wondering if there is any possible way to grab URI variables from a request so I can re-append the query string to the end of a redirect URL. What I'm trying to do is redirect from http://www.domain.com/page/example.htm?id=1234567 to http://www.domain.com/frame/newpage.cfm?id=1234567. redirect prefix doesn't work properly, as it tries to append /page/example.htm to the end of the redirect URL. Can I do some sort of rewrite to accomplish this? It would be awesome if you could use URIs and queries as variables for redirection/pool selection like on an F5. Please help... Thanks!
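
    One pattern that comes up for this is sketched below, with heavy caveats: it relies on redirect locations accepting log-format expressions and on the %[query] fetch, which need a newer HAProxy than 1.5dev3 (roughly 1.6 or later), and the frontend/backend names are placeholders:

        frontend www
            bind :80
            acl old_page path /page/example.htm
            # %[query] re-appends whatever query string arrived with the request
            http-request redirect location http://www.domain.com/frame/newpage.cfm?%[query] code 301 if old_page
            default_backend app_pool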

    Read the article

  • .php files see Apache environment variables, .html files don't

    - by dotancohen
    Consider this environment:

        $ cat .htaccess
        AddType application/x-httpd-php .php .html
        SetEnv Foo Bar

        $ cat test.php
        <?php
        echo "Hello: ";
        echo $_SERVER['Foo'];
        echo $_ENV['Foo'];
        echo getenv('Foo');
        ?>

        $ cat test.html
        <?php
        echo "Hello: ";
        echo $_SERVER['Foo'];
        echo $_ENV['Foo'];
        echo getenv('Foo');
        ?>

    This is the output of test.php:

        Hello BarBarBar

    This is the output of test.html:

        Hello

    Why might that be? How might I fix it? Here is phpinfo.php: http://pastebin.com/rgq7up61 and here is phpinfo.html: http://pastebin.com/VUKFNZ36

    If anyone knows where I can host a real webpage instead of just the HTML for one, please let me know and I'll move the content there. Thanks.

    Read the article

  • How to host multiple mail domains with courier?

    - by Dave Vogt
    I had my server set up with generic aliases in /etc/courier/aliases/users, which worked fine. But today I wanted to host some new domains where addresses overlap. What I need is for [email protected] to go to account "dv", but dave@example.com to go to account "d2". So I set up a new file containing fully qualified addresses ([email protected]: dv) instead of the previous dave: dv. But somehow, courier-smtpd no longer accepts mail to these addresses. makealiases -dump prints all the aliases the way they should be, so I'm a bit stuck.

    Read the article

  • Apache2 mod_proxy and multipart POST size

    - by Pietro
    Hi, I have Apache2 configured to proxy all traffic directed to a specific virtual host to a local Tomcat instance. All is good and fine but for multipart posts larger than ~100kb. Such posts fail on the Tomcat end with an exception like SocketTimeoutException. If I connect directly to Tomcat (which listens on a port != 80) then all posts are handled just fine. The Apache virtual host config goes like this:

        NameVirtualHost *
        SetOutputFilter DEFLATE
        <VirtualHost *>
            ServerName foo.bar.com
            ErrorLog c:/wamp/logs/foo_error.log
            CustomLog c:/wamp/logs/foo_access.log combined
            ProxyTimeout 60
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
            ProxyPassReverseCookieDomain localhost bar.com
            ProxyPassReverseCookiePath /foo /
        </VirtualHost>

    I tried browsing the Apache2 and mod_proxy docs but found nothing useful. Any idea why Apache2 refuses to proxy requests bigger than X bytes? Thanks!
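
    Not from the original post, but one knob that is sometimes relevant when large request bodies stall behind mod_proxy is forcing the body to be forwarded with an explicit Content-Length rather than chunked transfer encoding. mod_proxy_http's proxy-sendcl environment variable does this; whether it helps in this particular case is an assumption:

        <VirtualHost *>
            ServerName foo.bar.com
            # Spool the request body and send a Content-Length to the backend
            SetEnv proxy-sendcl 1
            ProxyPass / http://localhost:10080/foo/
            ProxyPassReverse / http://localhost:10080/foo/
        </VirtualHost>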

    Read the article

  • How to block subreddits with BIND9?

    - by user1391189
    Please help me block NSFW subreddits like this one (http://www.reddit.com/r/NSFW/). I would like to keep access to SFW subreddits, but block certain subreddits that are distracting or NSFW. I know how to filter domains (see files below), but how do I apply the filter only to certain subreddits? So far I have set up the following files:

    blocklist.conf:

        zone "adimages.go.com" { type master; file "dummy-block"; };
        zone "admonitor.net" { type master; file "dummy-block"; };
        zone "ads.specificpop.com" { type master; file "dummy-block"; };
        ...

    named.conf:

        options {
            allow-query { 127.0.0.1; };
            allow-recursion { 127.0.0.1; };
            directory "c:\bind\etc";
            notify no;
        };
        zone "." IN {
            type hint;
            file "c:\bind\etc\named.root";
        };
        zone "localhost" IN {
            allow-update { none; };
            file "c:\bind\etc\localhost.zone";
            type master;
        };
        zone "0.0.127.in-addr.arpa" IN {
            allow-update { none; };
            file "c:\bind\etc\named.local";
            type master;
        };
        key "rndc-key" {
            algorithm hmac-md5;
            secret "O5VdbBKKEMzuLYjM60CxwuLLURFA6peDYHCBvZCqjoa6KtL1ggD7OTLeLtnu2jR5I5cwA/MQ8UdHc+9tMJRSiw==";
        };
        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };
        // Blocklist
        include "c:\bind\etc\blocklist.conf";

    dummy-block:

        $TTL 604800
        @   IN  SOA localhost. root.localhost. (
                      2        ; Serial
                 604800        ; Refresh
                  86400        ; Retry
                2419200        ; Expire
                 604800 )      ; Negative Cache TTL
        ;
        @   IN  NS  localhost.
        @   IN  A   127.0.0.1
        *   IN  A   127.0.0.1

    Read the article

  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have set up Varnish in front of Apache. I think I am missing some vital configuration; any tips are welcome. This is my curl result:

        HTTP/1.1 200 OK
        Server: Apache/2.2.16 (Debian)
        Content-Language: en-us
        Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        Cache-Control: s-maxage=60, no-transform, max-age=60
        Content-Type: application/json; charset=utf-8
        Date: Sat, 15 Sep 2012 08:19:17 GMT
        Connection: keep-alive

    My varnishlog:

        13 BackendClose - apache
        13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000
        13 TxRequest b GET
        13 TxURL b /api/v1/events/?format=json
        13 TxProtocol b HTTP/1.1
        13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        13 TxHeader b Host: foobar.com
        13 TxHeader b Accept: */*
        13 TxHeader b X-Forwarded-For: 92.64.200.145
        13 TxHeader b X-Varnish: 979305817
        13 TxHeader b Accept-Encoding: gzip
        13 RxProtocol b HTTP/1.1
        13 RxStatus b 200
        13 RxResponse b OK
        13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT
        13 RxHeader b Server: Apache/2.2.16 (Debian)
        13 RxHeader b Content-Language: en-us
        13 RxHeader b Content-Encoding: gzip
        13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60
        13 RxHeader b Content-Length: 6399
        13 RxHeader b Content-Type: application/json; charset=utf-8
        13 Fetch_Body b 4(length) cls 0 mklen 1
        13 Length b 6399
        13 BackendReuse b apache
        11 SessionOpen c 92.64.200.145 53236 :80
        11 ReqStart c 92.64.200.145 53236 979305817
        11 RxRequest c HEAD
        11 RxURL c /api/v1/events/?format=json
        11 RxProtocol c HTTP/1.1
        11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
        11 RxHeader c Host: foobar.com
        11 RxHeader c Accept: */*
        11 VCL_call c recv lookup
        11 VCL_call c hash
        11 Hash c /api/v1/events/?format=json
        11 Hash c foobar.com
        11 VCL_return c hash
        11 VCL_call c miss fetch
        11 Backend c 13 apache apache
        11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60
        11 VCL_call c fetch deliver
        11 ObjProtocol c HTTP/1.1
        11 ObjResponse c OK
        11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT
        11 ObjHeader c Server: Apache/2.2.16 (Debian)
        11 ObjHeader c Content-Language: en-us
        11 ObjHeader c Content-Encoding: gzip
        11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 ObjHeader c Content-Type: application/json; charset=utf-8
        11 Gzip c u F - 6399 69865 80 80 51128
        11 VCL_call c deliver deliver
        11 TxProtocol c HTTP/1.1
        11 TxStatus c 200
        11 TxResponse c OK
        11 TxHeader c Server: Apache/2.2.16 (Debian)
        11 TxHeader c Content-Language: en-us
        11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie
        11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60
        11 TxHeader c Content-Type: application/json; charset=utf-8
        11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT
        11 TxHeader c Connection: keep-alive
        11 Length c 0
        11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399
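
    One detail worth noting in the log above: the backend replies with Vary: Accept,Accept-Encoding,Accept-Language,Cookie, so any difference in those request headers (cookies in particular) produces a separate cache object and another backend hit. A minimal VCL sketch in Varnish 3 syntax, assuming the /api/ responses are the same for every user and cookies can safely be ignored on that path:

        sub vcl_recv {
            # Assumption: JSON API responses are not per-user,
            # so drop cookies before the cache lookup.
            if (req.url ~ "^/api/") {
                unset req.http.Cookie;
            }
        }

        sub vcl_fetch {
            # Keep the backend's Set-Cookie from making the object uncacheable.
            if (req.url ~ "^/api/") {
                unset beresp.http.Set-Cookie;
            }
        }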

    Read the article

  • Apache configuration for silent rewrite of query strings like codeigniter

    - by jwir3
    CodeIgniter rewrites query strings like the following: http://somedomain.com/index.php?q=something becomes http://somedomain.com/index.php/q/something. I'd like to imitate this behavior on a (very) lightweight website I am developing for a wedding RSVP system. I don't want the bloat of CodeIgniter, nor do I need anything else that it provides. The only thing I'd like is this. Unfortunately, I don't know how to set up a mod_rewrite rule that accomplishes it. I can set up a rule that translates /q/something into ?q=something, but I can't get one that does that without changing the URL the user is viewing. I'm basically looking for something that is, in effect, a "silent" version of the rewrite: it should rewrite q/something to ?q=something, but leave the URL in the user's address bar as q/something. Thanks in advance!
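
    A minimal .htaccess sketch of the "silent" (internal) form: without an [R] flag mod_rewrite maps the path to the query string on the server side, and the address bar keeps /q/something. The parameter name q and the index.php target come from the question; the rest is an assumption:

        RewriteEngine On
        # Internal rewrite only: no [R] flag, so the browser URL is untouched
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^q/(.+)$ index.php?q=$1 [L,QSA]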

    Read the article

  • Cannot access a very specific site from my router

    - by DJDarkViper
    This is a problem for me because this site is important to me: it's MY website. And sadly my email is hosted on my site (which I can't access either). When I try to access my website while connected to my Linksys E3000 router, these days it simply doesn't go through. When I ping it, it's all Request Timed Out, and when I tracert:

        C:\Users\Kyle>tracert blackjaguarstudios.com

        Tracing route to blackjaguarstudios.com [199.188.204.228]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  CISCO26565 [192.168.1.1]
          2    16 ms    15 ms    11 ms  11.4.64.1
          3    11 ms     9 ms    11 ms  rd1cs-ge1-2-1.ok.shawcable.net [64.59.169.2]
          4    20 ms    21 ms    22 ms  66.163.76.98
          5    37 ms    36 ms    35 ms  rc1nr-tge0-9-2-0.wp.shawcable.net [66.163.77.54]
          6   112 ms    84 ms    85 ms  rc2ch-pos9-0.il.shawcable.net [66.163.76.174]
          7    86 ms    89 ms    90 ms  rc4as-ge12-0-0.vx.shawcable.net [66.163.64.46]
          8    90 ms    84 ms    85 ms  eqix.xe-3-3-0.cr2.iad1.us.nlayer.net [206.223.115.61]
          9    97 ms    97 ms    99 ms  xe-3-3-0.cr1.atl1.us.nlayer.net [69.22.142.105]
         10   128 ms   128 ms   126 ms  ae1-40g.ar1.atl1.us.nlayer.net [69.31.135.130]
         11   101 ms    97 ms    96 ms  as16626.xe-2-0-5-102.ar1.atl1.us.nlayer.net [69.31.135.46]
         12   100 ms    97 ms   197 ms  6509-sc1.abstractdns.com [207.210.114.166]
         13     *        *        *     Request timed out.
         14     *        *        *     Request timed out.
         15     *        *        *     Request timed out.
         16     *        *        *     Request timed out.
         17     *        *        *     Request timed out.
         18     *        *        *     Request timed out.
         19     *        *        *     Request timed out.
         20     *        *        *     Request timed out.
         21     *        *        *     Request timed out.
         22     *        *        *     Request timed out.
         23     *        *        *     Request timed out.
         24     *        *        *     Request timed out.
         25     *        *        *     Request timed out.
         26     *        *        *     Request timed out.
         27     *        *        *     Request timed out.
         28     *        *        *     Request timed out.
         29     *        *        *     Request timed out.
         30     *        *        *     Request timed out.

        Trace complete.

        C:\Users\Kyle>

    Shaw Cable is my ISP. Figuring this all had something to do with some setting I made on the router, I reset the thing back to factory defaults. Nope. So I'm at a bit of a loss what to do here, as NO device (computers, laptops, tablets, phones, PS3/360, etc.) can access my site or its features, so it's not just my computer either. But every other site is just fine. When I connect to my neighbour's router, the site comes up just fine. And she's with Shaw as well. What should I do?!

    Read the article

  • Why would ssh fail to expand %h variable in .ssh/config?

    - by Zoran
    Why would ssh fail to expand %h from .ssh/config? This used to work, and still works everywhere except on a RHEL box. I'm looking for what the origin of this could be. Is there a setting somewhere that tells ssh not to expand %h? I have something like this in my .ssh/config:

        Host *.foo
            HostName %h.mydomain.com

    On the RHEL box where this doesn't work, I get this:

        $ ssh -vvvv bar.foo
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
        debug1: Reading configuration data /home/zsimic/.ssh/config
        debug1: Applying options for *.foo
        debug1: Applying options for *.foo
        debug1: Applying options for *
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        ssh: Could not resolve hostname %h.mydomain.com: Name or service not known

    Read the article

  • Cannot create a new domain in an existing active directory forest

    - by Mackenzie Carr
    I have a domain controller set up on Windows Server 2008 R2 (the forest root) and I have another Windows Server 2008 R2 machine (the new domain) on which I want to create a new domain in an existing forest. I get the following error:

        An Active Directory domain controller for the domain mackdev.mackenziecarr.com could not be contacted.
        The error was "no records found for the given DNS query"
        The query was for the SRV record for: _ldap._tcp.dc._msdcs.mackdev.mackenziecarr.com

    I seem to have tried everything, and even tried adding this record to the DNS server of the primary forest. I successfully joined this server to the domain without any issues, but trying to create a new domain under the existing forest is no luck. The forest root's IP address is 192.168.2.20; the server that I am using to try to make a child domain is 192.168.2.21. My ipconfig is as follows:

        IP Address:   192.168.2.21
        Subnet mask:  255.255.255.0
        Gateway:      192.168.2.1
        Primary DNS:  192.168.2.20
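
    As a quick sanity check (a suggestion, not something from the original post), the SRV lookup that dcpromo performs can be reproduced from the candidate child DC, to confirm which DNS server is being asked and whether the forest-root locator record resolves against 192.168.2.20:

        C:\> nslookup -type=SRV _ldap._tcp.dc._msdcs.mackenziecarr.com 192.168.2.20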

    Read the article

  • Folder Redirection - Explorer requires manual refresh

    - by Pete
    Hello, I am having an issue where, when a user's My Documents folder is redirected to a DFS share, Windows Explorer requires a manual refresh after creating a new folder, file, etc. (interestingly, not when making a new briefcase). I have tried a number of MS knowledge base articles, a hotfix and a registry change, all with no success (http://support.microsoft.com/?kbid=823291 ; http://support.microsoft.com/kb/873392). The problem only occurs when going through the My Documents icon. If I map a home drive for the user to the exact same location (i.e. H:\DFS\user\documents), open that drive and make new folders, then there is no problem. Mapping My Documents to H:\ also resolves the issue; however, as we need folder sync and people logging on off site with cached profiles, this is not a workable solution (as H: will not map and there will be no access to their docs). Has anyone managed to figure out a fix for this? Thanks, Pete.

    Read the article

  • Configuring jdbc-pool (tomcat 7)

    - by john
    I'm having some problems with Tomcat 7 when configuring jdbc-pool. I've tried to follow this example: http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency — so I have:

    conf/server.xml:

        <GlobalNamingResources>
            <Resource type="javax.sql.DataSource"
                      name="jdbc/DB"
                      factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/mydb"
                      username="user"
                      password="password" />
        </GlobalNamingResources>

    conf/context.xml:

        <Context>
            <ResourceLink type="javax.sql.DataSource"
                          name="jdbc/LocalDB"
                          global="jdbc/DB" />
        <Context>

    and when I try to do this:

        Context initContext = new InitialContext();
        Context envContext = (Context)initContext.lookup("java:/comp/env");
        DataSource datasource = (DataSource)envContext.lookup("jdbc/LocalDB");
        Connection con = datasource.getConnection();

    I keep getting this error:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:803)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:159)

    Please help. Thanks.
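
    Not part of the original post, but one thing commonly checked alongside this setup: the web application usually also declares the resource in WEB-INF/web.xml so the link is exposed under java:comp/env. A hedged sketch:

        <!-- WEB-INF/web.xml: declare the container-managed DataSource
             so it is visible at java:comp/env/jdbc/LocalDB -->
        <resource-ref>
            <res-ref-name>jdbc/LocalDB</res-ref-name>
            <res-type>javax.sql.DataSource</res-type>
            <res-auth>Container</res-auth>
        </resource-ref>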

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    I'm trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account, and I have created an MX record with the mail server being mail.domain.com and a priority of 20. 1) But the question is, how do I now link this MX record for the domain to my mail server / mail server IP address? 2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain? 3) In Settings > SMTP > Delivery of email, what should my configuration look like?
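
    On the DNS side, the usual shape of the answer to (1) is that an MX record names a host, and a separate A record maps that host to the mail server's IP; the MX never points at an IP directly. A sketch in zone-file syntax (the address is a placeholder):

        ; MX points at a hostname, never directly at an IP address
        domain.com.        IN  MX  20  mail.domain.com.
        ; The A record ties that hostname to the mail server machine
        mail.domain.com.   IN  A   203.0.113.25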

    Read the article

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens anywhere from once every couple of days to a few times a day. So the long and short of it is this:

    - Aluminum MacBook (Non-Pro), 2.4GHz Core2Duo, 4GB DDR3
    - I am using 10.6.6 but I have had this issue since 10.6.0
    - It happens in Firefox, Chrome, and Safari
    - I have flushed my DNS (using the command 'blablabla flush')
    - I am using custom DNS servers, which I hoped would fix it but had no effect*
    - I am running Apache currently but haven't been for most of the time
    - I've reformatted multiple times, always experiencing the issue
    - I am on Cox cable internet, with a Motorola Surfboard & a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only & N only & G+N to no effect
    - It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
    - My PowerBook G4 with Leopard, other Windows XP laptops, & my wired Win7 desktop do not suffer from the issue

    * I recently started using these to escape the awful Cox redirect page on timeouts

    I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites.

    EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading Jquery.com (for example) until I sit and wait for a timeout. /EDIT

    This is my last resort. Please, someone, tell me what is happening. Thank you.

    Read the article

  • Horde complains that Imp is not running

    - by Eric J.
    I'm a mostly-Windows guy tasked with setting up email on an Ubuntu 12.04 instance at AWS, and I hit the following error: when I browse to Horde, after entering my administrative credentials, I get the error message:

        A fatal error has occurred
        imp is not activated.
        Details have been logged for the administrator.

    I am following this quite detailed guide: http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/ — this is happening at Step 20, at the text "Now fire up you web browser and navigate to your server at http://mail.example.com/ to verify that you can log in as the configured administrative mail user." (Of course I used my actual domain.)

    Questions:
    - Where is Horde logging the "details"?
    - Any thoughts on why this might happen? I found Google hits suggesting that php5-mcrypt might be missing, but I verified it is installed and up-to-date in my case.

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically he wants to set up a Django app (made on Django 1.3, but it will be moving to Django 1.4, so I hope it should not really matter which of the two works) on WSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080 for example), and Apache also works correctly. The only problem is with nginx, and it looks like there is something wrong with the nginx / WSGI or Django configuration. His description is as follows:

    The server has been configured according to many tutorials, but unfortunately nginx and uWSGI still do not work with the application.

    ProjectName.py:

        import os, sys, wsgi
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")
        from django.core.wsgi import get_wsgi_application
        application = get_wsgi_application()

    I run uWSGI with the command:

        uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml

    XML file:

        <uwsgi>
            <chdir>/home/projectname</chdir>
            <pythonpath>/usr/local/lib/python2.7</pythonpath>
            <socket>127.0.0.1:8001</socket>
            <daemonize>/var/log/uwsgi/proJectname.log</daemonize>
            <processes>1</processes>
            <uid>33</uid>
            <gid>33</gid>
            <enable-threads/>
            <master/>
            <vacuum/>
            <harakiri>120</harakiri>
            <max-requests>5000</max-requests>
            <vhost/>
        </uwsgi>

    In logs from uWSGI:

        *** no app loaded. going in full dynamic mode ***

    In logs from nginx:

        XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0)
        added /usr/lib/python2.7/ to pythonpath.
        Traceback (most recent call last):
          File "./ProjectName.py", line 26, in <module>
            from django.core.wsgi import get_wsgi_application
        ImportError: No module named wsgi
        unable to load app SCRIPT_NAME=XXX.com|

    Example tutorials that were used:

        http://projects.unbit.it/uwsgi/wiki/RunOnNginx
        https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/

    Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
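
    One observation about the traceback above: django.core.wsgi only exists from Django 1.4 onwards, so an ImportError there is consistent with uWSGI picking up an older Django (e.g. 1.3) on its pythonpath. A hedged sketch of an entry point that works on both versions (module and settings names taken from the post, everything else an assumption):

        # Sketch of a version-tolerant WSGI entry point
        import os

        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")

        try:
            # Django 1.4+
            from django.core.wsgi import get_wsgi_application
            application = get_wsgi_application()
        except ImportError:
            # Django 1.3 fallback
            import django.core.handlers.wsgi
            application = django.core.handlers.wsgi.WSGIHandler()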

    Read the article

  • AWS forwarding email to a gmail account

    - by user2433617
    So I registered a domain name. I then set up a static web page using AWS (S3 and Route 53). Now what I want to do is forward any email I get at that custom domain name to a personal email address I have set up. I can't seem to figure out how to do this. I have these record sets already: A, NS, SOA, CNAME. I believe I have to set up an MX record, but I'm not sure how. Say I have the custom domain contact@custom.com and I want to redirect all email to me@personal.com. The personal email account is a Gmail (Google account) address. Thanks.

    Read the article

  • Reasons why mod_jk wouldn't work and how to trace them

    - by Bozho
    I've been using one server, then I reinstalled everything on another server, and mod_jk stopped working. Here is the situation:

    - Apache 2 sitting "in front"
    - mod_jk used to connect Apache to Tomcat
    - Tomcat 6.0.26 used to serve the actual requests

    I followed this tutorial. The result is:

    - accessing http://mysite.com opens the index.html in /var/www/
    - accessing http://mysite.com:8080/ works OK

    The logs at /var/logs/apache2 show everything is OK:

        [Mon Mar 29 22:01:53.310 2010] [28349:3075389184] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
        [Mon Mar 29 22:01:53 2010] [warn] No JkShmFile defined in httpd.conf. Using default /var/log/apache2/jk-runtime-status
        [Mon Mar 29 22:01:53 2010] [notice] Apache/2.2.9 (Debian) mod_jk/1.2.26 configured -- resuming normal operations

    I compared the server.xml, jk.conf, and sites-enabled/mysite from the new server to those from the old one and they are identical. The domain name is the same (I updated the DNS record today, and it has refreshed successfully). So the question is: what can go wrong? Is there another place where problems would be logged, if such occur?
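
    Since http://mysite.com serves the static /var/www/ index instead of being handed to Tomcat, the usual suspect is the JkMount mapping in the virtual host. This is only a sketch — the worker name and paths are assumptions, not taken from the post:

        <VirtualHost *:80>
            ServerName mysite.com
            # Hand all requests to the Tomcat worker defined in workers.properties
            JkMount /* ajp13_worker
            JkLogFile /var/log/apache2/mod_jk.log
        </VirtualHost>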

    Read the article

  • IIS7 Default Document not working

    - by TooFat
    I have a website running on IIS 7 that has the default document at the web site level set to only index.php. If I right-click on the web site in IIS Manager and select Explore, I see that the index.php file is there. If I just browse to the web site like http://my.site.com I get the default IIS 7 logo with "Welcome" in a bunch of different languages. If I go to http://my.site.com/index.php it brings up the site just fine. I have stopped and started the web site and run iisreset, but still no luck. The Default Document section of web.config looks like this:

        <defaultDocument>
            <files>
                <clear />
                <add value="index.php" />
            </files>
        </defaultDocument>

    What am I missing?

    Read the article

  • postfix specify limited relay domain while allowing sasl-auth relay

    - by tylerl
    I'm trying to set up Postfix to allow relaying under a limited set of conditions:

    - The destination domain is one of a pre-defined list, -or-
    - The client successfully logs in

    Here's the relevant bits o' config:

        smtpd_sasl_auth_enable=yes
        relay_domains=example.com
        smtpd_recipient_restrictions=permit_auth_destination,reject_unauth_destination
        smtpd_client_restrictions=permit_sasl_authenticated,reject

    The problem is that it requires that BOTH restrictions be satisfied, rather than either-or. Which is to say, it only allows relaying if the client is authenticated AND the recipient domain is @example.com. Instead, I need it to allow relaying if either one of the requirements is satisfied. How do I do this without resorting to running SMTP on two separate ports with different rules?

    Note: The context is an outbound-use-only (bound to 127.0.0.1) MTA on a shared web server where all site owners are allowed to relay mail to one of the "owned" domains (not server-local, though), and where a limited set of "trusted" site owners are allowed to relay mail without restriction, provided they have a valid SMTP login.
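
    The either-or behaviour normally comes from putting both permits in the same restriction list, since Postfix walks a list top to bottom and stops at the first PERMIT or REJECT. A sketch of what that tends to look like (dropping the separate smtpd_client_restrictions gate is an assumption about the rest of the setup):

        # Evaluated top to bottom; the first PERMIT wins, so either an
        # authenticated client or an allowed destination domain is enough.
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            permit_auth_destination,
            reject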

    Read the article

  • Apache KeepAlive in child location not working

    - by Mark Beaton
    I'm trying to turn keep-alive connections off for requests to a child folder in Apache, but when I reload the config I get the following error:

        KeepAlive not allowed here

    Here's my vhost config:

        <VirtualHost *:80>
            ServerAdmin mark@example.com
            ServerName example.com
            DocumentRoot /srv/www/mysite
            DirectoryIndex index.html
            <Location /subfolder>
                KeepAlive Off
            </Location>
        </VirtualHost>

    I've tried using <Directory> as well, but no go there either. Any ideas? I'd rather not turn keep-alive off for the whole site...
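
    KeepAlive is only valid at server or virtual-host scope, which is why Apache rejects it inside <Location>. One per-path workaround that is sometimes used is the nokeepalive environment variable, which the core honours when deciding whether to hold the connection open; whether it suits this setup is an assumption:

        # Inside the <VirtualHost> block: flag requests for /subfolder
        # so the server closes the connection after serving them.
        SetEnvIf Request_URI "^/subfolder" nokeepalive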

    Read the article

  • Pidgin not working with Gtalk

    - by Selvakumar Ponnusamy
    I have downloaded the latest Pidgin (version 2.10.6) for Windows and tried to add my Google Talk account to it. It shows a "not authorized" error. I have tried many options given on the net and it's not working for me. Below are the values I have given:

    Basic tab:
        Protocol: XMPP
        Username: <my username>
        Domain: gmail.com
        Password: <my password>, with the "Remember password" check box enabled

    Advanced tab:
        Connection security: Require encryption (default)
        Unchecked "Allow plaintext auth over unencrypted streams" (default)
        Connection port: 5222 (default)
        Connect server: talk.google.com
        File transfer proxies: proxy.eu.jabber.org (default)
        BOSH URL: <empty> (default)

    I have enabled two-step verification for my Gmail account, so I created an application-specific password and used it here. But it's not working. Please help me work out what the problem could be and how to resolve it.

    Read the article
