Search Results

Search found 76098 results on 3044 pages for 'http gdata youtube com'.

Page 484/3044 | < Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >

  • Point dns server to root dns servers [duplicate]

    - by Dhaksh
    This question already has an answer here: What is a glue record? (3 answers) Why does DNS work the way it does? (4 answers) I have set up a custom authoritative-only DNS server using bind9, in a master and slave arrangement. Assume the DNS servers are: ns1.customdnsserver.com [192.168.91.129] ==> Master ns2.customdnsserver.com [192.168.91.130] ==> Slave I will host a few shared hosting websites on my own web server and point the domains in shared hosting at the nameservers above. My question is: how do I tell the root DNS servers about my own authoritative-only DNS server, so that when someone queries www.example.com and that domain's website is hosted on my shared hosting, the query is delegated to my DNS server and www.example.com gets resolved to an IP address?
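
    In practice you never register anything with the root servers directly: you create host (glue) records for ns1/ns2 at your domain registrar, and the .com gTLD servers then publish the delegation and glue for you. A hedged way to verify what the gTLD servers are handing out once the registrar has the records (the particular gTLD server name below is just an example):

        $ dig +norec NS customdnsserver.com @a.gtld-servers.net
        # the AUTHORITY section should list ns1/ns2.customdnsserver.com, and the
        # ADDITIONAL section should carry their glue A records -- note these must
        # be public IP addresses, not 192.168.x.x ones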

    Read the article

  • help with sendmail configuration to send mail through my gmail account?

    - by pradeepa
    This is the sendmail.ini file; what do I need to change now? # Example for a user configuration file # Set default values for all following accounts. defaults logfile "\xampp\sendmail\sendmail.log" # Mercury account Mercury host localhost from postmaster@localhost auth off # A freemail service example account gmail tls on tls_certcheck off host smtp.gmail.com from ****@gmail.com auth on user ****@gmail.com password ******* # Set a default account account default : Mercury
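
    The usual stumbling block with this file is the last line: the default account is still Mercury, so outgoing mail never uses the gmail account at all. A hedged sketch of the change (assuming the masked gmail credentials above are filled in; the port line is only relevant if the tool honours an msmtp-style port option, which this syntax suggests):

        # in the "account gmail" section, Gmail normally wants the submission port
        port 587

        # and at the bottom, make gmail the default account instead of Mercury
        account default : gmail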

    Read the article

  • mod_rewrite for selectors with .html

    - by user1720607
    We have a website where the URL looks something like www.example.com/about.smart.html ("smart" being a selector added on the app server based on the user agent when it is a smartphone device). We need to return a 404 if the user tampers with the URL, as in: www.example.com/about.abc.xyz.smart.html www.example.com/about.smart.abc.html I tried the rule below, but it returns 404 only for the first case and not for the second: RewriteCond %{REQUEST_URI} !^(.*)(-)\.html$ RewriteRule (.*)\.(.*).smart.html$ - [R=404,L] Any pointers on this would be of great help.
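
    A hedged sketch of one way to cover both cases: allow only a single page name before ".smart.html" and send anything else that still contains ".smart" to a 404 (it mirrors the [R=404,L] flags the question already uses and assumes the rules live where RewriteEngine is already on):

        # 404 any URL containing ".smart." unless it is exactly /<page>.smart.html
        RewriteCond %{REQUEST_URI} \.smart\. [NC]
        RewriteCond %{REQUEST_URI} !^/[^.]+\.smart\.html$ [NC]
        RewriteRule .* - [R=404,L]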

    Read the article

  • How can I get WAMP and a domain name to work on a non-standard port?

    - by David Murdoch
    I have read countless articles on setting up a domain on WAMP to listen on a port other than 80; none of them are working. I've got Windows Server 2008 (Standard) with IIS 7 installed and running on port 80 (and 443). I've got WAMP installed with the following configuration. Listen 81 ServerName sub.example.com:81 DocumentRoot "C:/Path/To/www" <Directory "C:/Path/To/www"> Options All MultiViews AllowOverride All # onlineoffline tag - don't remove Order Allow,Deny Allow from all </Directory> localhost:81 works with the above configuration but sub.example.com:81 does not. Just to make sure my firewall wasn't getting in the way I have disabled it completely. My sub.example.com domain is already pointing to my server and works on IIS on port 80. Also, if I disable IIS and change the Apache port from 81 to 80 it works. Yes, I am restarting Apache after each httpd.conf change. :-) I don't need any other domain (or sub domains [I don't even care about localhost]) configured which is why I'm not using a VirtualHost. Any ideas what is going on here? What could I be doing wrong? Update Changing Listen to 80 but keeping ServerName as sub.example.com:81 causes navigation to sub.example.com:80 to work; this just doesn't seem right to me. Could ServerName be ignoring the :port part somehow? netstat -a -n | find "TCP": >netstat -a -n | find "TCP" TCP 0.0.0.0:81 0.0.0.0:0 LISTENING TCP 0.0.0.0:135 0.0.0.0:0 LISTENING TCP 0.0.0.0:445 0.0.0.0:0 LISTENING TCP 0.0.0.0:912 0.0.0.0:0 LISTENING ... TCP 127.0.0.1:81 127.0.0.1:49709 TIME_WAIT ...
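
    Given that localhost:81 answers but sub.example.com:81 does not, and that netstat shows Apache bound to 0.0.0.0:81, the failure is probably outside Apache; the port in ServerName only affects self-referential redirects and does not control what Apache listens on (only Listen does). A hedged pair of checks from another machine:

        # does the name resolve to this server, and is port 81 reachable at all?
        nslookup sub.example.com
        telnet sub.example.com 81

        # if the port is reachable, request the site by name to rule out Host-header issues
        curl -v http://sub.example.com:81/

    If the telnet connection never opens, something upstream (DNS, NAT, or an ISP/provider firewall) is dropping port 81 before Apache ever sees the request.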

    Read the article

  • openldap search acl

    - by Patrick
    I'm trying to write an access control rule for OpenLDAP that allows a user to search with a certain base DN, but only get results back from certain sub-DNs. I've played with lots of different rules but can't get it to work, and I'm not sure it's even possible. For example: I have the user with the DN uid=testuser,ou=people,dc=example,dc=com. I want this user to be able to search with a base of dc=example,dc=com and get back entries in ou=people,dc=example,dc=com. There are lots of other sub-OUs under dc=example,dc=com, but only entries in ou=people should be returned (as a bonus, I'd like only certain attributes to be returned as well). Can this be done?
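
    A rough, untested sketch of the general shape this usually takes (slapd.conf-style access directives; the ordering and the interaction with any existing ACLs would need checking against the OpenLDAP admin guide):

        # grant the test user read access only under ou=people; everything else in
        # the tree stays invisible to it, so a search based at dc=example,dc=com
        # only returns the ou=people entries
        access to dn.subtree="ou=people,dc=example,dc=com"
                by dn.exact="uid=testuser,ou=people,dc=example,dc=com" read
                by * none break
        access to dn.subtree="dc=example,dc=com"
                by dn.exact="uid=testuser,ou=people,dc=example,dc=com" search
                by * none break

    Restricting which attributes come back is done the same way, with an attrs= clause on the relevant "access to" line.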

    Read the article

  • SPF record for Gmail?

    - by Chris
    I have DNS, with a SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers, and also from her GMail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was: Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50]) However, _spf.google.com doesn't list that IP address: $ dig +short _spf.google.com txt "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all" (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from GMail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
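
    Worth a second look at that dig output: 209.85.128.0/17 spans 209.85.128.0 through 209.85.255.255, so it does in fact cover 209.85.215.50, and include:_spf.google.com should authorize that Gmail relay. A hedged sketch of the combined record (the mx mechanism stands in for "our SMTP servers"; swap in ip4: mechanisms if the outgoing hosts are not the MX hosts):

        example.com.   3600   IN   TXT   "v=spf1 mx include:_spf.google.com ~all"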

    Read the article

  • exim4 to relay emails

    - by Matthieu
    I have exim4 installed on a Linux box. The basics work fine and I can send email from that machine to any email address without any problem. I also have a printer/scanner which is capable of sending scans as emails; it needs an SMTP gateway to do that. So I gave it the IP address of that Linux box and changed the configuration a little, but I still cannot get it to work. After I run dpkg-reconfigure exim4-config, here is what I get in /etc/exim4/update-exim4.conf.conf: dc_eximconfig_configtype='internet' dc_other_hostnames='' dc_local_interfaces='127.0.0.1;192.168.2.2' dc_readhost='' dc_relay_domains='mycompanyemail.com' dc_minimaldns='false' dc_relay_nets='192.168.2.0/24' dc_smarthost='' CFILEMODE='644' dc_use_split_config='false' dc_hide_mailname='' dc_mailname_in_oh='true' dc_localdelivery='mail_spool' My problem is that with this configuration, I can only send to addresses @mycompanyemail.com... It says I can use a wildcard, but when I do, the '*' is replaced by whatever filenames are in the directory where I run all that. How can I configure it to be able to send emails to any domain? Or am I doing it wrong? EDIT: here is the part of the log that's causing trouble: 2011-08-03 16:28:18 H=(NPI2D389C) [192.168.2.20] F=<[email protected]> rejected RCPT <[email protected]>: relay not permitted The first part ([email protected]) does not matter. I changed the email address. The point is that if this email is @mycompanyemail.com then everything works fine. Anything else does not work. I could add gmail.com, but I am looking to have any domain working...
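
    For what it's worth, dc_relay_domains controls which destination domains outside hosts may deliver mail to (incoming relay), while dc_relay_nets lists the client networks allowed to relay through this box to anywhere. A hedged sketch of the values that normally let a LAN device such as the scanner send to arbitrary recipients, remembering that the settings only take effect after the config is regenerated:

        # /etc/exim4/update-exim4.conf.conf
        dc_eximconfig_configtype='internet'
        dc_relay_domains=''
        dc_relay_nets='192.168.2.0/24'

        # then regenerate and reload exim:
        #   sudo update-exim4.conf && sudo invoke-rc.d exim4 reload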

    Read the article

  • rkhunter not using root external email on ubuntu

    - by Zen
    I have installed rkhunter on Ubuntu 10.04 LTS. When I try to test the rkhunter report, it doesn't send email to my external root email recipient. I can only get the mail delivered by editing /etc/default/rkhunter and replacing the row REPORT_EMAIL="root" with the desired recipient, REPORT_EMAIL="admin@example.com". These are my config file settings: /root/.forward contains admin@example.com, and /etc/aliases contains root: admin@example.com. But it doesn't work with the root recipient. Any suggestions?
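
    A hedged checklist for this symptom: changes to /etc/aliases only take effect once the alias database is rebuilt, and the .forward file is only consulted when the MTA actually attempts local delivery for root, so the usual first step is:

        # rebuild the alias database after editing /etc/aliases
        sudo newaliases

        # then confirm the alias resolves by mailing root directly
        echo "alias test" | mail -s "alias test" root

    (The mail command comes from the mailutils or bsd-mailx package if it is not already installed.)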

    Read the article

  • Is there a way to import email from the raw email files?

    - by Chris Schmitz
    I have a client who recently switched hosts. When they switched hosts they didn't back up their email before updating their configuration settings, so they lost everything. However, I was able to log in to their old hosting control panel and download their mail folder. I am wondering if there is a way to extract their emails and/or contacts from the files. I'm not sure what type of files they are (there is no extension), but the directory is structured like this: mail/ .Drafts/ .Sent/ .Trash/ cur/ new/ theirdomain.com/ tmp/ .theiremail@theirdomain.com maildir Inside the theirdomain.com folder there is a folder for each account, and inside of that is a folder called "cur" which has a whole bunch of files with names like 1292945327.H169813P25958.uscentral21.myserverhosts.com,S=10117/2,S. If I preview those files I can see the actual email messages inside of them, but I have no idea how to get that information from those files into an email client. Does anyone know of a way to work with these files? Thanks in advance for any insight you can share!
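
    That layout (cur/, new/, tmp/, one plain-text message per file with names like 1292945327.H...,S=10117) is standard Maildir, which several clients and servers can read directly. A hedged sketch of two low-effort options (the paths are placeholders):

        # open one account's Maildir locally with mutt, which auto-detects Maildir
        mutt -f /path/to/mail/theirdomain.com/theiruser/

        # or point a local Dovecot at it and drag the folders across in any IMAP client:
        #   mail_location = maildir:/path/to/mail/theirdomain.com/%n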

    Read the article

  • Looking to use .htaccess to create SEO friendly URLs

    - by Ray
    For SEO purposes, I need someone to modify my .htaccess file. Here's what I need to do: current URL: http://www.abc.com/index.php?page=show_type&ord=1 new URL: http://www.abc.com/amazing Please note that if someone types in http://www.abc.com/amazing, they must be served content from the current URL, but the new URL must stay in the address bar. I tried this and it didn't seem to work: RewriteEngine On RewriteRule ^/?amazing/([^/]+)/([^/]+)/([^/]+)/([^/]+)/([^/]+)/ /index.php?page=show_type&ord=1
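
    The pattern above only matches when five extra path segments follow /amazing/, which is why plain /amazing never rewrites. A hedged sketch of the minimal internal rewrite (no R flag, so /amazing stays in the address bar):

        RewriteEngine On
        RewriteRule ^/?amazing/?$ /index.php?page=show_type&ord=1 [L,QSA]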

    Read the article

  • Managing two domains in one virtual server [on hold]

    - by Buddhika Ariyaratne
    I have a virtual server with Windows Server 2012 on which I need to run two applications for two separate customers. Both applications run on GlassFish on port 8080, at http://localhost:8080/roseth and http://localhost:8080/ruhunu My virtual server provider has given me three IP addresses. How can I assign one address to each application? For example, if a user types www.ruhunu.org (an arbitrary URL), I want to direct it to http://localhost:8080/ruhunu, and www.roseth.org to http://localhost:8080/roseth.
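
    A name-based reverse proxy in front of GlassFish is the usual answer here, so the separate IP addresses are not even required. On Windows that role is typically filled by IIS with ARR/URL Rewrite; the sketch below shows the equivalent idea with an Apache front end purely to illustrate the mapping (mod_proxy and mod_proxy_http must be loaded):

        <VirtualHost *:80>
            ServerName www.ruhunu.org
            ProxyPass        / http://localhost:8080/ruhunu/
            ProxyPassReverse / http://localhost:8080/ruhunu/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.roseth.org
            ProxyPass        / http://localhost:8080/roseth/
            ProxyPassReverse / http://localhost:8080/roseth/
        </VirtualHost>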

    Read the article

  • Qmail does not forward mail to a specific domain

    - by jahufar
    Hi, I have a dedicated hosting account with GoDaddy.com. I've pointed my domain's email to work with Google Apps. The server has qmail running, and it forwards email to all domains just fine except for MY domain (mydomain.com) - it says 550 User xxx not found in mydomain.com. I believe it thinks I've hosted email on the server itself (not Gmail), and it's trying to validate whether xxx@mydomain.com exists on my server (which is not the case, since it's all handled by Google Apps). How do I make it forward mail to all domains? Thank you :) EDIT: I would only need it to forward emails if the connection originates from 127.0.0.1 - which I believe is the default way it's configured. So to clarify: I just need a purely forwarding configuration so my PHP scripts have the ability to send email.
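
    Qmail decides "local or remote" from its control files, so a hedged first check is whether mydomain.com is still listed as local there; if it is, qmail looks for the mailbox on the box itself instead of looking up the Google MX records:

        # domains listed in these files are treated as locally hosted
        grep -n 'mydomain.com' /var/qmail/control/locals \
                               /var/qmail/control/rcpthosts \
                               /var/qmail/control/virtualdomains

        # after removing the domain from locals/virtualdomains, restart qmail-send,
        # e.g. under daemontools:
        #   svc -t /service/qmail-send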

    Read the article

  • Intermittent 404 on select assets, LAMP stack

    - by Tom Lagier
    We have a LAMP stack WordPress server that is serving most assets correctly. However, one plugin's CSS file and several images are returning soft 404s roughly 20% of the time. I can't find any reference to the 404 in the access logs, but the browser is definitely receiving a 404 response from somewhere (WordPress, I would assume). When I use an alias URL that does not match the site URL but does resolve to the asset path, the resource loads correctly 100% of the time. However, using the site url only resolves for the select, problematic assets 20% of the time. You can test one of the problematic assets here: http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg However the alias link always resolves correctly: http://mr-eco.wordpress.promocampaigns.com/wp-content/uploads/2014/05/zero-cost.jpg Stranger, if I attempt to access outdated content that definitely does not exist on the server, at the live URL it returns the content roughly 50% of the time. Using the alias link, it 404s 100% of the time - the correct behavior. Error log and PHP error log are clean. A sample access log (pulled from grep 'zero-cost.jpg' /var/log/httpd/mr-eco-access_log) from several refreshes of the live direct link (where I am not seeing any 404's): 10.166.202.202 - - [28/May/2014:20:27:41 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:42 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.166.202.202 - - [28/May/2014:20:27:43 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 304 - 10.176.201.37 - - [28/May/2014:20:27:56 +0000] "GET /wp-content/uploads/2014/05/zero-cost.jpg HTTP/1.1" 200 57027 Chrome's dev tools list the following network activity before displaying 404 page content: zero-cost.jpg /wp-content/uploads/2014/05 GET 404 Not Found text/html Other 15.9 KB 73.2 KB 953 ms 947 ms My Apache configuration is standard, I've listed the virtual host entry and .htaccess file below. I can provide other parts of Apache config if necessary. Virtual host: <VirtualHost *:80> DocumentRoot /var/www/public_html/mr-eco.wordpress.promocampaigns.com ServerName www.mreco.org ServerAlias mreco.org mr-eco.wordpress.promocampaigns.com ErrorLog logs/mr-eco-error_log CustomLog logs/mr-eco-access_log common <Directory /var/www/public_html/mr-eco.wordpress.promocampaigns.com> AllowOverride All SetOutputFilter DEFLATE </Directory> </VirtualHost> .htaccess: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress I have checked for multiple A records and can confirm that there is a single A record pointing at the domain: ;; ANSWER SECTION: mreco.org. 60 IN A 50.18.58.174 I'm fairly new to systems administration, and at a complete loss as to what could cause this. In the past, inconsistently 404ing assets have been because of out-of-sync instances behind a load balancer. In this case, it is a single instance behind the load balancer. Because of the inconsistency, it feels like a caching issue. We don't make use of Apache caching, and as far as I know WordPress should not be caching either.
    What I've done so far: reset WordPress permalinks, disabled WordPress plugins, re-generated the WordPress .htaccess file, swapped the ServerName and ServerAlias directives, cleared the browser cache, confirmed the disk location of the resources, checked the PHP, access, and error logs, and confirmed the DNS setup is correct (can post if necessary). I'm at a total loss. Thanks for helping me out!
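
    Given that the 404s never show up in this host's access log, a hedged next step is to find out who is actually answering them - the load balancer, a cache in front of it, or some other backend - by watching the response headers over repeated requests:

        # log the status line plus any identifying headers for 20 requests in a row
        for i in $(seq 1 20); do
          curl -s -o /dev/null -D - "http://www.mreco.org/wp-content/uploads/2014/05/zero-cost.jpg" \
            | grep -E '^HTTP/|^Server:|^Via:|^X-Cache'
          echo ---
        done

    If the 404 responses carry a different Server/Via fingerprint than the 200s, the answer is coming from something other than this Apache instance.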

    Read the article

  • Can't login via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04LTS instance on AWS EC2 and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer ssh into my VM. It isn't accepting my ssh key and my password is also rejected. The VM is running, reachable, and SSH is started. The problem seems to be about the authentication part. SSH has been the only way for me to access that VM. What are my options? ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected].com OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /home/ubuntu/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to www.hostname.com [37.37.37.37] port 22. debug1: Connection established. debug1: identity file .ssh/sos.pem type -1 debug1: identity file .ssh/sos.pem-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33 debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key. debug1: Found key in /home/ubuntu/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: .ssh/sos.pem debug1: read PEM private key done: type RSA debug1: Authentications that can continue: publickey,password debug1: Next authentication method: password [email protected].com's password: debug1: Authentications that can continue: publickey,password Permission denied, please try again.
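
    Since the instance is on EC2, the usual recovery path is to repair it offline rather than fight the login: stop the instance, attach its root EBS volume to a healthy instance, and inspect what the 12.10 upgrade changed. A hedged sketch (the device name /dev/xvdf1 and the mount point are assumptions that depend on how the volume gets attached):

        sudo mkdir -p /mnt/broken
        sudo mount /dev/xvdf1 /mnt/broken
        sudo less /mnt/broken/var/log/auth.log                   # why sshd rejected the key
        sudo less /mnt/broken/etc/ssh/sshd_config                # PasswordAuthentication, AllowUsers, etc.
        sudo less /mnt/broken/home/ubuntu/.ssh/authorized_keys   # is the sos.pem public key still there?

    After fixing the config or re-adding the public key, detach the volume, reattach it as the root device of the original instance, and start it again.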

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows: server { listen 127.0.0.1:8080; server_name .somedomain.com; root /var/www/somedomain.com; access_log /var/log/nginx/somedomain.com-access.nginx.log; error_log /var/log/nginx/somedomain.com-error.nginx.log debug; location ~* \.php.$ { # Proxy all requests with an URI ending with .php* # (includes PHP, PHP3, PHP4, PHP5...) include /etc/nginx/fastcgi.conf; } # all other files location / { root /var/www/somedomain.com; try_files $uri $uri/ ; } error_page 404 /errors/404.html; location /errors/ { alias /var/www/errors/; } #this loads custom logging configuration which disables favicon error logging include /etc/nginx/drop.conf; } This domain is a simple STATIC HTML site just for some testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere on internal redirects. Any help would be much appreciated.
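
    A hedged guess at the loop: try_files has no terminal fallback, so a missing file falls through to error_page, and if /var/www/errors/404.html itself is missing (or resolves back into a 404), nginx keeps re-dispatching internally until it gives up. One small change usually makes the failure mode explicit:

        location / {
            # =404 ends the request with a plain 404 instead of re-entering the
            # location/error_page machinery when nothing matches
            try_files $uri $uri/ =404;
        }

    It is also worth confirming that /var/www/errors/404.html actually exists, since an error_page URI that itself 404s is the classic source of "rewrite or internal redirection cycle" messages.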

    Read the article

  • Recommended apps for securing/protecting a new desktop machine install?

    - by Eddie Parker
    I'm hoping to harness the collective tips of superuser to gather recommended apps/configurations to keep a new desktop clean, virus free, and hopefully lower software rot. I ask because I've recently come across tools like dropbox, deepfreeze, returnil, etc, and I'm curious what other ones are out there to protect a new box. I personally am interested in Windows, but feel free to comment on whatever OS you'd like, freeware or otherwise. Ideally specify the OS in your answer(s). One answer per program please. Then, rather than duplicate posts, vote for the program if it is already listed. UPDATE: It's been noted that there are other questions similar to this one [1], so I'd ask that these answers focus on security and protection. [1] Related questions: http://superuser.com/questions/1241/what-are-some-must-have-windows-programs http://superuser.com/questions/1191/what-are-some-must-have-mac-os-x-programs http://superuser.com/questions/1430/must-have-linux-software http://superuser.com/questions/3855/must-have-networking-security-tools

    Read the article

  • DNS lookup of GTLD servers using dig

    - by iamrohitbanga
    I ran the following command on Linux: >> dig . I got the following response: ;; AUTHORITY SECTION: . 281 IN SOA A.ROOT-SERVERS.NET. NSTLD.VERISIGN-GRS.COM. 2010032400 1800 900 604800 86400 Why does the response not contain the IP address of the root server? What do the numbers at the end of the reply mean? One of them is probably (definitely) a date. Why does it report two root servers, a.root and nstld.verisign? When I send the following query: dig com. ;; AUTHORITY SECTION: com. 51 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1269425283 1800 900 604800 86400 again I do not get the IP addresses. When I query for the gTLD server specified, I can get the IP address. Why is the response of dig net. the same as that of dig com., except that instead of 51 we have 19 in the response?
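
    A short, hedged unpacking of that output: dig defaults to an A query, and "." and "com." have no A records, so the resolver returns the zone's SOA in the authority section instead. The two names in the SOA are not two servers - the first (a.root-servers.net / a.gtld-servers.net) is the zone's primary master, the second (nstld.verisign-grs.com) is the responsible-party mailbox written with a dot instead of @. The trailing numbers are the serial (2010032400, date-based), refresh, retry, expire and negative-caching TTL; the 51 vs 19 difference is just the remaining cache TTL of the record, not part of the SOA itself. To get actual server names and addresses, ask for NS records:

        dig . NS                          # names of the root servers
        dig +short a.root-servers.net A   # address of one of them
        dig com. NS                       # names of the .com gTLD servers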

    Read the article

  • Apache: Redirect entire virtual host to a URL

    - by DrStalker
    Centos 5.2, Apache 2.2.3 I want to configure Apache to redirect any URL under mail.mydomain.com to the single URL https://mail.google.com/a/mydomain.com How can I set this up with Apache? I have LoadModule rewrite_module modules/mod_rewrite.so but I'm not sure how to actually use this and I can't find a really simple example; I assume I set up a new virtual host with some form of rewrite directive, but how is this done?
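
    For a blanket "send everything here to one URL", the simpler mod_alias Redirect is usually enough and no mod_rewrite is needed. A hedged sketch of the vhost (assuming name-based virtual hosts are already in use on port 80):

        <VirtualHost *:80>
            ServerName mail.mydomain.com
            # regex form so every path collapses onto the single target URL
            RedirectMatch permanent ^/ https://mail.google.com/a/mydomain.com
        </VirtualHost>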

    Read the article

  • mod_rewrite "Request exceeded the limit of 10 internal redirects due to probable configuration error."

    - by Shoaibi
    What i want: Force www [works] Restrict access to .inc.php [works] Force redirection of abc.php to /abc/ Removal of extension from url Add a trailing slash if needed old .htaccess : Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / ### Force www RewriteCond %{HTTP_HOST} ^example\.net$ RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301] ### Restrict access RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC] RewriteRule .* - [F,L] #### Remove extension: RewriteRule ^(.*)/$ /$1.php [L,R=301] ######### Trailing slash: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_URI} !(.*)/$ RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L] </IfModule> New .htaccess: Options +FollowSymLinks <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / ### Force www RewriteCond %{HTTP_HOST} ^example\.net$ RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301] ### Restrict access RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC] RewriteRule .* - [F,L] #### Remove extension: RewriteCond %{REQUEST_FILENAME} \.php$ RewriteCond %{REQUEST_FILENAME} -f RewriteRule (.*)\.php$ /$1/ [L,R=301] #### Map pseudo-directory to PHP file RewriteCond %{REQUEST_FILENAME}\.php -f RewriteRule (.*) /$1.php [L] ######### Trailing slash: RewriteCond %{REQUEST_FILENAME} -d RewriteCond %{REQUEST_FILENAME} !/$ RewriteRule (.*) $1/ [L,R=301] </IfModule> errorlog: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/ Rewrite.log: http://pastebin.com/x5PKeJHB
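
    The loop in the new rules comes from the extension-stripping redirect and the "map pseudo-directory to PHP file" rewrite feeding each other: /abc.php becomes /abc/, which is rewritten back to /abc.php, which matches the stripping rule again. A hedged sketch of the usual fix - only strip ".php" when it appeared in the client's original request line, so internal rewrites cannot re-trigger it (assumes the rules sit in the document root's .htaccess):

        # external redirect: /abc.php as typed by the client -> /abc/
        RewriteCond %{THE_REQUEST} \s/+([^.?\s]+)\.php[\s?] [NC]
        RewriteRule ^ /%1/ [R=301,L]

        # internal rewrite: /abc or /abc/ -> abc.php when that file exists
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/?$ $1.php [L]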

    Read the article

  • Apache misbehaving (returning 404s)

    - by OC2PS
    CentOS 6.4 64-bit Apache 2.4.6 PHP-FPM 5.5.4 Homepage from root loads fine http://csillamvilag.com But all other pages return 404 (CMS is WordPress). I am also able to access and log into WordPress backend. Additionally, Menalto Gallery 3 seems to be loading ok http://csillamvilag.com/kepek/ but all OpenCart pages return 404 http://csillamvilag.com/shop/ or http://csillamvilag.com/shop/hu/ Apache is running as user apache. All relevant WordPress and OpenCart files are owned by user apache. I have a suspicion that it might be a rewrite issue, but I checked .htaccess for both WordPress and OpenCart, and they look ok. e.g. WordPress/root .htaccess is: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule>
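
    A hedged first check on Apache 2.4.6: "home page works, every pretty permalink 404s" is the classic signature of the .htaccess rewrite rules being ignored, either because mod_rewrite is not loaded or because the directory's AllowOverride is still None (the 2.4 default). Something along these lines in the vhost or Directory block (the path is a placeholder for the real DocumentRoot):

        <Directory "/var/www/csillamvilag.com">
            AllowOverride All
            Require all granted
        </Directory>

        # and confirm the module is actually loaded:
        #   httpd -M | grep rewrite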

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?
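
    One hedged thing to verify in this Nginx/Passenger setup: auth.conf only grants those paths to the node's own certname ("allow $1"), and Puppet only learns the certname if it can read the client DN and verification result from the headers the vhost sets with passenger_set_cgi_param. The fact that the 403s identify the agent by IP rather than by certname suggests the master is treating the requests as unauthenticated. In puppet.conf on the master that mapping is controlled by these settings, whose values must match the header names used in the nginx config:

        [master]
        ssl_client_header = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY

    After changing them, restart the Passenger application (touch the app's tmp/restart.txt or reload nginx) and re-run the agent; any remaining denials should then show the certname instead of the IP.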

    Read the article

  • sendmail user unknown - debian lenny

    - by Rimian
    My PHP mail() function just stopped working a short while ago; it has started returning FALSE. I am not much of a sysadmin, so please forgive my ignorance. I set the php.ini sendmail_path option to "sendmail_path = /usr/sbin/sendmail -t -i" and restarted Apache. Then I learnt how to test sendmail like so: sudo /usr/sbin/sendmail -bv mail@example.com mail@example.com... deliverable: mailer esmtp, host example.com., user mail@example.com The example email is a real mailbox. I have also seen "user unknown" messages in the mail log. Can anyone please help me debug this? Cheers, Rim
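
    A hedged set of places to look next on a Debian lenny sendmail box, since the -bv test already says the recipient is deliverable; the interesting part is which address the "user unknown" messages are actually about:

        sudo tail -n 100 /var/log/mail.log               # the exact rejected address and why
        cat /etc/mail/local-host-names                   # domains sendmail considers local
        sudo /usr/sbin/sendmail -bv someuser@localhost   # sanity-check a local address too

    One common culprit is the envelope sender (often www-data@hostname for PHP) being the address that fails rather than the recipient, so it is worth checking both in the log.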

    Read the article

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail would be relevant). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond - such as my personal accounts etc. Now I have no idea what is causing this, I thought that all it takes is for the system to fetch mail through our exchange server and be able to deliver in the same way. However, that hasn't been the case. I have found another server setup in a similar fashion (CentOS 5, Request Tracker but using Sendmail), however it is a dated server and whoever's built it has kindly left no documentation on how it works, making it a pain to use that as a reference system! :) At one point, I was told I need to set up a relay between the local server's email add and our AD server but this didn't seem to work. Sorry, I know next to nothing about mailservers, my colleagues nothing about Linux so it's a hard one for me. Thank you! EDIT: Result of postconf -N with details masked =) alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases append_dot_mydomain = no biff = no config_directory = /etc/postfix inet_interfaces = all mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost myhostname = myhost.mydomain.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 myorigin = /etc/mailname readme_directory = no recipient_delimiter = + relayhost = EXCHANGE IP smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes Sample log message: Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay= RELAY IP :25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for foo@bar.com (in reply to RCPT TO command))
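
    The log line is the key detail: the 550 "Unable to relay" comes from the Exchange smarthost (RELAY IP), not from Postfix, so local-domain mail works (Exchange accepts mail for its own domains) while external recipients are refused. The fix is therefore on the Exchange side - a receive connector that permits this host to relay externally - or authenticated relay from Postfix. A hedged sketch of the authenticated variant in /etc/postfix/main.cf (the host name and credentials are placeholders):

        relayhost = [exchange.example.local]:25
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        # /etc/postfix/sasl_passwd:  [exchange.example.local]:25  DOMAIN\user:password
        # then:  postmap /etc/postfix/sasl_passwd && postfix reload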

    Read the article

  • Possible to direct naked domain to external IP

    - by Luke
    So I found this post: configure Bind to have a custom domain on tumblr and I was trying to ask a related question: would it be possible to set up an A record pointing traffic for domain.com to Tumblr, and feed.domain.com to the IP address of my choice? In other words, by setting up a naked-domain A record pointing at Tumblr's IP, will I inherently lose traffic to feed.domain.com? Can I write another A record for the specific subdomains I want to point to my server? I hope this makes sense.
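
    Yes - the apex A record and a subdomain's A record are independent, so pointing the naked domain at Tumblr does not affect where feed.domain.com goes. A hedged zone-file sketch (the Tumblr address shown is the one their documentation published at the time and should be verified; 203.0.113.10 is a placeholder for your own server):

        domain.com.        IN  A      66.6.44.4             ; naked domain -> Tumblr
        www.domain.com.    IN  CNAME  domains.tumblr.com.   ; if www should also be Tumblr
        feed.domain.com.   IN  A      203.0.113.10          ; your server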

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it. Goals: Create a 'road-warrior' capable star shaped virtual LAN for consultants who spend the majority of their time on client sites, and who's firm has no physical network or servers. Enable CIFS access to a cloud-server based installation of Alfresco Allow Eventual implementation of some form of single-sign-on ( OpenLDAP server ) access to Alfresco and other server applications implemented in the future Given: All Servers will live in the public internet cloud (Rackspace Cloud Servers) OpenVPN Server will be a Linux disto, probably Ubuntu 9.x, installed on same server as Alfresco (at least to start) Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISP, using their company laptops or personal home desktops. Based on my Research thus far, to accomplish this, I'll need: OpenVPN with Bridging Enabled to create a star shaped "virtual" LAN http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html A Road Warrior Network Configuration, as described in this Shorewall article (lower down the page) http://www.shorewall.net/OPENVPN.html Configure bridge addressesing (probably DHCP) http://openvpn.net/index.php/open-source/faq.html#bridge-addressing Configure CIFS / Samba to accept VPN IP address http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel Set up Client software, with keys configured for access (potentially through a OpenVPN-Sa client portal) http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
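
    As a concreteness check on the OpenVPN piece of the plan, here is a hedged sketch of what the bridged ("tap") server side usually looks like (addresses and file names are placeholders, and the bridge itself - br0 enslaving eth0 and tap0 - still has to be created separately as described in the linked ethernet-bridging guide):

        port 1194
        proto udp
        dev tap0
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        # hand bridge clients addresses from the LAN range: gateway, netmask, pool
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        keepalive 10 120
        persist-key
        persist-tun

    One design note: bridging is only needed if clients truly require layer-2 features such as broadcast/NetBIOS discovery; for plain CIFS access to a known host name, a routed "dev tun" setup is usually simpler to get working.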

    Read the article
