Search Results

Search found 11453 results on 459 pages for 'apache axis'.

Page 416/459 | < Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options but I'm not sure which would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install to keep the VM complexity down. The options I've found so far are:

      - syslogd
      - syslog-ng
      - rsyslog
      - syslogd/syslog-ng/rsyslog forwarding to logstash/ElasticSearch
      - a logstash agent in each log "client", sending to Redis/logstash/ElasticSearch

    and all sorts of permutations of the above. What's the most resilient and lightweight option from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. I would also still like to keep local logging, and the rotation/retention provided by logrotate, in place. Any ideas/suggestions or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
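
    On the resilience point, rsyslog can buffer to disk when the central server is unreachable, so a client never blocks on logging. A minimal sketch for the client side (server name and queue file name are placeholders):

      # /etc/rsyslog.d/60-forward.conf -- forward everything, buffer locally on failure
      $ActionQueueType LinkedList        # in-memory queue...
      $ActionQueueFileName fwdqueue      # ...spilling to disk under this name
      $ActionQueueMaxDiskSpace 100m      # cap the on-disk buffer
      $ActionQueueSaveOnShutdown on      # persist queued messages across restarts
      $ActionResumeRetryCount -1         # retry forever instead of dropping
      *.* @@logserver.example.com:514    # @@ = TCP; a single @ would be UDP

    Local files and logrotate keep working because these directives only affect the forwarding action.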

    Read the article

  • Internal/External Moodle - DNS

    - by Chief17
    Network diagram (not reproduced here): I have a Moodle (a VLE) setup that I want to be internally and externally accessible. The green route on the diagram is the route I would like the traffic to take when the user is inside the LAN, and the red route is seemingly what it does take. The website has a domain name (like most websites do). From the user PC, if I ping the domain name, I get the internal IP of the webserver (because of a hosts file entry); if I nslookup the domain name I also get the internal IP of the webserver (because of an A record on my DNS server). Running the same two commands on the webserver gives me the webserver's external IP (going well so far). If I use PHP's gethostbyname() on the Moodle website with the domain name as a parameter (getting PHP/Apache to resolve the hostname), it returns the external IP of the webserver (good news, DNS seems to be doing what I want it to). All things so far seem to be going well. The only thing that is confusing me, and preventing the Moodle single sign-on from working, is the fact that if I get Moodle to show my IP address, it says that it is an external one (outside my NATting firewall) when it should show an internal IP. This is the issue; any ideas on how to go about resolving it? Any ideas on tests I can perform (I have also tried a tracert, and the request goes directly to the webserver)? Anything? Thanks all!
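
    A quick way to pin down split-horizon behaviour is to ask each resolver directly and compare answers, from both the user PC and the webserver. A small sketch (names and resolver addresses are placeholders):

      nslookup moodle.example.com          # whatever the configured resolver says
      dig @192.168.0.1 moodle.example.com  # ask the internal DNS server directly
      dig @8.8.8.8 moodle.example.com      # ask an external resolver

    If both resolvers answer as expected, the remaining suspect is the return path: traffic that hairpins through the firewall's external interface will arrive at Moodle carrying a post-NAT source address.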

    Read the article

  • Rewriting HTML links with ModProxyPerlHtml

    - by Juancho
    I'm trying to set up an Apache reverse proxy using mod_proxy and ModProxyPerlHtml. This is my scenario:

      Domain for the proxy: http://www.myserver.com/
      Destination server (the one behind the proxy): http://myserver.foo.com/myapp/

    The idea is to map http://www.myserver.com/ to http://myserver.foo.com/myapp/. Note that the path on the proxy is / and on the destination server it is /myapp/. All of the examples I can find on the net (like the one in the official ModProxyPerlHtml documentation) are the other way around, i.e. path /myapp/ on the proxy and / on the destination server. This is my current config, which doesn't work:

      ProxyPass / http://myserver.foo.com/myapp/
      ProxyPassReverse / http://myserver.foo.com/myapp/
      PerlInputFilterHandler Apache2::ModProxyPerlHtml
      PerlOutputFilterHandler Apache2::ModProxyPerlHtml
      SetHandler perl-script
      PerlSetVar ProxyHTMLVerbose "On"
      LogLevel Info
      <Location />
          # ProxyPassReverse /myapp/
          PerlAddVar ProxyHTMLURLMap "/myapp/ /"
          PerlAddVar ProxyHTMLURLMap "http://myserver.foo.com /"
      </Location>

    The examples put ProxyPassReverse inside the Location directive, but in my case it only works when it is outside. With this configuration the links aren't being replaced as they should be; my guess is that the Location isn't being matched, so the rewrite rules aren't being applied. The error log only shows that it uncompresses the content and searches it, but finds nothing to replace:

      [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Uncompressing text/html; charset=UTF-8, Content-Encoding: gzip
      [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\/javascript|text\/html|text\/css|text\/xml|application\/.*javascript|application\/.*xml)/is
      [Tue Nov 13 08:42:05 2012] [warn] [ModProxyPerlHtml] Compressing output as Content-Encoding: gzip
      [Tue Nov 13 08:42:06 2012] [warn] [ModProxyPerlHtml] Content-type 'text/html; charset=UTF-8' match: /(text\/javascript|text\/html|text\/css|text\/xml|application\/.*javascript|application\/.*xml)/is

    What could be wrong?
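
    When debugging an HTML-rewriting proxy it is worth ruling out compression getting between the filter and the markup. A quick check is to fetch a page through the proxy without gzip and inspect the links (hostname as in the question):

      curl -s -H 'Accept-Encoding: identity' http://www.myserver.com/ | grep -o 'href="[^"]*"' | sort -u

    If links only come back unrewritten when gzip is in play, a common workaround is to stop the backend compressing in the first place, e.g. with mod_headers in the proxy vhost: RequestHeader unset Accept-Encoding.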

    Read the article

  • Installing Zenoss on Ubuntu raises a "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss, and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. To install Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst. When it tries to run the tests, this error occurs:

      -------------------------------------------------------
       T E S T S
      -------------------------------------------------------
      Running org.zenoss.utils.ZenPacksTest
      Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
      Running org.zenoss.utils.ZenossTest
      Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

      Results :

      Tests in error:
        testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
        testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
        testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

      Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

      [INFO] ------------------------------------------------------------------------
      [INFO] Reactor Summary:
      [INFO]
      [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
      [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
      [INFO] Zenoss Jython Distribution ........................ SKIPPED
      [INFO] ------------------------------------------------------------------------
      [INFO] BUILD FAILURE
      [INFO] ------------------------------------------------------------------------
      [INFO] Total time: 40.586s
      [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
      [INFO] Final Memory: 16M/60M
      [INFO] ------------------------------------------------------------------------
      [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
      [ERROR]
      [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
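
    ~/.bashrc is only sourced by interactive shells, so a variable exported there may not reach every process the installer spawns. A simple way to rule that out is to set ZENHOME explicitly in the installing shell (paths as in the question):

      su - zenoss
      export ZENHOME=/usr/local/zenoss           # set in this shell, not just in ~/.bashrc
      cd ~/zenoss-4.2.0/inst
      ZENHOME=/usr/local/zenoss ./install.sh     # belt and braces: also pass it inline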

    Read the article

  • How to send mail with PHP [migrated]

    - by roth66
    My little problem is with the mail() function in PHP: it doesn't want to send emails, to my local server or anywhere else. I don't think the function was ever able to send mail to addresses like [email protected]. So I've installed a mail server (hMailServer), installed a client (DreamMail), and installed sendmail.exe (actually unzipped it into a folder, then set sendmail_path in php.ini to point to it). After countless trials and errors, it still doesn't work. My system comprises an Apache 2.2 server and PHP (the latest version, I think 5.3 or something), running on Windows. To head off the usual questions (did you make rules in your firewall, etc.): there aren't any connectivity issues, everything is set to "local" (localhost), and ports 25, 110 and 143 are all open. And, after a few days of fiddling with my brand new mail server, I managed to make it work. The DreamMail client has a test that checks its connections, and according to it the SMTP and POP3 connections are all successful; it even sends a test email. So that part works. The problem remains: the PHP mail() function. And I really need it, since my website has a contact form that is useless right now. I've also checked the form itself, and it seems to be alright.
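
    For reference, on Windows PHP's mail() speaks SMTP itself using the settings below, but only when sendmail_path is not set; if sendmail_path is set it takes precedence and the SMTP settings are ignored. A minimal sketch pointing mail() at the local hMailServer (the from address is a placeholder):

      ; php.ini
      ;sendmail_path =           ; leave unset on Windows so the SMTP settings apply
      SMTP = localhost
      smtp_port = 25
      sendmail_from = me@example.com

    After editing php.ini and restarting Apache, a one-liner test from the command line is: php -r "var_dump(mail('me@example.com', 'test', 'hello'));"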

    Read the article

  • cannot commit svn with dav on ubuntu

    - by hiddenkirby
    So there are several similar questions on Server Fault, but the solution is still eluding me. I am running Subversion on Ubuntu 9.04, through Apache 2.2.x. I get

      Commit failed (details follow):
      Can't make directory '/home/kirb/svn/dav/activities.d': Permission denied

    when I attempt to commit. It is definitely a permissions issue, but how to fix it is still eluding me. My repository is in /home/kirb/svn. http://serverfault.com/questions/61573/svn-commit-error says to chgrp, but I don't seem to be able to. All the Apache DAV stuff seems to be working, though; I can access my repository just fine through a browser. Apologies if I am missing something simple here. Thanks in advance, Kirb

    Additional edit: I am not able to sudo chgrp on the directory at all:

      sudo chgrp -R www-data /home/kirb/svn; chmod -R g+rwx /home/kirb/svn
      [sudo] password for kirb:
      chmod: changing permissions of `/home/kirb/svn': Operation not permitted
      chmod: changing permissions of `/home/kirb/svn/format': Operation not permitted
      chmod: changing permissions of `/home/kirb/svn/conf': Operation not permitted
      chmod: cannot read directory `/home/kirb/svn/conf': Permission denied
      chmod: changing permissions of `/home/kirb/svn/locks': Operation not permitted
      chmod: cannot read directory `/home/kirb/svn/locks': Permission denied
      chmod: changing permissions of `/home/kirb/svn/db': Operation not permitted
      chmod: cannot read directory `/home/kirb/svn/db': Permission denied
      chmod: changing permissions of `/home/kirb/svn/README.txt': Operation not permitted
      chmod: changing permissions of `/home/kirb/svn/hooks': Operation not permitted
      chmod: cannot read directory `/home/kirb/svn/hooks': Permission denied
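
    One detail in the transcript above: in a command of the form sudo cmd1; cmd2, only cmd1 runs as root; everything after the semicolon runs as the ordinary user again, which is why the chmod half failed. Running both steps under sudo avoids that:

      sudo chgrp -R www-data /home/kirb/svn
      sudo chmod -R g+rwx /home/kirb/svn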

    Read the article

  • dovecot/postfix: can send & receive via Webmin, but SquirrelMail and Outlook fail to connect

    - by Jonathan
    I have just finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (I can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

      Connected to xxx.xxx.xx.xxx.
      Escape character is '^]'.
      +OK Dovecot ready.
      USER mailtest
      +OK
      PASS *********
      +OK Logged in.
      -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
      Connection closed by foreign host.

    which further logs the following errors:

      dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
      dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
      dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
      dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
      dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work :( I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
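
    Given the stat() permission errors on the maildir, ownership and the configured mail location are the first things worth checking (note the non-standard MailDir capitalisation). A hedged sketch:

      ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur
      sudo chown -R mailtest:mailtest /home/mailtest/MailDir
      sudo chmod -R u+rwX /home/mailtest/MailDir

      # dovecot.conf should point at the same path, e.g.
      #   mail_location = maildir:~/MailDir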

    Read the article

  • Allow incoming connections on Windows Server 2008 R2

    - by Richard-MX
    Good day, people. First, I'm new to Windows Server; I've always used the Linux/Apache combo, but my client has an AWS EC2 Windows Server 2008 R2 instance and he wants everything in there. I'm working with IIS and PHP enabled as FastCGI and everything is working, but I can't see the websites stored on it from the internet. The public DNS that AWS gave us for that instance is:

      http://ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com/

    But if I paste that address into a browser, I get nothing; no IIS logo or anything like that. My common sense tells me that the firewall could be blocking access. Can anyone tell me where to enable some rules to get this thing working? I don't want to start enabling rules at random and make the system insecure. If you need any additional info, ask and I will provide it. Thanks in advance.

    UPDATE: Amazon EC2 displays this:

      Public DNS: ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com
      Private DNS: ip-XX-XXX-XX-252.us-west-2.compute.internal
      Private IPs: XX.XXX.XX.25

    In my test micro instance, I just use the public DNS address (the one that starts with "ec2") and it works like a charm (of course, the micro instance has its own public DNS; I'm not assuming the same address for both instances). For the large instance I tried to do the same, setting up everything as in the micro instance, but if I use the public DNS it doesn't load anything. I'm suspicious of the Windows Firewall, but the HTTP-related stuff is enabled. What should I do to get access to the large instance? I don't want to set up the domain yet; I want access via an Amazon URL.

    2ND EDIT: All fixed. Charles pointed out that the Security Group might not be properly set up for the instance. He was right: I just added the HTTP service to the rules and everything works.
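
    For reference, the fix described in the second edit can be done from the console (EC2 > Security Groups > Inbound, add HTTP) or scripted; with the modern AWS CLI it looks roughly like this (group name is a placeholder):

      aws ec2 authorize-security-group-ingress \
          --group-name my-web-sg \
          --protocol tcp --port 80 \
          --cidr 0.0.0.0/0    # allow HTTP from anywhere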

    Read the article

  • Joomla SMTP Configuration Issue

    - by msargenttrue
    I'm having an issue with the SMTP setup of my Joomla website when trying to send mass emails through the CB Mailing (mass email) extension. I receive this error:

      SMTP Error! The following recipients failed: ...
      Number of users to whom e-mail was sent: 0 (Total in list: 1)

    The old version of this website's mass emailer worked fine; however, in order to add Kunena Forum and maintain compatibility I had to make several upgrades to the site. Both the new and old configurations are outlined below.

    Server for website: Mac OS X Server 10.4.11, Apache 1.3.41, PHP 5.2.3, MySQL 4.1.22
    Server for SMTP: Eudora Internet Mail Server 3.3.9 (EIMS Server X)

    New configuration: Joomla 1.5.25, Community Builder 1.7.1, CB Paid Subscriptions (CBSubs) 1.2.2, CB Mailing 2.3.4, Kunena Forum 1.7.0, Legacy 1.0 plugin disabled

    Mail settings (new config):
      Mailer: SMTP Server
      Mail from: [email protected]
      From Name: CASPA
      Sendmail Path: /usr/sbin/sendmail
      SMTP Authentication: Yes
      SMTP Security: None
      SMTP Port: 25
      SMTP Username: [email protected]
      SMTP Password: xxxxxxx
      SMTP Host: 209.48.40.194

    Old configuration (working SMTP configuration): Joomla 1.5.9, Community Builder 1.2, CB Paid Subscriptions (CBSubs) 1.0.3, CB Mailing 2.1, Legacy 1.0 plugin enabled

    Mail settings (old config):
      Mailer: SMTP Server
      Mail from: [email protected]
      From Name: CASPA
      Sendmail Path: /usr/sbin/sendmail
      SMTP Authentication: Yes
      SMTP Username: [email protected]
      SMTP Password: xxxxxxx
      SMTP Host: 209.48.40.194

    (Notice how the older version of Joomla is missing the two fields SMTP Security and SMTP Port.) Thanks in advance!
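
    A useful way to take Joomla out of the equation is to drive the SMTP conversation by hand from the web server and see which step fails. A sketch (host from the question, addresses are placeholders):

      telnet 209.48.40.194 25
      EHLO caspa.example
      AUTH LOGIN                      # server prompts for base64 username/password
      MAIL FROM:<user@example.com>
      RCPT TO:<someone@example.com>
      QUIT

    If AUTH LOGIN is rejected here, the problem is on the EIMS side rather than in Joomla's mail settings.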

    Read the article

  • Ubuntu karmic, chroot, Ubuntu server 8.04 LTS and Plesk

    - by morpheous
    I am pretty new to the Linux environment, and although I have been using the Ubuntu Karmic desktop for development work, I have always preferred the GUI tools and very rarely use the terminal. I am about to launch a site on a VPS which will be running Ubuntu Server 8.04 LTS and Plesk. I want to install both the server (8.04 LTS) and Plesk in a chroot on my Karmic desktop. I also want to test-install some packages I have written for the server, and generally familiarize myself with the environment before I sign up to the hosting service. I will be running a streamlined server (no GUI) with only the following software:

      - PHP
      - Python
      - MySQL
      - PostgreSQL
      - Apache
      - Plesk
      - Third-party packages I developed

    In terms of mail, I think the hosting provider provides a mail service, so I don't know if I will need to run my own mail server. I would like some advice on the following:

      1. How can I install Ubuntu 8.04 Server LTS in a chroot?
      2. How can I install Plesk on Ubuntu 8.04 Server LTS (from the command line)?
      3. Can anyone recommend how to back up the remote server when I go live (i.e. what tools to use)? I will need to back up both databases, as well as some config data and installation packages.
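
    For the chroot question, debootstrap is the usual tool on Ubuntu; a minimal sketch (hardy is the 8.04 release codename, the target path is a placeholder):

      sudo apt-get install debootstrap
      sudo debootstrap hardy /srv/hardy-chroot http://archive.ubuntu.com/ubuntu/
      sudo chroot /srv/hardy-chroot /bin/bash   # a shell "inside" the 8.04 system

    Plesk would then be installed from inside the chroot with its command-line installer, though whether that installer tolerates a chroot is something to verify against Parallels' documentation.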

    Read the article

  • Django + nginx: jQuery/CSS not being served

    - by PlanetUnknown
    I'm using Apache + mod_wsgi for Django, and all CSS/JS/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jQuery/CSS is not getting loaded for them, so the page looks jumbled up. My HTML files use code like this:

      <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
      <script type="text/javascript" src="http://x.x.x.x:8000/js/custom.js"></script>

    My nginx configuration in sites-available is like this:

      server {
          listen 8000;
          server_name localhost;
          access_log /var/log/nginx/aa8000.access.log;
          error_log /var/log/nginx/aa8000.error.log;
          location / {
              index index.html index.htm;
          }
          location /static/ {
              autoindex on;
              root /opt/aa/webroot/;
          }
      }

    There is a directory /opt/aa/webroot/static/ which has the corresponding css and js directories. The odd thing is that the pages show fine when I access them; I have cleared my cache etc., but the page loads fine for me from various browsers. Also, I don't see any 404s or errors in the nginx log files; actually, the logs for nginx are not being refreshed at all. I restarted nginx as root; is that incorrect? There is a www-data user defined in the nginx configuration file. Any pointers would be great.
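
    One mismatch stands out: the HTML requests /css/... and /js/..., but the only static location defined is /static/, so those requests fall through to location /. Assuming the files live under /opt/aa/webroot/static/css and /opt/aa/webroot/static/js, a hedged sketch of a matching location:

      location ~ ^/(css|js)/ {
          root /opt/aa/webroot/static;   # /css/custom.css -> /opt/aa/webroot/static/css/custom.css
      }

    That might also explain the silent logs: if those requests never match a static location, whatever answers them is not writing to the aa8000 log files.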

    Read the article

  • Remote server hangs, gets stuck. How to debug?

    - by bibstha
    I have a VPS running on VMware ESX with Ubuntu 8.04 LTS. It has been running smoothly for the past 3 months; however, recently we've noticed two strange bugs.

    a. The server hangs; today was the second time. The nature of the hang is very strange: I can ping the server and it sends back responses fine, however all other services like sshd, apache, mysql etc. do not respond at all. When it's working:

      telnet servername 22
      Escape character is '^]'.
      SSH-2.0-OpenSSH_5.X Debian-5ubuntu1

    and other web services run fine. When it's hung, I can make TCP connections to 22 as well as 80, but receive no response at all:

      telnet servername 22
      Escape character is '^]'.

    How can I debug this problem? Are there any daemons I can run that will periodically log status? Please tell me how to proceed with this.

    b. The other strange problem is that lately I am unable to transfer files larger than around 100KB; smaller files of around 1-2KB work fine. scp anotherserver:filename . or wget http://www.example.com/file gets stuck. There is still around 6GB of space remaining, so I don't think that is an issue. Any pointers where I should look?
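
    For (a), the sysstat package gives you periodic snapshots you can read back after a hang; for (b), a transfer that dies around 100KB while tiny files work is a common path-MTU symptom worth probing. A hedged sketch of both:

      # 1) periodic activity logging, reviewable later with sar
      sudo apt-get install sysstat
      sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
      sar -A -f /var/log/sysstat/sa$(date +%d)   # read back a day's samples

      # 2) probe for an MTU black hole (1472 payload + 28 header bytes = 1500)
      ping -M do -s 1472 anotherserver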

    Read the article

  • Multiple LDAP servers with mod_authn_alias: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf

      <AuthnProviderAlias ldap master>
          AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
          AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
          AuthLDAPBindPassword pa$$w0rd
      </AuthnProviderAlias>

      <AuthnProviderAlias ldap slave>
          AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
          AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
          AuthLDAPBindPassword pa$$w0rd
      </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf

      # mod_authz_ldap can be used to implement access control and
      # authenticate users against an LDAP database.
      LoadModule authz_ldap_module modules/mod_authz_ldap.so

      <IfModule mod_authz_ldap.c>
          <Location />
              AuthBasicProvider master slave
              AuthzLDAPAuthoritative Off
              AuthType Basic
              AuthName "Authorization required"
              AuthzLDAPMemberKey member
              AuthUserFile /home/setup/svn/auth-conf
              AuthzLDAPSetGroupAuth user
              require valid-user
              AuthzLDAPLogLevel error
          </Location>
      </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, that does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

      [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand? Does AuthBasicProvider ldap1 ldap2 only mean that if mod_authz_ldap can't find the user in ldap1, it will continue with ldap2, without any failover feature (i.e. ldap1 must be running and working fine)?
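
    When chasing this it helps to confirm, independently of Apache, what each directory server returns, once with the master's slapd running and once with it stopped. A sketch using OpenLDAP's client tools and the bind DN from the question:

      ldapsearch -x -H ldap://192.168.5.148:389 \
          -D "cn=anonymous,ou=it,dc=domain,dc=vn" -w 'pa$$w0rd' \
          -b "dc=domain,dc=vn" "(cn=quanta)"
      ldapsearch -x -H ldap://192.168.5.199:389 \
          -D "cn=anonymous,ou=it,dc=domain,dc=vn" -w 'pa$$w0rd' \
          -b "dc=domain,dc=vn" "(cn=quanta)"

    If the slave answers correctly on its own, the problem lies in how Apache iterates over the providers rather than in the directories themselves.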

    Read the article

  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and to which IP it should correspond. So I have

      000-default
      001-www.lapf.eu
      002-www.felkin.info
      003-www.seidhr.fr

    in this directory. My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar that is also hosting the site's data. Then I did an update, reinstalled mysql-server and mysql-common, and did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site. Now I've noticed that the other two sites are offline; pointing a browser at them just hangs until timeout. They used to function, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default-ssl in it. Why are there two directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"?

    Output of apache2ctl -S:

      VirtualHost configuration:
      92.243.20.169:80    Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
      92.243.21.141:80    xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
      92.243.4.114:80     xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
      wildcard NameVirtualHosts and default servers:
      *:80                is a NameVirtualHost
               default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
               port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
      Syntax OK
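
    For background: the Debian/Ubuntu convention is that every vhost file lives in sites-available, and sites-enabled holds only symlinks to the ones Apache should load, managed with a2ensite/a2dissite. A sketch of moving one site into that layout:

      sudo mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
      sudo a2ensite 001-www.lapf.eu     # recreates it as a symlink in sites-enabled
      sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload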

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic PHP sites on Snow Leopard. Usually I just go to my browser and type anything like:

      localhost
      http://localhost
      127.0.0.1
      mycomputername.local

    But suddenly, after installing a gem (Compass), none of this is working. I tried

      sudo apachectl restart

    thinking that I just needed to restart Apache, but no luck. My error log looks like:

      [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM
      [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down

    I also tried

      sudo apachectl -k start

    and got the error:

      Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option

    When I look at the code around that line, I see:

      <Directory />
          Options Indexes MultiViews + FollowSymLinks
          AllowOverride All
          Order allow, deny
          Allow from all
      </Directory>
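
    The "Illegal option" is the stray "+" in "+ FollowSymLinks": split off by the space, it is parsed as an option of its own. Note also that Apache rejects mixing +/- prefixed options with bare ones, and that Order allows no whitespace after the comma, so a consistent fix is:

      <Directory />
          Options Indexes MultiViews FollowSymLinks
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>

    After saving, apachectl configtest should report "Syntax OK" before restarting.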

    Read the article

  • Ubuntu Pound Reverse Proxy Load Balancing Based off active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly assigns the backend server to forward the request to. I've put one backend machine under so much load that it went into swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine, but it doesn't. I've read the man page, and it seems the directive DynScale 1 is what should monitor this, but it still redirects to the overloaded server. I've also added HAport 22 to the backends, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it sheds its load and responds again; but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

      ######################################################################
      ## global options:

      User "www-data"
      Group "www-data"
      #RootJail "/chroot/pound"

      ## Logging: (goes to syslog by default)
      ##  0  no logging
      ##  1  normal
      ##  2  extended
      ##  3  Apache-style (common log format)
      LogLevel 3

      ## check backend every X secs:
      Alive 5
      DynScale 1
      Client 1200
      TimeOut 1500

      # poundctl control socket
      Control "/var/run/pound/poundctl.socket"

      ######################################################################
      ## listen, redirect and ... to:

      ## redirect all requests on port 80 to SSL
      ListenHTTP
          Address 192.168.1.XX
          Port 80
          Service
              Redirect "https://xxx.com/"
          End
      End

      ListenHTTPS
          Address 192.168.1.XX
          Port 443
          Cert "/files/www.xxx.com.pem"
          Service
              BackEnd
                  Address 192.168.1.1
                  Port 80
                  HAport 22
              End
              BackEnd
                  Address 192.168.1.2
                  Port 80
                  HAport 22
              End
          End
      End
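
    Two Pound directives that may be worth experimenting with here, sketched with the addresses from the question: a per-backend Priority to weight traffic statically, and an Emergency backend that only receives requests once all regular backends are considered dead (the third address is hypothetical):

      Service
          BackEnd
              Address 192.168.1.1
              Port 80
              Priority 5        # 1-9; higher receives proportionally more traffic
          End
          BackEnd
              Address 192.168.1.2
              Port 80
              Priority 5
          End
          Emergency
              Address 192.168.1.3
              Port 80
          End
      End

    Neither reacts to load as such; DynScale rescales priorities from observed response times, so a backend that accepts TCP connections quickly but serves slowly can still look healthy to it.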

    Read the article

  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines: a BSD server and a Windows 2000 server. When someone goes to our website, they are connected to the BSD server which, using Apache's proxy module, relays the requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about 9 years ago) was that it was more secure to have outside people connect to the BSD server rather than to the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed. These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server necessary for security in this setup? From my research online I don't see anyone else running a setup like this (no one questioning it, at least). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application?

    ** there's a good hardware firewall between the server and the internet, with only minimal ports open

    Thank you.

    Read the article

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal Composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the Composer and SVN servers; some should not. The users that should be able to reach these servers all belong to the same group. How can I set up Apache on the Composer and SVN server to only grant access to the users in that group? Alternatively, can I set up the webservers in such a way that only users from that group are able to make a connection to our Composer and SVN servers? The best thing we have come up with so far is using SSL client certificates: we simply place a client certificate on all servers which can be used to access Composer and SVN, and only the right user group has read access to the certificate. A bit clunky, but it may work. Still, I'm looking for something better.
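
    For reference, the client-certificate idea in the question maps onto a small amount of Apache config on the Composer/SVN side; a sketch with a hypothetical internal CA:

      SSLEngine on
      SSLCACertificateFile /etc/apache2/ssl/internal-ca.crt
      <Location />
          SSLVerifyClient require   # only holders of a CA-signed client cert get in
          SSLVerifyDepth 1
      </Location>

    On the webservers, restricting who can use the certificate then becomes plain file permissions, e.g. chgrp composer-users client-key.pem && chmod 640 client-key.pem.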

    Read the article

  • Wake OSX 10.8 over WiFi (WoWL - Wake on WiFi LAN)

    - by WrinkledCheese
    I have a stack of Apple Mac minis running SSH servers for remote login. The problem is that I can't seem to get them to wake up. From what I've gathered, as of Mac OS X 10.7 you need a boot-time option set: darkwake=0 on 10.7 and darkwake=no on 10.8. So I tried this, and then I came to the realization that this will probably work for a wired connection, but I'm using WiFi. My wired connections are used for another local subnet without internet access, so I have to get them to wake on WiFi. I realize that I could just set the stack of Mac minis to never sleep, but I'm looking for a sleep-enabled option. These services don't require a fast initial response; once the connection is made it will stay active, and once it's no longer active the machine can hopefully go back to sleep. I have a FreeBSD box running avahi-daemon in order to try to wake the Macs with the Bonjour service, but it doesn't seem to work. I tried registering the service as Gordon suggested in the question below, but that just removes the timeout when discovering and resolving services; it still doesn't allow ssh connections to port 22 when the machine is asleep. For reference, I want what Gordon Davisson explained on this question: Wake on Demand for Apache server in OS X 10.8
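
    The relevant power-management switches live in pmset; a few worth checking on each mini (these settings exist on 10.8, though whether womp applies to the WiFi interface depends on the hardware):

      pmset -g                        # current settings; look for womp and darkwake
      sudo pmset -a womp 1            # "Wake for network access" (magic packet)
      sudo pmset -a ttyskeepawake 1   # don't idle-sleep while an ssh session is active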

    Read the article

  • open-source LAMP package for simple accounting and receiving?

    - by user26664
    Hello. I am involved with a computer-based charity where we take donations of old equipment; we recycle some of it, rehabilitate most of it, make it available through grants and 'adoptions', and sell some items. What we're looking for is a LAMP package that can handle our records of equipment donations, print receipts for donors, and also track our thrift-store sales with receipts. Donors will want receipts of their donations for tax-deduction purposes. We'll want to print reports of incoming items from time to time, say monthly and yearly. For the thrift store, we'll also need receipts, cash reports (especially for reconciling cash drops), and reports of items sold from time to time. I'm thinking this might be a single package, but it might be two. We don't want to track our shop inventory with either of these programs; that's another program/project. We just need to know what was donated and what was sold. It must be open source, and ideally we'd like it to run on LAMP: Linux, Apache, MySQL, PHP. But we will consider other open-source platforms.

    Read the article

  • Is iptables capable of this or should I go with mod_proxy?

    - by Jesper
    I'm trying to configure my network to receive an incoming connection on one device and then redirect it to another device on a specific port. Right now I'm working with port 80 and a device running Apache. The problem I'm facing is that when the forwarding is done, it also sets the source IP to the first device's address instead of the source IP of the user connecting to the service. Let me illustrate it:

      [Internet User] = 7.7.7.7 connects to [Device 1] = 1.1.1.1:80
      [Device 1] forwards it to [Device 2] = 1.1.1.2:80
      [Device 2] outputs the response that [Internet User] sees

    So on [Device 2] I will naturally see [Device 1]'s IP in the logs, but I want to see if there is a way to connect the internet user through [Device 1] to [Device 2] while seeing the real source IP in the logs on [Device 2]. Is that possible? My rule set looks like this at the moment (on Device 1):

      iptables -P FORWARD ACCEPT
      iptables -t nat -I PREROUTING -j DNAT -p tcp --dport 80 --to-destination 1.1.1.2:80
      iptables -t nat -I POSTROUTING -j SNAT -p tcp -d 1.1.1.2 --to-source 1.1.1.1

    On [Device 2] everything incoming on port 80 from [Device 1] is accepted, as are all related and established connections. So, would there be any way to get the real source onto [Device 2]? Let me know if you need more information!
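
    The SNAT rule is exactly what overwrites the client address. The usual approach is to drop it and instead make sure [Device 2] routes its replies back through [Device 1], e.g. by using 1.1.1.1 as its default gateway. A sketch under that assumption:

      # on Device 1: forward and DNAT only -- the original source IP is preserved
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -P FORWARD ACCEPT
      iptables -t nat -I PREROUTING -p tcp --dport 80 -j DNAT --to-destination 1.1.1.2:80

      # on Device 2: return traffic must go back via Device 1
      ip route replace default via 1.1.1.1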

    Read the article

  • WebDAV System Error 67 in Windows XP

    - by Nixphoe
    Issue: I'm having issues getting WebDAV to work on the command line in Windows XP, both Service Pack 2 and Service Pack 3:

      C:\>net use z: https://mywebsite.com/software/
      System error 67 has occurred.

      The network name cannot be found.

    I have tested this against two WebDAV servers, Ubuntu Apache and Windows Server 2003 IIS. Both give the same result.

    Things that haven't worked: I've installed the suggested Microsoft KB on my XP machines, to no avail. I've also found the following reg key:

      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
      UseBasicAuth REG_DWORD 1

    I've tried the following work-arounds dug up on the web, all producing the same result:

      net use z: https://mywebsite.com/software
      net use z: https://mywebsite.com/software#
      net use z: https://mywebsite.com/software/
      net use z: https://mywebsite.com/software/#

    I've also tried all the above combinations with a user added: /user:user and /user:user@domain. I've also tried using http:// rather than https://, and "\\server.com@ssl:443\folder". I've gone over networking-related issues as @WesleyDavid had pointed out.

    Things that do work: I can connect to the WebDAV folder via the URL, and with a mapping in Network Places, on XP. But the command line doesn't work (I need a drive letter). Windows 7 works perfectly with the same command.

    My dilemma: I need this to work with a drive letter. What else can I try to get this working?
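
    On XP, drive-letter mapping to WebDAV goes through the WebClient service (the WebDAV redirector), so it is worth confirming the service is actually running before trying further URL syntaxes:

      REM the redirector must be running for net use to map WebDAV
      sc query WebClient
      net start WebClient
      net use z: https://mywebsite.com/software/ /user:username

    It is also worth noting that XP's redirector is fussy about SSL; if http:// maps where https:// fails, the server's SSL configuration is the next suspect.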

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:

      Error while checking the SSL Certificate!!
      Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

      openssl x509 -in input.crt -out input.der -outform DER
      openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate-chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then click on "secure site" "CA Bundle for Apache Server"; copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
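
    Assembling and sanity-checking a chain locally is straightforward with openssl; a sketch with placeholder file names:

      # chain = intermediate(s) first, then root, all PEM, without the server cert
      cat intermediate.pem root.pem > chain.pem

      # verify the server certificate against the assembled chain
      openssl verify -CAfile chain.pem server.pem

      # or inspect what a live endpoint actually serves
      openssl s_client -connect myelb.example.com:443 -showcerts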

    Read the article

  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine set up for a small (<20) team; there may also be a few users who access the setup as business clients. I am familiar with setting up PHP for Apache and, recently, Nginx. I am not familiar with Ruby, Ruby on Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components, as it takes care of dependencies and updates. I have set up Nginx with PHP-FPM successfully and am struggling with Redmine. As suggested here, I got Redmine running on port 3000:

      # /etc/init/redmine.conf
      # Redmine
      description "Redmine"

      start on runlevel [2345]
      stop on runlevel [!2345]

      expect daemon
      exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d

    And using the Nginx config on this page, I used Nginx to proxy requests to WEBrick:

      server {
          listen 80;
          server_name myredmine.example.com;

          location / {
              proxy_pass http://127.0.0.1:3000;
          }
      }

    This works well locally. I wanted some opinions before trying this out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor WEBrick for failure?
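
    On the monit question, it can watch the WEBrick port directly and kick the upstart job when it stops answering; a hedged sketch (job and file names as in the question):

      # /etc/monit/conf.d/redmine
      check host redmine with address 127.0.0.1
          if failed port 3000 protocol http
              then exec "/sbin/restart redmine"

    As for WEBrick itself: it is single-threaded and aimed at development, so even for 20 users a Passenger or Unicorn setup behind the same Nginx proxy is the more common production choice.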

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache: five web servers behind a Varnish server, load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

      director balancer round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
          { .backend = web4; }
          { .backend = web5; }
      }

    Now I'm working on a new Django project that will be a new section of this site, running on example.com/new-section. After checking the documentation I found I can do something like this:

      sub vcl_recv {
          if (req.url ~ "^/new-section/") {
              set req.backend = newbackend;
          } else {
              set req.backend = default;
          }
      }

    That is, using a different backend for a subdirectory under the same domain. My question is: how do I make something like this work with my director and load-balancing setup? I'm probably going to run two or more web servers (backends) for the new Django project, each with a mix of Gunicorn, Nginx, and a few Python packages, and I would like to put all of those in their own Varnish director for load balancing. Is it possible to use the above approach to decide which director to use, like this?

      sub vcl_recv {
          if (req.url ~ "^/new-section/") {
              set req.director = newdirector;
          } else {
              set req.director = balancer;
          }
      }

    All suggestions welcome. Thanks!
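
    For what it's worth, VCL has no req.director: a director is assigned through req.backend exactly like a plain backend, so the second snippet collapses into the first. A sketch under that reading (the Django backend names are hypothetical):

      director newdirector round-robin {
          { .backend = django1; }
          { .backend = django2; }
      }

      sub vcl_recv {
          if (req.url ~ "^/new-section/") {
              set req.backend = newdirector;   # directors are used like backends here
          } else {
              set req.backend = balancer;
          }
      }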

    Read the article

< Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >