Search Results

Search found 5398 results on 216 pages for 'xorg conf'.

Page 141/216

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file:

        mail_location = maildir:/home/%d/%n/Maildir
        auth default {
          mechanisms = plain login
          userdb passwd-file {
            args = /home/%d/etc/passwd
          }
          passdb passwd-file {
            args = /home/%d/etc/shadow
          }
          socket listen {
            master {
              path = /var/run/dovecot/auth-worker
              mode = 0600
            }
          }
        }

    I faced one issue I can't resolve myself. Is there any way to create a users-to-domains mapping and provide the username in mail_location? Examples:
    1. currently I have /home/domain.com/user/Maildir
    2. I'd like to have /home/USER/domain.com/user/Maildir
    Can I achieve this somehow? Greets, Stojko
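    A possible direction (an editor's sketch, not from the original thread): Dovecot resolves ~ in mail_location against each user's home directory, so giving every user an explicit home in the passwd-file puts the whole layout under your control. A minimal sketch assuming Dovecot 1.x passwd-file syntax; "someuser" and the uid/gid are placeholders:

        # dovecot.conf -- maildir relative to the per-user home
        mail_location = maildir:~/Maildir

        # /home/domain.com/etc/passwd -- hypothetical entry; the home field carries the mapping
        user:x:5000:5000::/home/someuser/domain.com/user

    The password stays in the shadow file as before; only the userdb home changes per user.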

    Read the article

  • Apache server on Raspberry Pi not visible from outside (public IP)

    - by Kronos
    I have made a fresh install of Arch Linux ARM on a Raspberry Pi and mounted a LAMP stack there, all fresh. I also have another Arch (x86) on my laptop with Apache, and as far as I know two web servers cannot run on the same network segment, so the problem is as follows. On my laptop, with its Apache running, if I enter via the public IP of my network everything works and I can see my website. But (obviously with that server turned down) if I enter from the public IP with only the Raspberry Pi's Apache running, I cannot see my website. If I access it via the local network it works fine and I can see my website. So I can reach the Raspberry Pi's site only locally, while my other web server is reachable both locally and publicly. I have the same conf files on both of them, so what is the difference? I was planning on making the RPi a development server. Thanks in advance
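    An editor's note, not from the original thread: two web servers can share a LAN just fine; what they cannot share is one public IP and port, so the router's port forward decides which box answers from outside (here, presumably the laptop holds the port-80 forward). A hedged sketch: give the Pi's Apache an alternate port and forward that port on the router (router UI steps vary by model):

        # httpd.conf on the Raspberry Pi -- listen on a second port
        Listen 8080

    Then forward external port 8080 to the Pi's LAN address and browse http://your.public.ip:8080/.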

    Read the article

  • Setting up a static IP address (public) in Ubuntu

    - by ycseattle
    I have a business-class internet connection and need to set up a static IP address for a machine. Searching online only turns up how to set up static local IP addresses (like 192.168..). I tried the same technique, setting only the IP address and netmask, but after restarting networking the computer could not connect to the outside world. This is what I did:

    1) edit /etc/network/interfaces

        iface eth0 inet static
            address 173.10.xxx.xx
            netmask 255.255.255.252

    2) edit /etc/resolv.conf

        search wp.comcast.net
        nameserver xx.xx.xx.xxx
        nameserver xx.xx.xx.xxx

    3) restart networking

        sudo /etc/init.d/networking restart

    The last step didn't report an error, and ifconfig shows the IP address was set, but this server cannot connect to the outside world; ping google.com reports "unknown host google.com". Any ideas?
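    One thing that stands out (an editor's observation, not from the original thread): the interfaces stanza never sets a default gateway, which would leave the box with no route off its /30 -- "unknown host" follows because the nameservers are unreachable. A sketch; the gateway value must come from the ISP (on a /30 it is usually the other usable address):

        iface eth0 inet static
            address 173.10.xxx.xx
            netmask 255.255.255.252
            gateway 173.10.xxx.xx   # hypothetical -- substitute the ISP's router address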

    Read the article

  • HAProxy not passing X-Forwarded-For on HTTP POST

    - by Mark L
    Hello, I've set up HAProxy with the forwardfor option so it'll pass on the user's IP to PHP via $_SERVER["HTTP_X_FORWARDED_FOR"]. If the page request isn't a POST the header is populated fine, but if it is then it won't be populated. Any ideas where I've gone wrong? Thanks everyone! My whole HAProxy conf file for reference:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 4096
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check

        listen smtp :25
            mode tcp
            option tcplog
            balance roundrobin
            server smtp 192.168.240.4:25 check
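    A plausible lead (editor's note, not a confirmed answer): on HAProxy 1.3/1.4, once a connection goes keep-alive only its first request is fully inspected, so later requests on the same connection -- often the POST that follows an initial GET -- miss the injected header. The classic remedy was forcing request-at-a-time processing:

        defaults
            mode http
            option forwardfor
            option httpclose    # newer releases offer "option http-server-close" instead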

    Read the article

  • Plesk file permissions - Apache/PHP conflicting with user accounts.

    - by hfidgen
    Hiya, I'm building a Drupal site which performs various automatic disk operations as the apache user (uid 48). The problem is that the site was set up on a subdomain belonging to user ID 10001 (i.e. my main FTP account), so the filesystem belongs to that user ID. So I keep getting errors like this:

        warning: move_uploaded_file() [function.move-uploaded-file]: SAFE MODE Restriction in effect.
        The script whose uid is 10001 is not allowed to access
        /var/www/vhosts/domain.com/httpdocs/sites/default/files/images/user owned by uid 48
        in /var/www/vhosts/domain.com/httpdocs/includes/file.inc on line 579.

    I've tried changing the apache group in httpd.conf to apache:psacln, psacln being the default group for all web users, but that hasn't helped. The situation now is:

        ..../files/images/     = 777, chown = ftplogin:psacln
        ..../files/images/user = 775, chown = apache:psacln
        ..../files/tmp         = 777, chown = ftplogin:psacln

    So apparently uid 48 and 10001 both have permission to write to any of the three directories involved, but still can't. Am I missing something here? Can anyone help? Thanks!
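    Worth noting (editor's aside, not from the thread): the message is PHP safe_mode's UID comparison, not a filesystem denial -- safe_mode refuses access when the script's owner differs from the file's owner, so the permission bits can be perfect and it will still fail. One route is disabling safe_mode for just this vhost; a sketch assuming Plesk's per-domain override file:

        # /var/www/vhosts/domain.com/conf/vhost.conf  (create it if absent)
        <Directory /var/www/vhosts/domain.com/httpdocs>
            php_admin_flag safe_mode off
        </Directory>

    Then rebuild the domain's Apache config -- on older Plesk that was something like /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=domain.com (treat the exact command as version-dependent).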

    Read the article

  • Getting Passenger working on an existing Apache

    - by fl00r
    Hi! I've got Apache (-v):

        Server version: Apache/2.0.63
        Server built:   Nov 29 2009 15:23:34
        Cpanel::Easy::Apache v3.2.0 rev4899

    I want to start a new Sinatra application on Passenger, and I've just installed the passenger gem. So now I need to set up the Apache configuration. httpd.conf holds the settings of many other applications on this server, so I can't just reinstall Apache with passenger-install-apache2-module. I need to set up the existing Apache with Passenger. What do I do now?
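    For what it's worth (editor's note, unverified against this cPanel build): passenger-install-apache2-module does not replace Apache -- it compiles mod_passenger against the Apache already installed and then prints a snippet to paste into httpd.conf. That snippet typically looks like the sketch below; the exact paths must be taken from the installer's own output:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-x.y.z/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-x.y.z
        PassengerRuby /usr/bin/ruby

        <VirtualHost *:80>
            ServerName sinatra.example.com     # hypothetical hostname
            DocumentRoot /srv/myapp/public     # the Sinatra app's public/ directory
        </VirtualHost>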

    Read the article

  • Apache2 alias in virtual host

    - by 0x7c00
    I have multiple virtual hosts on one server and plan to have some aliases set up in one of them. So I added Alias /foo/ /path/to/foo/ to that virtualhost directive, but it has no effect: requests for host1/foo/ return 404. If I add the same line to /etc/apache2/mods-available/alias.conf it works, but then host2 also shares the alias. Is there a way to make the alias work only for host1? By the way, apache2ctl -l lists no mod_alias.c, which is weird.
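    Two hypotheses from an editor: apache2ctl -l only lists statically compiled modules, so a mod_alias loaded as a shared module shows up under apache2ctl -M instead; and since the directive works globally, the Alias line may simply be landing in the wrong vhost (when no ServerName matches, the first vhost wins). A per-vhost sketch:

        <VirtualHost *:80>
            ServerName host1
            Alias /foo /path/to/foo
            <Directory /path/to/foo>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Checking apache2ctl -S confirms which vhost actually answers for host1.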

    Read the article

  • How to add exceptions to apache reverse proxy rules

    - by Tania
    I am trying to set up an Apache reverse proxy so that requests get proxied to another application running on 8080. However, I want some directories to be served directly rather than forwarded to the proxy. What I want is:

        http://localhost/              -> http://localhost:8080/myapp
        http://localhost/images        -> /var/www/html/images
        http://localhost/anything-else -> http://localhost:8080/myapp/anything-else

    My current httpd.conf is:

        ProxyRequests Off
        ProxyTimeout 600
        ProxyPreserveHost On
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
        RewriteEngine On
        RewriteRule ^/(.*) http://localhost:8080/VirtualHostBase/http/%{SERVER_NAME}:80/myapp/VirtualHostRoot/$1 [L,P]

    What configuration should I do to make the local path exception work? Thank you, Tania
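    mod_proxy has a dedicated exclusion syntax for this -- a ProxyPass whose target is "!" -- and exclusions must precede the catch-all rule. A sketch (the RewriteRule would need a matching carve-out too, e.g. a RewriteCond that skips /images):

        Alias /images /var/www/html/images
        ProxyPass /images !
        ProxyPass / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/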

    Read the article

  • Is there a maximum number of open files per process in Linux?

    - by Malax
    My question is pretty simple and is actually stated in the title. One of my applications throws "too many open files" errors at me, even though the limit for the user the application runs as is higher than the default of 1024 (lsof -u $USER reports 3000 open fds). Because I cannot imagine why this happens, I guess there might be a per-process maximum. Any idea is very appreciated! Edit: some values that might help...

        root@Debian-60-squeeze-64-minimal ~ # ulimit -n
        100000
        root@Debian-60-squeeze-64-minimal ~ # tail -n 4 /etc/security/limits.conf
        myapp soft nofile 100000
        myapp hard nofile 1000000
        root  soft nofile 100000
        root  hard nofile 1000000
        root@Debian-60-squeeze-64-minimal ~ # lsof -n -u myapp | wc -l
        2708
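    To answer the title directly: yes, nofile is enforced per process, and the value a given process actually got can differ from what the shell shows -- limits.conf applies only to PAM logins, not to daemons started from init scripts. A small diagnostic sketch, assuming the process name is "myapp":

        PID=$(pgrep -o myapp)
        grep 'open files' /proc/$PID/limits   # the limit this process really runs under
        ls /proc/$PID/fd | wc -l              # exact fd count (lsof also lists mmap'd files etc.)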

    Read the article

  • DHCPD offering IP address on wrong subnet

    - by Logan
    I recently added a new subnet for our wireless network, 192.168.254.0, to our DHCP configuration. Most of the time wireless clients connect just fine, but on seemingly random occasions the DHCP server sends out a DHCPOFFER with an IP address on the wrong subnet. Example:

        dhcpd: DHCPDISCOVER from MACADDRESS (ThinkBook2) via 192.168.254.1
        dhcpd: DHCPOFFER on 192.168.22.236 to MACADDRESS (ThinkBook2) via 192.168.254.1

    Here is the subnet configuration in dhcpd.conf:

        subnet 192.168.254.0 netmask 255.255.255.0 {
            option routers 192.168.254.1;
            range 192.168.254.34 192.168.254.254;
            default-lease-time 14400;
            authoritative;
        }

    How can I make sure the server always sends out an IP address on the right subnet?
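    A pattern worth checking (editor's hypothesis): ISC dhcpd hands out addresses from any subnet grouped in the same shared-network as the receiving interface or relay, so an offer from 192.168.22.x to a 192.168.254.1 relay usually means both subnets sit inside one shared-network block:

        # if dhcpd.conf has something like this, the two pools are interchangeable:
        shared-network LAN {
            subnet 192.168.22.0  netmask 255.255.255.0 { ... }
            subnet 192.168.254.0 netmask 255.255.255.0 { ... }
        }

    Declaring each subnet at top level (when they really are separate wires) ties offers to the subnet the DISCOVER arrived on.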

    Read the article

  • Bypass spam check for Auth users in postfix

    - by magiza83
    I would like to know if there is any option to "FILTER" authenticated users in postfix. Let me explain better: I have the amavis and dspam services between postfix(25) and postfix(10026), but I would like to skip these checks when the user is authenticated.

        postfix(25) -> policyd(10031) -> amavis(10024) -> postfix(10025) -> dspam(dspam.sock) -> postfix(10026) -> cyrus
            |                                                                                        /|\
            |____________________________________ auth users _____________________________________________|

    My conf is main.cf:

        ...
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_sasl_path = smtpd
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:10040,
            reject_invalid_hostname,
            reject_rbl_client multi.uribl.com,
            reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net,
            reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com,
            check_policy_service inet:127.0.0.1:10031,
            permit_mynetworks,
            reject
        ...

    I would like something like "FILTER smtp:localhost:10026" in case they are authenticated, because with my current configuration I'm only bypassing policyd, not amavis and dspam. Thanks.
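    One widely used arrangement (an editor's sketch, assuming amavis is wired in via content_filter): give authenticated clients the submission port and blank the filter there, so their mail never enters the amavis/dspam chain. In master.cf:

        # /etc/postfix/master.cf -- submission service that skips the content filter
        submission inet n       -       n       -       -       smtpd
            -o smtpd_sasl_auth_enable=yes
            -o content_filter=
            -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

    A policy service can also return a FILTER action (it receives sasl_username in the policy protocol), but the per-service override above is the simpler and more common route.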

    Read the article

  • How can I secure Postgres for remote access when not in a private network?

    - by orokusaki
    I have a database server on a VMware VM (Ubuntu 12.04.1 LTS server), and it just occurred to me that the server is reachable from the web, since the same physical server hosts a VM serving public websites. The iptables rules on the database VM allow only SSH traffic, loopback traffic, and TCP on port 5432. I will only allow host access to the Postgres server from the IP of the other VM on the same physical machine. Does this seem sufficient for security, assuming there aren't gaping holes in my general OS configuration, or is Postgres one of those services that should never be web facing (assuming there are some of "those")? Will I need to use hostssl instead of host in my pg_hba.conf, even though the data will presumably travel only on my own network?
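    A sketch of the usual belt-and-braces setup (editor's example; names and addresses are placeholders): bind Postgres only to the interfaces that need it and scope pg_hba.conf to the single client VM. hostssl adds TLS on top, which matters mainly if the inter-VM path could be observed:

        # postgresql.conf
        listen_addresses = 'localhost,10.0.0.5'      # internal address only

        # pg_hba.conf -- only the web VM may connect, with password auth
        host    appdb    appuser    10.0.0.6/32    md5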

    Read the article

  • How to create public_html (Apache2) with LDAP authentication?

    - by borjamf
    I'm running Apache2 on Ubuntu 12.04 Server and want to create a home directory for each LDAP user. I'm using LDAP for authentication and it's working OK; I've also done some tests with the LDAP module for Apache2 and it works. The problem with this LDAP authentication is that any successful login can access any ~user/public_html, even if the user is not the owner of that home -- for example, userldap2 can access userldap1/public_html. I want only userldap1 to be able to access userldap1's directory. Could anybody tell me how to control that with LDAP authentication? I hope you'll understand me. My config (auth_ldap.conf):

        <Directory /home/disco2/*/public_html>
            AuthName "Authentication"
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL ldap://prueba.borja/dc=prueba,dc=borja?uid?
            Require ldap-filter objectClass=posixAccount
        </Directory>
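    The filter as written authorizes every posixAccount, which explains the behavior. mod_authnz_ldap cannot interpolate the wildcard directory name into the Require line, so one workable (if verbose) approach is a block per user using the real "Require ldap-user" directive -- an editor's sketch, not the only option:

        <Directory /home/disco2/userldap1/public_html>
            AuthName "Authentication"
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative off
            AuthLDAPURL ldap://prueba.borja/dc=prueba,dc=borja?uid?
            Require ldap-user userldap1
        </Directory>

    Generating these blocks from a small script keeps them maintainable as users are added.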

    Read the article

  • CentOS disable filesystem check: superblock last mount time is in the future

    - by Zac B
    I'm persistently getting the "Superblock last mount time is in the future" error when booting CentOS 6. I've seen other questions asking how to resolve this error, but I know exactly why it's occurring: our development/testing VMs regularly have their dates set far from the present, and have all of their filesystems remounted. What I want to know is: how do I disable all consistency checking of the superblock mount time in CentOS? I've tried tune2fs -i 0 <device> and setting buggy_init_scripts=1 in /etc/e2fsck.conf, and neither has worked; the problem persists.
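    One detail that often trips this up (editor's note, worth verifying against the installed e2fsprogs): the e2fsck.conf settings must sit under an [options] section header, and on e2fsprogs new enough (around 1.41.10 and later, which should cover CentOS 6) the intended knob is broken_system_clock rather than buggy_init_scripts:

        # /etc/e2fsck.conf
        [options]
            broken_system_clock = 1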

    Read the article

  • Virtualmin - adding a virtual server stopped access to Rails app?

    - by Dan
    Hi, sorry if this sounds pretty simple; I'm new to Virtualmin and to running servers in general. I recently purchased a VPS and installed Virtualmin with no problems. I then installed mod_rails and uploaded my first Rails app, which I got working by adding the following to my Apache httpd.conf file:

        <VirtualHost *:80>
            ServerName testing.mydomain.com
            DocumentRoot /home/myapp/public
            <Directory /home/myapp/public>
                Allow from All
                AllowOverride all
                Options -MultiViews
            </Directory>
            RailsBaseURI /
        </VirtualHost>

    I then tried adding a virtual server through Virtualmin, using mydomain.com. The site this created (plus several sub-servers) is working as expected. However, my original Rails app is no longer accessible: its URL now sends me to the parent application (i.e. mydomain.com). The Rails app is not located within the parent's application directory; would this be a problem? Can anyone help? Any advice appreciated. Thanks.
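    A quick diagnostic (editor's suggestion): when no vhost's ServerName or ServerAlias matches a request, Apache falls back to the first vhost defined for that address, so the Virtualmin-generated vhost has likely either reordered things or claimed *.mydomain.com as an alias. This lists the vhosts in the order Apache matches them:

        apachectl -S    # or apache2ctl -S on Debian-family systems

    If testing.mydomain.com is being swallowed, either trim the wildcard ServerAlias from the Virtualmin vhost or recreate the Rails vhost as a Virtualmin sub-server so both live in one config scheme.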

    Read the article

  • Making lighttpd redirect from www.example.com to www.example.com/cgi-bin/index.pl

    - by jarmund
    What the title says. www.example.com is defined in lighttpd.conf as a virtual host:

        $HTTP["host"] =~ "(^|\.)example.com$" {
            server.document-root = "/usr/www/example.com/http"
            accesslog.filename   = "/var/log/www/example.com/access.log"
            $HTTP["url"] =~ ".pl$" {
                cgi.assign = (".pl" => "/usr/bin/perl")
            }
        }

    However, instead of going by the files listed in index-file.names (the usual index.html, default.html, etc.), I want all requests to the root of the virtual host to be forwarded to /cgi-bin/index.pl. What's the easiest/best way of doing this? This need is a special case and will only apply to this virtualhost. Is it possible to have that particular virtualhost send a redirect in the header?
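    Yes -- mod_redirect can answer the root URL with a redirect inside that same host conditional; "^/$" matches only the bare root, so everything else is untouched. A sketch:

        server.modules += ( "mod_redirect" )

        $HTTP["host"] =~ "(^|\.)example.com$" {
            url.redirect = ( "^/$" => "/cgi-bin/index.pl" )
        }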

    Read the article

  • What is the correct configuration for multiple apache2 vhosts and multiple php5-fpm pools?

    - by farinspace
    I have a group of sites (group A) which I would like to run under one php5-fpm pool, and a second group of sites (group B) which I would like to run under a second php5-fpm pool. I can define and create the pools in the fpm conf, and I've confirmed that the second pool is running under the user/group I defined. However, I am unclear how to set up the Apache virtual host config: I've tried a few apache2 configurations but can't seem to wire up the second pool. If you've done this, please help.
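    A sketch of one common wiring for the Apache 2.2 era, using mod_fastcgi (an editor's example; the paths and socket names are placeholders that must match the pool definitions):

        # global config: one external server per fpm pool
        FastCgiExternalServer /var/www/fcgi/groupA -socket /var/run/php5-fpm-groupA.sock
        FastCgiExternalServer /var/www/fcgi/groupB -socket /var/run/php5-fpm-groupB.sock

        <VirtualHost *:80>
            ServerName sitea.example.com
            AddHandler php-fcgi .php
            Action php-fcgi /fcgi-handler
            Alias /fcgi-handler /var/www/fcgi/groupA    # group B vhosts alias to .../groupB
        </VirtualHost>

    On Apache 2.4, mod_proxy_fcgi can instead proxy .php requests straight to each pool's socket, which makes the per-vhost part a single line.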

    Read the article

  • Ubuntu 13.10: nslookup not automatically appending DNS suffixes

    - by Alex
    When configuring an Ubuntu 13.10 server I ran into a problem. Usually (working on 12.10 machines) I add the following to my /etc/resolv.conf file:

        nameserver 192.168.2.180
        domain our.domain.com

    Normally, when I then ping a given host, e.g.:

        ping host01

    it resolves the FQDN to host01.our.domain.com. However in Ubuntu 13.10 this doesn't seem to be working; it just returns the following:

        ~# nslookup host01
        Server:  192.168.2.180
        Address: 192.168.2.180#53
        ** server can't find host01: SERVFAIL

    That is normal, since the DNS server doesn't respond to a bare 'host01' request. But if I do the same nslookup on an Ubuntu 12.10 machine, it automatically appends the 'our.domain.com' suffix to whatever I throw at it that doesn't already have the suffix. Is this a 13.10 bug, or am I doing something wrong?
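    A likely cause (editor's note): on this release /etc/resolv.conf is generated by resolvconf and hand edits get overwritten, so the domain/search setting never survives a reboot or interface event. The persistent place is the interface stanza (or /etc/resolvconf/resolv.conf.d/base):

        # /etc/network/interfaces
        iface eth0 inet static
            ...
            dns-nameservers 192.168.2.180
            dns-search our.domain.com

    followed by something like "ifdown eth0 && ifup eth0" (or resolvconf -u) to regenerate resolv.conf.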

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to back up my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to back up my entire C:\ drive (the only partition) in a way that: preserves all of my installed software and configuration, but avoids backing up log files and other ephemeral/temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we can all contribute towards a definitive list. Here's the list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, from http://support.crashplan.com/doku.php/articles/admin_excludes:

        .*/(?:42|\d{8,})/(?:cp|~).*
        (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
        .*\.part
        .*/iPhoto Library/iPod Photo Cache/.*
        .*\.cprestoretmp.*
        *\.rbf
        :/Config\\.Msi.*
        .*/Google/Chrome/.*cache.*
        .*/Mozilla/Firefox/.*cache.*
        .*\$RECYCLE\.BIN/.*
        .*/System Volume Information/.*
        .*/RECYCLER/.*
        .*/I386.*
        .*/pagefile.sys
        .*/MSOCache.*
        .*UsrClass\.dat\.LOG
        .*UsrClass\.dat
        .*/Temporary Internet Files/.*
        (?i).*/ntuser.dat.*
        .*/Local Settings/Temp.*
        .*/AppData/Local/Temp.*
        .*/AppData/Temp.*
        .*/Windows/Temp.*
        (?i).*/Microsoft.*/Windows/.*\.log
        .*/Microsoft.*/Windows/Cookies.*
        .*/Microsoft.*/RecoveryStore.*
        (?i).:/Config\\.Msi.*
        (?i).*\\.rbf
        .*/Windows/Installer.*

    Other excludes:

        .*\.(class|obj)
        .*/hiberfil.sys
        (?i).*\.tmp
        (?i).*/temp/
        (?i).*/tmp/
        .*Thumbs\.db
        .*/Local Settings/History/
        .*/NetHood/
        .*/PrintHood/
        .*/Cookies/
        .*/Recent/
        .*/SendTo/

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes:

        Virtual mail users are those that do not exist as Unix system users. They thus don't use
        the standard Unix methods of authentication or mail delivery and don't have home
        directories. That is how we are managing things here: mail users are defined in the
        database created by Postfix Admin rather than existing as system users. Mail will be
        kept in subfolders per domain and account under /var/vmail - e.g. me@example.com will
        have a mail directory of /var/vmail/example.com/me.

    But when he gives instructions for configuring Postfix Admin, he suggests this in Postfix Admin's config.inc.php:

        // Mailboxes
        // If you want to store the mailboxes per domain set this to 'YES'.
        // Examples:
        //   YES: /usr/local/virtual/domain.tld/username@domain.tld
        //   NO:  /usr/local/virtual/username@domain.tld
        $CONF['domain_path'] = 'NO';

    Is there an inconsistency?
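    It does look inconsistent (an editor's reading): /var/vmail/example.com/me is the per-domain layout, which in stock Postfix Admin corresponds to turning domain_path on while leaving the @domain suffix out of the mailbox directory name. Assuming the standard config.inc.php options:

        $CONF['domain_path'] = 'YES';       // nest mailboxes under the domain directory
        $CONF['domain_in_mailbox'] = 'NO';  // use "me", not "me@example.com", as the dir name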

    Read the article

  • Automatically reconnect to ODBC sources?

    - by stefan.at.wpf
    I am using Asterisk 1.8.10.1 and a MySQL database connected via ODBC to store CDRs. When my MySQL database isn't available when Asterisk starts, or has an outage while Asterisk is running, I would expect Asterisk to retry connecting to the database, but this doesn't happen! Does anyone know where I can enable some kind of automatic reconnect to databases in Asterisk? My res_odbc.conf looks like this:

        [asterisk]
        enabled => yes
        dsn => asterisk-connector
        username => user
        password => pass
        pre-connect => yes
        pooling => no
        limit => 1
        idlecheck => 1
        negative_connection_cache => 1
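    One avenue sits below Asterisk, in the driver itself: MySQL Connector/ODBC has an auto-reconnect option flag that can be set in the DSN. I believe the flag value is 4194304 (FLAG_AUTO_RECONNECT), but treat that as an assumption to verify against the Connector/ODBC documentation for the installed version:

        ; /etc/odbc.ini
        [asterisk-connector]
        Driver   = MySQL
        Server   = localhost
        Database = asteriskcdrdb       ; placeholder database name
        Option   = 4194304             ; FLAG_AUTO_RECONNECT -- verify before relying on it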

    Read the article

  • Fedora 14 serial console how-to needed

    - by lamba2
    Has anyone ever gotten a serial console working in Fedora 14? Is it as simple as adding to grub:

        serial --unit=0 --speed=38400
        terminal --timeout=10 serial console

    and adding to the kernel lines:

        console=tty0 console=ttyS0,38400

    ??? If so, this isn't working for me. I have agetty installed, and I'm using minicom, although I've heard you can also use "screen /dev/ttyUSB0" on the client side. The /etc/init/serial.conf file suggests it should be working, but nothing. I'm getting no joy from any of this after 2 days. Does anyone know a method that definitely works on Fedora 14 (no /etc/event.d/ needed or such)? Edit: client side I'm using a null modem cable and a USB-serial adaptor.
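    A sketch of the likely missing piece (editor's note, based on how upstart-era Fedora handled gettys; verify the stanza syntax on your install): the grub/kernel lines only give boot messages, while a login prompt needs a getty job on ttyS0, plus ttyS0 listed in /etc/securetty if root must log in there:

        # /etc/init/ttyS0.conf -- hypothetical upstart job
        start on runlevel [345]
        stop on runlevel [S016]
        respawn
        exec /sbin/agetty /dev/ttyS0 38400 vt100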

    Read the article

  • Problems getting SquirrelMail and Passenger working on Apache

    - by Kenneth
    I'm trying to set up squirrelmail and Passenger on the same Apache server, with one URL pointing to squirrelmail and everything else handled by Passenger. I've gotten as far as both squirrelmail and Passenger running fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a webmail/ URL at squirrelmail's directory, but that does not work. Here is my httpd.conf file:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Does not work:
            #Redirect webmail/ /usr/share/squirrelmail/
            #<Directory /usr/share/squirrelmail>
            #    Require all granted
            #</Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
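    Passenger provides a per-scope switch for exactly this case: PassengerEnabled off tells mod_passenger to leave a directory alone, so an Alias plus that directive carves squirrelmail out of the Rack app. A sketch for inside the vhost:

        Alias /webmail /usr/share/squirrelmail
        <Directory /usr/share/squirrelmail>
            PassengerEnabled off
            Order allow,deny
            Allow from all
        </Directory>

    (Two asides: Redirect needs a leading slash in the URL path, i.e. "Redirect /webmail ...", and "Require all granted" is Apache 2.4 syntax that errors on 2.2.)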

    Read the article

  • How can I disable reverse DNS in Apache 2?

    - by Creighton Hale
    I want to disable reverse DNS in Apache 2. I have done the following steps: in the apache2/apache2.conf file, HostnameLookups is set to Off. A tcpdump session confirmed that Apache was doing double-reverse lookups even though the HostnameLookups directive was clearly turned off. There are no hostnames in sites-available. The problem still remains. UPD: the version of apache is:

        dpkg -l | grep apache2
        ii  apache2-mpm-prefork  2.2.16-6+squeeze4  Apache HTTP Server - traditional non-threaded model
        ii  apache2-utils        2.2.16-6+squeeze4  utility programs for webservers
        ii  apache2.2-bin        2.2.16-6+squeeze4  Apache HTTP Server common binary files
        ii  apache2.2-common     2.2.16-6+squeeze4  Apache HTTP Server common files

        apache2 -l
        Compiled in modules:
          core.c
          mod_log_config.c
          mod_logio.c
          prefork.c
          http_core.c
          mod_so.c

    I think mod_security is not present.
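    Worth checking (editor's suggestion): mod_authz_host performs a double-reverse lookup for any request governed by an Allow/Deny rule that names a host or domain, regardless of HostnameLookups, and such rules can hide in conf.d, mods-enabled, or .htaccess files. A rough grep across the whole tree:

        grep -rniE '(allow|deny) from' /etc/apache2 | grep -viE 'from (all|none|[0-9])'
        # any surviving line that names a domain forces per-request reverse DNS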

    Read the article

  • Getting VSFTP running on Fedora 14

    - by Louis W
    I'm having trouble getting vsftpd running on Fedora 14. Here is what I have done so far; please let me know if I am missing something. When I try to connect through FTP it says the connection timed out.

    Installed vsftpd with yum:

        yum install vsftpd

    Edited the config file:

        vi /etc/vsftpd/vsftpd.conf

    Started the service and made sure it will always start up:

        service vsftpd start
        chkconfig vsftpd on

    Added and configured a new user:

        /usr/sbin/useradd upload
        /usr/bin/passwd upload
        usermod -c "This user cannot login to a shell" -s /sbin/nologin upload

    Added firewall rules:

        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT
        service iptables save
        service iptables restart

    Checked netstat (in reply to a comment below):

        tcp  0  0 0.0.0.0:21  0.0.0.0:*  LISTEN  23752/vsftpd
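    Two things to look at (editor's notes): -A appends, so on Fedora the new port-21 rule may land after the stock "REJECT all" line and never match -- check with iptables -L INPUT -n --line-numbers -- and even once port 21 answers, passive-mode data connections need either the FTP connection-tracking helper or an explicitly opened port range. A sketch:

        # vsftpd.conf -- pin the passive data ports so they can be allowed through
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100

        # firewall side: insert before any REJECT rule, and load the FTP helper
        iptables -I INPUT -p tcp --dport 21 -j ACCEPT
        iptables -I INPUT -p tcp --dport 40000:40100 -j ACCEPT
        modprobe nf_conntrack_ftp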

    Read the article
