Search Results

Search found 19788 results on 792 pages for 'remote host'.

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    I'm having a lot of trouble with my NginX configuration since I'm new to NginX; I'd been using Lighttpd for quite some time. Here is the base info.

    New machine: CentOS 6.3 64-bit, NginX 1.2.4-1.el6.ngx, PHP-FPM 5.3.18-1.el6.remi
    Old machine: CentOS 6.2 64-bit, Lighttpd 1.4.25-3.el6

    Original Lighttpd config file:

      #######################################################################
      ##
      ##  /etc/lighttpd/lighttpd.conf
      ##
      ##  check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
      ##
      #######################################################################

      #######################################################################
      ##
      ##  Some Variable definition which will make chrooting easier.
      ##
      ##  if you add a variable here. Add the corresponding variable in the
      ##  chroot example aswell.
      ##
      var.log_root    = "/var/log/lighttpd"
      var.server_root = "/var/www"
      var.state_dir   = "/var/run"
      var.home_dir    = "/var/lib/lighttpd"
      var.conf_dir    = "/etc/lighttpd"

      ##
      ## run the server chrooted.
      ##
      ## This requires root permissions during startup.
      ##
      ## If you run Chrooted set the the variables to directories relative to
      ## the chroot dir.
      ##
      ## example chroot configuration:
      ##
      #var.log_root    = "/logs"
      #var.server_root = "/"
      #var.state_dir   = "/run"
      #var.home_dir    = "/lib/lighttpd"
      #var.vhosts_dir  = "/vhosts"
      #var.conf_dir    = "/etc"
      #
      #server.chroot   = "/srv/www"

      ##
      ## Some additional variables to make the configuration easier
      ##

      ##
      ## Base directory for all virtual hosts
      ##
      ## used in:
      ## conf.d/evhost.conf
      ## conf.d/simple_vhost.conf
      ## vhosts.d/vhosts.template
      ##
      var.vhosts_dir  = server_root + "/vhosts"

      ##
      ## Cache for mod_compress
      ##
      ## used in:
      ## conf.d/compress.conf
      ##
      var.cache_dir   = "/var/cache/lighttpd"

      ##
      ## Base directory for sockets.
      ##
      ## used in:
      ## conf.d/fastcgi.conf
      ## conf.d/scgi.conf
      ##
      var.socket_dir  = home_dir + "/sockets"

      ##
      #######################################################################

      #######################################################################
      ##
      ##  Load the modules.
      include "modules.conf"
      ##
      #######################################################################

      #######################################################################
      ##
      ##  Basic Configuration
      ## ---------------------
      ##
      server.port = 80

      ##
      ## Use IPv6?
      ##
      #server.use-ipv6 = "enable"

      ##
      ## bind to a specific IP
      ##
      #server.bind = "localhost"

      ##
      ## Run as a different username/groupname.
      ## This requires root permissions during startup.
      ##
      server.username  = "lighttpd"
      server.groupname = "lighttpd"

      ##
      ## enable core files.
      ##
      #server.core-files = "disable"

      ##
      ## Document root
      ##
      server.document-root = server_root + "/lighttpd"

      ##
      ## The value for the "Server:" response field.
      ##
      ## It would be nice to keep it at "lighttpd".
      ##
      #server.tag = "lighttpd"

      ##
      ## store a pid file
      ##
      server.pid-file = state_dir + "/lighttpd.pid"

      ##
      #######################################################################

      #######################################################################
      ##
      ##  Logging Options
      ## ------------------
      ##
      ## all logging options can be overwritten per vhost.
      ##
      ## Path to the error log file
      ##
      server.errorlog = log_root + "/error.log"

      ##
      ## If you want to log to syslog you have to unset the
      ## server.errorlog setting and uncomment the next line.
      ##
      #server.errorlog-use-syslog = "enable"

      ##
      ## Access log config
      ##
      include "conf.d/access_log.conf"

      ##
      ## The debug options are moved into their own file.
      ## see conf.d/debug.conf for various options for request debugging.
      ##
      include "conf.d/debug.conf"
      ##
      #######################################################################

      #######################################################################
      ##
      ##  Tuning/Performance
      ## --------------------
      ##
      ## corresponding documentation:
      ## http://www.lighttpd.net/documentation/performance.html
      ##
      ## set the event-handler (read the performance section in the manual)
      ##
      ## possible options on linux are:
      ##
      ## select
      ## poll
      ## linux-sysepoll
      ##
      ## linux-sysepoll is recommended on kernel 2.6.
      ##
      server.event-handler = "linux-sysepoll"

      ##
      ## The basic network interface for all platforms at the syscalls read()
      ## and write(). Every modern OS provides its own syscall to help network
      ## servers transfer files as fast as possible
      ##
      ## linux-sendfile - is recommended for small files.
      ## writev         - is recommended for sending many large files
      ##
      server.network-backend = "linux-sendfile"

      ##
      ## As lighttpd is a single-threaded server, its main resource limit is
      ## the number of file descriptors, which is set to 1024 by default (on
      ## most systems).
      ##
      ## If you are running a high-traffic site you might want to increase this
      ## limit by setting server.max-fds.
      ##
      ## Changing this setting requires root permissions on startup. see
      ## server.username/server.groupname.
      ##
      ## By default lighttpd would not change the operation system default.
      ## But setting it to 2048 is a better default for busy servers.
      ##
      ## With SELinux enabled, this is denied by default and needs to be allowed
      ## by running the following once: setsebool -P httpd_setrlimit on
      ##
      server.max-fds = 2048

      ##
      ## Stat() call caching.
      ##
      ## lighttpd can utilize FAM/Gamin to cache stat call.
      ##
      ## possible values are:
      ## disable, simple or fam.
      ##
      server.stat-cache-engine = "simple"

      ##
      ## Fine tuning for the request handling
      ##
      ## max-connections == max-fds/2 (maybe /3)
      ## means the other file handles are used for fastcgi/files
      ##
      server.max-connections = 1024

      ##
      ## How many seconds to keep a keep-alive connection open,
      ## until we consider it idle.
      ##
      ## Default: 5
      ##
      #server.max-keep-alive-idle = 5

      ##
      ## How many keep-alive requests until closing the connection.
      ##
      ## Default: 16
      ##
      #server.max-keep-alive-requests = 18

      ##
      ## Maximum size of a request in kilobytes.
      ## By default it is unlimited (0).
      ##
      ## Uploads to your server cant be larger than this value.
      ##
      #server.max-request-size = 0

      ##
      ## Time to read from a socket before we consider it idle.
      ##
      ## Default: 60
      ##
      #server.max-read-idle = 60

      ##
      ## Time to write to a socket before we consider it idle.
      ##
      ## Default: 360
      ##
      #server.max-write-idle = 360

      ##
      ##  Traffic Shaping
      ## -----------------
      ##
      ## see /usr/share/doc/lighttpd/traffic-shaping.txt
      ##
      ## Values are in kilobyte per second.
      ##
      ## Keep in mind that a limit below 32kB/s might actually limit the
      ## traffic to 32kB/s. This is caused by the size of the TCP send
      ## buffer.
      ##
      ## per server:
      ##
      #server.kbytes-per-second = 128

      ##
      ## per connection:
      ##
      #connection.kbytes-per-second = 32

      ##
      #######################################################################

      #######################################################################
      ##
      ##  Filename/File handling
      ## ------------------------
      ##
      ## files to check for if .../ is requested
      ##
      ## index-file.names = ( "index.php", "index.rb", "index.html",
      ##                      "index.htm", "default.htm" )
      ##
      index-file.names += ( "index.xhtml", "index.html", "index.htm",
                            "default.htm", "index.php" )

      ##
      ## deny access the file-extensions
      ##
      ## ~    is for backupfiles from vi, emacs, joe, ...
      ## .inc is often used for code includes which should in general not be part
      ## of the document-root
      url.access-deny = ( "~", ".inc" )

      ##
      ## disable range requests for pdf files
      ## workaround for a bug in the Acrobat Reader plugin.
      ##
      $HTTP["url"] =~ "\.pdf$" {
        server.range-requests = "disable"
      }

      ##
      ## url handling modules (rewrite, redirect)
      ##
      #url.rewrite  = ( "^/$" => "/server-status" )
      #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

      ##
      ## both rewrite/redirect support back reference to regex conditional using %n
      ##
      #$HTTP["host"] =~ "^www\.(.*)" {
      #  url.redirect = ( "^/(.*)" => "http://%1/$1" )
      #}

      ##
      ## which extensions should not be handle via static-file transfer
      ##
      ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
      ##
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

      ##
      ## error-handler for status 404
      ##
      #server.error-handler-404 = "/error-handler.html"
      #server.error-handler-404 = "/error-handler.php"

      ##
      ## Format: <errorfile-prefix><status-code>.html
      ## -> ..../status-404.html for 'File not found'
      ##
      #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

      ##
      ## mimetype mapping
      ##
      include "conf.d/mime.conf"

      ##
      ## directory listing configuration
      ##
      include "conf.d/dirlisting.conf"

      ##
      ## Should lighttpd follow symlinks?
      ##
      server.follow-symlink = "enable"

      ##
      ## force all filenames to be lowercase?
      ##
      #server.force-lowercase-filenames = "disable"

      ##
      ## defaults to /var/tmp as we assume it is a local harddisk
      ##
      server.upload-dirs = ( "/var/tmp" )

      ##
      #######################################################################

      #######################################################################
      ##
      ##  SSL Support
      ## -------------
      ##
      ## To enable SSL for the whole server you have to provide a valid
      ## certificate and have to enable the SSL engine.::
      ##
      ##   ssl.engine = "enable"
      ##   ssl.pemfile = "/path/to/server.pem"
      ##
      ## The HTTPS protocol does not allow you to use name-based virtual
      ## hosting with SSL. If you want to run multiple SSL servers with
      ## one lighttpd instance you must use IP-based virtual hosting: ::
      ##
      ##   $SERVER["socket"] == "10.0.0.1:443" {
      ##     ssl.engine           = "enable"
      ##     ssl.pemfile          = "/etc/ssl/private/www.example.com.pem"
      ##     server.name          = "www.example.com"
      ##
      ##     server.document-root = "/srv/www/vhosts/example.com/www/"
      ##   }
      ##
      ## If you have a .crt and a .key file, cat them together into a
      ## single PEM file:
      ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
      ##   > /etc/ssl/private/lighttpd.pem
      ##
      #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

      ##
      ## optionally pass the CA certificate here.
      ##
      ##
      #ssl.ca-file = ""

      ##
      #######################################################################

      #######################################################################
      ##
      ##  custom includes like vhosts.
      ##
      #include "conf.d/config.conf"
      #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
      ##
      #######################################################################

      #######################################################################
      ### Custom Added by me
      #url.rewrite-once = ( ".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0",
      #                     "" => "/index.php" )
      url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                           "^/js/.*$" => "$0",
                           "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                           "" => "/index.php" )
      # expire.url = ( "" => "access 1 days" )
      include "myvhost-vhosts.conf"
      #######################################################################

    Here is my vhost file for lighttpd:

      $HTTP["host"] =~ "192.168.8.35$" {
        server.document-root     = "/var/www/lighttpd/qc41022012/public"
        server.errorlog          = "/var/log/lighttpd/error.log"
        accesslog.filename       = "/var/log/lighttpd/access.log"
        server.error-handler-404 = "/e404.php"
      }

    and here is my nginx.conf file:

      user nginx;
      worker_processes 5;

      error_log /var/log/nginx/error.log warn;
      pid       /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include       /etc/nginx/mime.types;
          default_type  application/octet-stream;

          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

          access_log /var/log/nginx/testsite/logs/access.log main;

          sendfile on;
          #tcp_nopush on;

          keepalive_timeout 65;

          #gzip on;

          # include /etc/nginx/conf.d/*.conf;

          ## I added this ##
          include /etc/nginx/sites-available/*;
      }

    Here is my NginX vhost file:

      server {
          server_name 192.168.8.91;

          access_log /var/log/nginx/myapps/logs/access.log;
          error_log  /var/log/nginx/myapps/logs/error.log;

          root /var/www/html/myapps/public;

          location / {
              index index.html index.htm index.php;
          }

          location = /favicon.ico {
              return 204;
              access_log    off;
              log_not_found off;
          }

          # location ~ \.php$ {
          #     try_files $uri /index.php;
          #     include /etc/nginx/fastcgi_params;
          #     fastcgi_pass 127.0.0.1:9000;
          #     fastcgi_index index.php;
          #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          #     fastcgi_param SCRIPT_NAME $fastcgi_script_name;

          location ~ \.php.*$ {
              rewrite ^(.*.php)/ $1 last;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              # fastcgi_intercept_errors on;

              # fastcgi_param SCRIPT_FILENAME $document_root/index.php;
              # fastcgi_param PATH_INFO $uri;
              # fastcgi_pass 127.0.0.1:9000;
              # include fastcgi_params;
          }
      }

    We have a custom app that we created that works great with lighttpd. We went through some headaches too when we were trying to figure out how to make it work with lighttpd; this is the line that makes it work there:

      url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1",
                           "^/js/.*$" => "$0",
                           "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
                           "" => "/index.php" )

    but I couldn't figure out how to make it work in NginX. The web server runs just fine when we use the phpinfo.php test file. However, as soon as I point it to my app, nothing comes up. I checked the error.log file and there's no error. Very mind-boggling. I spent over a week trying to figure it out with no luck... Please help?

  • Site to Site VPN with Fault Tolerance

    - by Nordberg
    Hello, I have a situation where I require an IPsec tunnel between two sites. Site 2 is a small branch office with basic (ADSL) connectivity, and Site 1 is the "main" office with SDSL, plus ADSL for redundancy should the SDSL fail. From Site 1, all traffic bound for the 172.0.0.0 network will then be sent down another IPsec tunnel to a supplier's remote server. See this page for the basic premise (this is a rough idea and things can be moved about etc...). I am considering specifying Cisco ASA devices as the firewalls at both sites for all connections. Would it be possible to employ something like HSRP to provide a backup at Site 1 should the SDSL go down? I suppose the key aim here is that Site 2 can somehow fail over and initiate a VPN to the ASA behind the ADSL at Site 1. I will have a /21 subnet mask on all internet connections, so I can play with Class C routing if need be... If I'm barking up the wrong tree with HSRP, is there another way I can achieve this without massive expenditure on Barracuda routers et al? Many thanks.
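
    One option worth knowing about: an ASA (or IOS) crypto map entry can list more than one peer, and the device tries them in order when building the tunnel. A minimal sketch, where SITE1-TRAFFIC, the transform-set name, and both peer addresses are placeholders:

      crypto map OUTSIDE_MAP 10 match address SITE1-TRAFFIC
      ! first peer (SDSL) is preferred; the second (ADSL) is tried if phase 1 fails
      crypto map OUTSIDE_MAP 10 set peer 203.0.113.1 203.0.113.2
      crypto map OUTSIDE_MAP 10 set transform-set ESP-AES-SHA

    That handles Site 2's failover without HSRP; Site 1 still needs both ASAs (or both interfaces) reachable and configured to accept the tunnel.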

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, which was initially sized at 25GB (too small!). After installing the OS and some applications I ran out of disk space, so I shut down the guest and then used this command to resize the VHD to 80GB:

      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
      0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
      UUID:                 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
      Accessible:           yes
      Logical size:         81920 MBytes
      Current size on disk: 24954 MBytes
      Type:                 normal (base)
      Storage format:       VHD
      Format variant:       dynamic default
      In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
      Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd

    So far so good. However, when I started the guest up again I got the dreaded:

      Fatal: No bootable medium found! system halted

    If I boot into GParted, it shows a single 80GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:

      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
      0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
      Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b

      C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
      UUID:                 baf0c2c4-362f-4f6c-846a-37bb1ffc027b
      Accessible:           yes
      Logical size:         81920 MBytes
      Current size on disk: 24798 MBytes
      Type:                 normal (base)
      Storage format:       VDI
      Format variant:       dynamic default
      In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
      Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi

    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.
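
    One avenue worth noting: TestDisk can often rebuild a partition table that a resize has clobbered, and it can operate on a raw image directly. A sketch, assuming you work on a copy converted to RAW first (filenames are placeholders):

      VBoxManage clonehd "Windows 8 RC.vhd" "recovery.img" --format RAW
      testdisk recovery.img
      # in TestDisk: Analyse -> Quick Search (then Deeper Search), then Write

    This only helps if the NTFS partition is intact and merely unreferenced; if TestDisk's deeper search finds nothing either, the filesystem itself was likely damaged by the resize.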

  • Windows Server 2008R2 - can't change or remove the default gateway

    - by disserman
    We've installed VMware Server 2.0 on Windows 2008 R2. After some time playing with it (actually only removing the host-only and NAT networks, and binding adapters to the specified vmnets) we've noticed a strange problem: if you change or remove the default gateway on the network card, the server completely loses network connectivity - you can't ping it from the subnet, and it can't connect to anyone either. When the gateway is removed and the server tries to connect to other machines, I can see some incoming packets using a sniffer, but I believe they are damaged in some way (I'm not a mega-guru in TCP/IP and can't find a mistake in the binary translation of the packet), because the other side doesn't respond.

    What we tried:

    - removed VMware Server using Add/Remove Programs
    - deleted everything related to VMware Server and all installed network adapters in the Windows registry
    - double-checked for the VMware bridged protocol driver file; it's physically absent, with no links left in the registry
    - performed a TCP/IP reset with netsh, and disabled/enabled all network adapters in Device Manager to recreate their registry keys
    - tried another network adapter

    And the situation is the same: as soon as you remove or change the default gateway, Windows stops working. The total absurdity of the situation is that the default gateway points to a non-existent IP. But when it's set, you can ping the server from the subnet; when you remove it, you can't. Any help? I'm starting to think the new build of VMware Server is some kind of malware... :)
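
    One thing that sometimes bites after uninstalling virtual NICs is ghost adapters still owning TCP/IP bindings. They are hidden by default; a standard Windows trick (not VMware-specific) surfaces them:

      set devmgr_show_nonpresent_devices=1
      start devmgmt.msc

    Then enable View > Show hidden devices and uninstall any greyed-out network adapters, before resetting the stack once more (netsh int ip reset, followed by a reboot).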

  • Can ping IP address and nslookup hostname but cannot ping hostname

    - by jao
    On a Windows 2003 server I can nslookup www.google.com, which returns:

      Server:  localhost
      Address:  127.0.0.1

      Non-authoritative answer:
      Name:    www.l.google.com
      Addresses:  74.125.79.104, 74.125.79.147, 74.125.79.99
      Aliases:  www.google.com

    I can then ping 74.125.79.104:

      Pinging 74.125.79.104 with 32 bytes of data:
      Reply from 74.125.79.104: bytes=32 time=16ms TTL=54
      Reply from 74.125.79.104: bytes=32 time=32ms TTL=54
      Reply from 74.125.79.104: bytes=32 time=15ms TTL=54
      Reply from 74.125.79.104: bytes=32 time=15ms TTL=54

      Ping statistics for 74.125.79.104:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 15ms, Maximum = 32ms, Average = 19ms

    But I cannot ping www.google.com:

      Ping request could not find host www.google.com. Please check the name and try again.

    (This one is different from the other question in that this one has a TLD; it is not a local domain.)

    Update: I am running a DNS server on localhost (127.0.0.1). Even when I change it to use, for example, OpenDNS, it can still nslookup the hostname and ping the IP address, but not ping the hostname. So what is wrong?

    Update 2: here is the ipconfig /all result:

      Windows IP Configuration
         Host Name . . . . . . . . . . . . : SERVER
         Primary Dns Suffix  . . . . . . . : NETWORK.local
         Node Type . . . . . . . . . . . . : Unknown
         IP Routing Enabled. . . . . . . . : No
         WINS Proxy Enabled. . . . . . . . : No
         DNS Suffix Search List. . . . . . : NETWORK.local

      Ethernet adapter Local Area Connection:
         Connection-specific DNS Suffix  . :
         Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2
         Physical Address. . . . . . . . . : 00-0F-1F-56-3B-AA
         DHCP Enabled. . . . . . . . . . . : No
         IP Address. . . . . . . . . . . . : 192.168.7.2
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Default Gateway . . . . . . . . . : 192.168.7.1
         DNS Servers . . . . . . . . . . . : 127.0.0.1

    Update 3: Thanks everyone for your help and suggestions; I appreciate it. Ipconfig /flushdns returns:

      Successfully flushed the DNS Resolver Cache.

    Ipconfig /displaydns returns:

      2.7.168.192.in-addr.arpa
      ----------------------------------------
      Record Name . . . . . : 2.7.168.192.in-addr.arpa.
      Record Type . . . . . : 12
      Time To Live  . . . . : 0
      Data Length . . . . . : 4
      Section . . . . . . . : Answer
      PTR Record  . . . . . : webserver.mydomainname.com

      1.0.0.127.in-addr.arpa
      ----------------------------------------
      Record Name . . . . . : 1.0.0.127.in-addr.arpa.
      Record Type . . . . . : 12
      Time To Live  . . . . : 0
      Data Length . . . . . : 4
      Section . . . . . . . : Answer
      PTR Record  . . . . . : localhost

    Update 4: Wireshark shows the following:

      3  11.540542  208.67.220.220  192.168.7.2    DNS   Standard query response A 74.125.79.99 A 74.125.79.104 A 74.125.79.147
      6  42.056794  192.168.7.2     192.168.7.255  NBNS  Name query NB WWW.GOOGLE.COM<00>

    which is weird: when I ping, it sends a packet to 192.168.7.255 instead of asking the DNS server for an address.
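
    That NBNS broadcast suggests the resolver is falling back to NetBIOS, i.e. the DNS Client service never answered. This fits the nslookup/ping split, since nslookup queries DNS directly while ping goes through the system resolver. Two quick checks with standard Windows tools:

      sc query dnscache                      rem is the DNS Client service running?
      net stop dnscache & net start dnscache
      netsh interface ip show dns            rem confirm which resolvers the stack uses

    If dnscache is stopped or broken, nslookup keeps working while every resolver-based lookup (ping, browsers) fails, which is exactly the behavior described above.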

  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:

      Network1 ----VPN1---- Network2 ----VPN2---- Network3

    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2, and I'm hoping that with this I can set something up to resolve the issue. My question is: should I set this up as a Windows or a Linux box? My initial thought was to use iptables to reroute requests, but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:

    - can I get an IP without logging in to a Windows domain?
    - if I do get an IP, do Windows Servers manage routing through the VPN?
    - can I make a Linux box authenticate with Windows Server to log on to the domain?
    - would it just be easier to set up a Windows box?
    - is it possible to configure a Windows box to do routing from Network1 to Network3?

    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice. It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
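
    If the box ends up being Linux, the telnet objective needs only a simple DNAT relay rather than full routing - a sketch with made-up addresses, assuming the Network2 box can already reach the Network3 target:

      # the box must be allowed to forward packets
      sysctl -w net.ipv4.ip_forward=1
      # rewrite telnet connections arriving from Network1 to the Network3 host
      iptables -t nat -A PREROUTING -p tcp --dport 23 -j DNAT --to-destination 10.3.0.5:23
      # masquerade so replies come back through this box
      iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.5 --dport 23 -j MASQUERADE

    Boxes on Network1 then telnet to the Network2 box's address and are transparently relayed to Network3.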

  • Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly

    - by Eyla
    Greetings, I have Windows Server 2008 Server Core. I want to configure this server to host PHP websites using IIS 7. I installed and configured IIS 7 to run PHP using the steps on this website: http://blogs.msdn.com/b/philpenn/archive/2009/07/19/deploying-iis-7-5-fastcgi-php-on-server-core.aspx

    Now I'm facing a problem: when I request my PHP website I get this error:

      Server Error
      500 - Internal server error.
      There is a problem with the resource you are looking for, and it cannot be displayed.

    I checked the event log and found these details too:

      Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis.

    I searched for this error and found a suggested solution, which is to install the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86). I installed it, but I'm still getting the same error. Please help me solve this problem, and let me know if you need more info about my issue.
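
    Worth noting: version 9.0.21022.8 is the original (RTM) VC++ 2008 x86 runtime, while the SP1 redistributable installs 9.0.30729.x, so installing SP1 alone may not satisfy an RTM binding. Two hedged next steps: try the RTM VC++ 2008 Redistributable (x86), and let sxstrace confirm exactly which assembly is missing:

      sxstrace trace -logfile:sxs.etl
      rem ...reproduce the 500 error, then stop the trace...
      sxstrace parse -logfile:sxs.etl -outfile:sxs.txt
      notepad sxs.txt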

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    Hi, I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

      Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

    - Database Engine Services
    - SQL Server Replication
    - Full-Text Search
    - Reporting Services

    How do I know whether this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing a query's locks/transactions in a certain way to fix it. But I am not touching any queries.

    Also, now that I have installed the app, when I log in I keep getting this message:

      TITLE: Connect to Server
      Cannot connect to MSSQLSERVER.
      ADDITIONAL INFORMATION:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
      BUTTONS: OK

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think they are related. Thanks.
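
    A quick way to separate "the install is broken" from "the connection string is wrong" is to check the service state and connect locally with sqlcmd, both standard tools:

      sc query MSSQLSERVER                       rem is the engine service present and RUNNING?
      sqlcmd -S localhost -E -Q "SELECT @@VERSION"

    Note that "Cannot connect to MSSQLSERVER" hints the instance name is being used as the server name; for a default instance the server is just localhost (or .), not MSSQLSERVER.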

  • Exim log and send all mail for a given domain through another server

    - by Josh
    I administer a handful of shared web hosting servers. Recently, Yahoo has been deprioritizing/greylisting all email sent from these servers. I am getting the dreaded

      421 4.7.0 [TS02] Messages from my.ip.address temporarily deferred

    message from Yahoo, and their postmaster has been unresponsive. I am unable to find any way to set up a feedback loop like AOL's for my IP address - I did find a way to set up a feedback loop for a given domain, but we host hundreds of domains and don't have the time to set up that many feedback loops. So what I'd like to do is twofold:

    1. Configure Exim to send all email destined for an @yahoo.com address to a relay - a new server whose IP Yahoo is not blocking.
    2. Configure Exim (or maybe the relay) to log all emails sent to @yahoo.com, so I can review them and, in case one of my users is violating the ToS and sending spam to Yahoo users, take the appropriate action.

    How could I accomplish these? Or does anyone have any other advice for how to get mail flowing through to Yahoo and ensure that any email generating complaints is brought to my attention? (For what it's worth, these servers are not listed on any major blacklists.)
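
    For the first half, Exim's manualroute driver is the usual tool - a sketch to place before the default dnslookup router, with relay.example.com standing in for the unblocked relay:

      yahoo_relay:
        driver = manualroute
        domains = yahoo.com
        transport = remote_smtp
        route_list = * relay.example.com

    For the logging half, the existing mainlog may already be enough: delivery lines record the recipient, so something like grep '=> .*@yahoo.com' /var/log/exim/mainlog gives a reviewable list before reaching for a system filter.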

  • Login box not shown on Ubuntu

    - by Alexandre
    I've installed Ubuntu 10.04 (64-bit) as a guest OS in VirtualBox, using Windows 7 Professional (64-bit) as the host. After the Ubuntu install, I installed Xfce4 (sudo apt-get install xfce4). I logged in using an Xfce session, and when I logged out I couldn't see the login box anymore, only the regular GNOME background of the login screen. Then I restarted the virtual machine, and now I'm not able to see the login box at all, only the GNOME background. Does anyone know how to solve this? Thanks in advance.

    Update: I've tried Xubuntu, which comes with Xfce, and I'm facing the same problem. As a common denominator between the two cases, I now see the problem arises after I've updated the system and then installed curl (via apt-get), zlib (make process), and git (make process). But I'm not sure this can be the cause...

    Update: I isolated the problem. The bad guy is zlib, but I don't have a clue why. In fact, it crashes not only Ubuntu and Xubuntu, but also Debian (yes, I've tested all three). I'll keep this question up in case someone has the same problem, but I think my initial question is not valid anymore.
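
    A plausible mechanism (an assumption, not confirmed in the thread): if zlib was built from source with --prefix=/usr, make install overwrites the distribution's libz, which GDM and much of the desktop stack link against, so the greeter dies on its next start. Keeping self-built libraries out of dpkg-owned paths avoids this:

      ./configure --prefix=/usr/local   # stay out of /usr/lib, which the package manager owns
      make
      sudo make install

    Reinstalling the distro package (sudo apt-get install --reinstall zlib1g) should restore the original library if it was clobbered.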

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed.

    What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network, 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (with return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets destined for those IPs so they end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125). I need to insert whatever rules I come up with into the IPCop ruleset through /etc/rc.local; I know that much, I'm just not sure how I should structure this.

    There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting them to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets I would have the rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA!

    Edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets, so I guess I could alternatively put the rules in there...
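
    Since the mapping is a fixed 1:1 between two /24s, this is what iptables' NETMAP target does, and it belongs in the nat table (so CUSTOMPOSTROUTING) rather than mangle - a sketch, assuming the last octet is preserved across the translation:

      # rewrite 192.168.20.x sources to 10.1.20.x when heading to the orange net
      iptables -t nat -A CUSTOMPOSTROUTING -s 192.168.20.0/24 -d 10.1.20.0/24 \
               -j NETMAP --to 10.1.20.0/24

    Connection tracking un-NATs the replies automatically, so no matching inbound rule is needed for established flows.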

  • Can't access IIS 7 server URL from the same IIS 7 server.

    - by Kevin Raffay
    We have an intranet site, i.e. xxx.yyy.com, that users access by entering http://xxx.yyy.com. Our problems started when we migrated to IIS 7 running on a new 2003 server. We got rid of our single-sign-on code and implemented a security model where we capture a user's domain credentials, which we then authenticate against a DB. In order to get the domain credentials passed to our ASP.NET app, we have the following settings:

      Anonymous Authentication: Disabled
      ASP.NET Impersonation: Enabled
      Basic/Digest/Forms Authentication: Disabled
      Windows Authentication: Enabled

    We allow "*" and deny "?" in the web.config. Browsing http://xxx.yyy.com from any client PC results in a domain login prompt, and if you enter a proper user/password you can get in. However, browsing http://xxx.yyy.com while remoted into the server itself results in 3 domain login prompts and eventually a 401 error - unauthorized.

    We have traced this behavior to problems with our web site, where we have pages doing "screen scraping" using HttpRequest to call a URL on the same server. When doing an HttpRequest from any other client, using a test harness that passes authorized credentials, all is good. So internal HttpRequest calls on the server fail, just like attempts to browse that server's URL from within a remote session. Why would a request to http://xxx.yyy.com made on server xxx.yyy.com fail authentication?
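
    These symptoms (works from remote clients, 401s only when the server calls itself by a name other than its machine name) match Windows' loopback check, which rejects NTLM authentication in exactly that case. The documented workaround (Microsoft KB896861) is either registering the site name under BackConnectionHostNames or, less restrictively, disabling the check - hedged here, since it loosens a hardening measure:

      reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1

    A reboot (or at least an IIS restart) is typically needed for the change to take effect.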

  • Apache error: could not make child process 25105 exit, attempting to continue anyway

    - by Temnovit
    Hello! I have a web server based on Ubuntu Server 9.10 with this software:

    - Apache 2
    - PHP 5.3
    - MySQL 5
    - Python 2.5

    A few of my websites are PHP based; a few use Python/Django through mod_wsgi. For a month or so, every day my Apache server stops responding until I manually restart it. The error logs show:

      [Fri Mar 05 17:06:47 2010] [error] could not make child process 25059 exit, attempting to continue anyway
      [Fri Mar 05 17:06:47 2010] [error] could not make child process 25061 exit, attempting to continue anyway
      [Fri Mar 05 17:06:47 2010] [error] could not make child process 24930 exit, attempting to continue anyway
      [Fri Mar 05 17:06:47 2010] [error] could not make child process 25084 exit, attempting to continue anyway
      [Fri Mar 05 17:06:47 2010] [error] could not make child process 25105 exit, attempting to continue anyway

    and so on. I tried to google this problem, but it seems I can't find a solution. How can I determine the cause of this error, and how do I fix it? Thank you for your help.

    UPDATE: Updating mod_wsgi to version 3.1 didn't solve the problem. Updating PHP to 5.3 also didn't solve it. Here is a list of all installed modules:

      core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status mod_wsgi

    Here's how my virtual host with WSGI looks:

      <VirtualHost *:80>
          ServerName example.net
          DocumentRoot /var/www/example.net

          # wsgi script that serves the whole thing
          WSGIScriptAlias / /var/www/example.net/index.wsgi
          WSGIDaemonProcess example user=wsgideamonuser group=root processes=1 threads=10
          WSGIProcessGroup example

          Alias /static /var/www/example.net/static
          # serving admin files
          Alias /media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media/

          <Location "/static">
              SetHandler None
          </Location>
          <Location "/media">
              SetHandler None
          </Location>

          ErrorLog /var/www/example.net/error.log
      </VirtualHost>

    The error log now contains two types of errors, one followed by the other:

      [error] child process 9486 still did not exit, sending a SIGKILL
      [error] could not make child process 9106 exit, attempting to continue anyway
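
    When children refuse to die during a restart, attaching a debugger to one of the stuck PIDs usually names the culprit - standard diagnostics, assuming gdb (and ideally Apache debug symbols) are installed:

      # grab a backtrace from one of the PIDs shown in the error log
      sudo gdb -batch -ex 'thread apply all bt' -p 25105

    If the backtrace shows the child blocked inside a specific extension (a PHP module or a Python C extension is a frequent suspect in mixed mod_php/mod_wsgi prefork setups), disabling that module one at a time is the next isolation step.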

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM and I have a strange issue. But before explaining the issue, here is my setup: I'm trying to install a VM on my host, which is an Acer 5720 laptop with an Intel T7500 processor. The CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which comes with KVM.

    Now, coming to the issue: I don't get any errors when executing sudo modprobe kvm-intel, so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM on which I install Ubuntu from an *.iso file. When I start the VM it says it is running - no signs of any trouble - and I can see the domain in virsh list. But when I try to connect to the VM through VNC, all I get is a blank screen (no cursor), and there is no response to any key press. I changed the video mode etc. and tried all different combinations, but none work.

    Strangely, if I shut down the VM and virt-manager and then unload the module by doing sudo modprobe -r kvm-intel, everything works fine: I can see the screen via VNC, and I am able to install the OS and so on.

    So what does this mean? Is hardware virtualization not supported after all? How come there is no error anywhere? dmesg | grep kvm doesn't report anything. Can someone throw light on what exactly is happening?
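
    Two checks that usually settle whether hardware virtualization is actually usable (kvm-ok may need installing, depending on the Ubuntu release):

      egrep -c '(vmx|svm)' /proc/cpuinfo   # >0 means the CPU advertises VT-x/AMD-V
      kvm-ok                               # also checks whether the BIOS has it enabled

    A CPU can advertise vmx while the BIOS keeps VT-x disabled, which can produce confusing half-working behavior like this. If kvm-ok complains, look for the VT-x switch in the laptop's BIOS; a full power-off is often needed after enabling it.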

  • Windows 2008 Server cannot access any network share

    - by Ramesh
    Hello friends, I run a Windows 2008 server with SP2. This server acts as a desktop alone. Recently, I switched this system between two networks (corporate and another). Ever since, I have been unable to access any network share on the original network, from which I installed and configured the desktop. The message I get is "Network path was not found". Note that I am able to access the internet and my corporate mail server. I am told this is a Vista- and Windows 2008-specific problem, and I have done everything I could think of:

    a) Deleted the second network's settings from the desktop.
    b) Installed a patch from MS that supposedly took care of this problem (with MS clearly saying they had not tested it enough).
    c) Installed SP2 after the problem occurred, in the hope that SP2 might include something that would fix this.

    Some additional details:

    a) A system admin can log into this system from a remote terminal.
    b) I cannot get into my own system using the hidden share C$ - for instance, \\mymachine\C$ gives me the same message as above: Network path not found.
    c) I can log into my system remotely using mstsc.
    d) I cannot create shares on this system - and by extension, network printers are not detected.

    I have an update for you. The error message is as follows:

      Network Error
      Windows cannot access \\network_share
      Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.

    Clicking Diagnose gives:

      Error Code: 0x80070035
      The network path was not found.

    Any help will be appreciated. Thanks.
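
    Error 0x80070035 with RDP still working usually narrows this down to the SMB client side. A few command-line probes that help localize it (all standard Windows tools):

      sc query lanmanworkstation    rem the Workstation service drives outbound SMB
      sc query mrxsmb               rem the SMB redirector driver
      net use \\mymachine\C$        rem often gives a more specific error than Explorer

    If lanmanworkstation or the mrxsmb drivers fail to start, re-binding "Client for Microsoft Networks" on the adapter, or a winsock reset (netsh winsock reset, then reboot), is a common repair path for exactly this switched-networks scenario.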

  • SAN Replication for Fault tolerance using EVA4400

    - by Sergei
    Hi everyone, I hope someone can point me in the correct direction - it looks like I don't have enough knowledge of the subject, and timeframes are too tight for me to explore different scenarios in depth.

    We have two datacenters a few miles away from each other, connected by a 100 Mbps link. Each datacenter will have 5 BL490 blades with ESX Standard hosting about 50 VMs. Each site has an HP EVA4400 SAN with SAN replication set up. vCenter is going to be in the first datacenter, and both datacenters are networked. SAN replication is block level, so it seems I cannot replicate just the changes - all writes have to be replicated. This should not be a problem, as the link can sustain about 1.8 TB a day and data can be buffered.

    I am having trouble, however, envisioning how recovery would work in this case. We don't need instant recovery - I would say a 4-hour recovery time is acceptable - so a fancy automatic SRM-like DR scenario would not be easily accepted, for financial reasons; however, any comments are welcome.

    The current idea is the following: replicate LUNs from the primary site to the secondary. When disaster strikes, IT personnel switch on the ESX hosts on the remote side, connect the replicated LUNs to them, then register the VMs and change IP addresses. I understand this seems like a horribly manual process, and I am almost sure I have missed some obvious pitfalls here. Could someone let me know what direction I should go? Any articles on the subject? This is a brand new setup, and we would rather build a basic recovery process and scale it later; I just need the right direction to allow for such scalability. Thank you very much in advance!

  • Best practice for removing a DC from a site that no longer connects via VPN in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already, to see if I missed anything. I had two cities connected by WAN, using a persistent IPsec tunnel between gateways. I had one DC (domain controller) in each city, each a global catalog (GC) server. They were set up to replicate, and I had them configured under Sites and Services with their own subnets etc...

    About 6 months ago one city was removed, and I was not able to gracefully demote, through dcpromo, the server that was there. It is no longer used and cannot be brought back; the company went from two sites down to a single site. The problem is that I had a whole bunch of KCC errors and replication bugs in the event viewer, so I wanted to clean up my Active Directory and decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I cleaned up the DNS by removing the host A records and the NS server name from both the local DNS forward lookup zone and _msdcs; I also removed the reverse lookup zone for the subnet that no longer exists.

    Is there anything I missed? Thanks in advance for any help. gd
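
    For reference, the usual metadata cleanup sequence looks like the following, run on the surviving DC (server and site names are placeholders):

      ntdsutil
        metadata cleanup
        connections
          connect to server survivingdc.domain.local
          quit
        select operation target
          list domains / select domain <n>
          list sites / select site <n>
          list servers in site / select server <n>
          quit
        remove selected server

    Two commonly forgotten leftovers beyond DNS are the dead DC's computer account in the Domain Controllers OU and any lingering connection objects; running dcdiag /q and repadmin /replsummary afterwards should confirm the KCC errors are gone.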

  • DNAT to 127.0.0.1 with iptables / Destination access control for transparent SOCKS proxy

    - by cdauth
    I have a server on my local network that acts as a router for the computers in my network. I now want outgoing TCP requests to certain IP addresses to be tunnelled through an SSH connection, without giving the people on my network the ability to use that SSH tunnel to connect to arbitrary hosts.

    The approach I had in mind was to have an instance of redsocks listening on localhost and to redirect all outgoing requests for the IP addresses I want to divert to that redsocks instance. I added the following iptables rule:

      iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Apparently, the Linux kernel considers packets coming from a non-127.0.0.0/8 address to a 127.0.0.0/8 address to be "Martian packets" and drops them. What worked, though, was to have redsocks listen on eth0 instead of lo and then have iptables DNAT the packets to the eth0 address instead (or use a REDIRECT rule). The problem with this is that then every computer on my network can use the redsocks instance to connect to any host on the internet, whereas I want to limit its usage to a certain set of IP addresses only.

    Is there any way to make iptables DNAT packets to 127.0.0.1? Otherwise, does anyone have an idea how I could achieve my goal without opening up the tunnel to everyone?

    Update: I have also tried changing the source of the packets, without any success:

      iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 1.2.3.4 -j SNAT --to-source 127.0.0.1
      iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 127.0.0.1 -j SNAT --to-source 127.0.0.1
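
    For what it's worth, kernels from 3.6 onward expose a per-interface sysctl that legalizes exactly this kind of forwarding to loopback - hedged, since it does not exist on older kernels:

      sysctl -w net.ipv4.conf.eth0.route_localnet=1
      iptables -t nat -A PREROUTING -i eth0 -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    On kernels without route_localnet, one workable fallback is to keep redsocks on eth0 but mark packets in mangle PREROUTING only when they match the allowed destinations, and accept connections to port 12345 in INPUT only when that mark is present, dropping direct connections to the port otherwise.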

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem, and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a RAID 6 array but decided to dismantle it. The disks from the array are now used as non-RAID disks. The superblocks were cleared:

      sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks fails to be recognized when trying to mount it, or rather its single partition:

      sudo mount /dev/sdd1 /mnt/tmp
      mount: special device /dev/sdd1 does not exist

    fdisk claims there is a partition on it:

      sudo fdisk -l /dev/sdd

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xb06f6341

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right - the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

      sudo mdadm --examine /dev/sdd

      /dev/sdd:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : b164e513:c0584be1:3cc53326:48691084
                 Name : pringle:0  (local to host pringle)
        Creation Time : Sat Jun 16 21:37:14 2012
           Raid Level : raid6
         Raid Devices : 6
       Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
           Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
        Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
          Data Offset : 2048 sectors
         Super Offset : 8 sectors
                State : clean
          Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
          Update Time : Sun Aug 12 22:20:39 2012
             Checksum : 4c329db0 - correct
               Events : 1238535
               Layout : left-symmetric
           Chunk Size : 512K
          Device Role : Active device 3
          Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) workaround.
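
    Given the "Super Offset : 8 sectors" above, the v1.2 superblock sits 4 KiB into the device, well before the partition's data area, so zeroing it should not touch the filesystem - but backing that region up first costs nothing (the offsets below are derived from the --examine output, so treat them as assumptions):

      # save the 4 KiB region holding the superblock, then clear it
      sudo dd if=/dev/sdd of=sdd-superblock.bak bs=4096 skip=1 count=1
      sudo mdadm --zero-superblock /dev/sdd
      # ask the kernel to re-read the partition table so /dev/sdd1 reappears
      sudo partprobe /dev/sdd

    If /dev/sdd1 shows up afterwards, a read-only fsck -n /dev/sdd1 will confirm the filesystem survived before mounting it read-write.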

  • ipvsadm lists a few hosts by IP only, rest by name

    - by dmourati
    We use keepalived to manage our Linux Virtual Server (LVS) load balancer. The LVS VIPs are set up to use an FWMARK, as configured in iptables:

      virtual_server fwmark 300000 {
          delay_loop 10
          lb_algo wrr
          lb_kind NAT
          persistence_timeout 180
          protocol TCP

          real_server 10.10.35.31 {
              weight 24
              MISC_CHECK {
                  misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.31"
                  misc_timeout 30
              }
          }
          real_server 10.10.35.32 {
              weight 24
              MISC_CHECK {
                  misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.32"
                  misc_timeout 30
              }
          }
          real_server 10.10.35.33 {
              weight 24
              MISC_CHECK {
                  misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.33"
                  misc_timeout 30
              }
          }
          real_server 10.10.35.34 {
              weight 24
              MISC_CHECK {
                  misc_path "/usr/local/sbin/check_php_wrapper.sh 10.10.35.34"
                  misc_timeout 30
              }
          }
      }

    (Background: http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.fwmark.html)

      [root@lb1 ~]# iptables -L -n -v -t mangle
      Chain PREROUTING (policy ACCEPT 182G packets, 114T bytes)
       190M  167G MARK  tcp -- * * 0.0.0.0/0 w1.x1.y1.4 multiport dports 80,443 MARK set 0x493e0
        62M   58G MARK  tcp -- * * 0.0.0.0/0 w1.x1.y2.4 multiport dports 80,443 MARK set 0x493e0

      [root@lb1 ~]# ipvsadm -L
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port       Forward  Weight  ActiveConn  InActConn
      FWM  300000 wrr persistent 180
        -> 10.10.35.31:0            Masq     24      1           0
        -> dis2.domain.com:0        Masq     24      3           231
        -> 10.10.35.33:0            Masq     24      0           208
        -> 10.10.35.34:0            Masq     24      0           0

    At the time the realservers were set up, there was misconfigured DNS for some hosts in the 10.10.35.0/24 network. We have since fixed the DNS. However, those hosts continue to show up only as IP numbers (10.10.35.31, 10.10.35.33, 10.10.35.34) above, even though reverse lookups now work:

      [root@lb1 ~]# host 10.10.35.31
      31.35.10.10.in-addr.arpa domain name pointer dis1.domain.com.

    The OS is CentOS 6.3, with ipvsadm-1.25-10.el6.x86_64, kernel-2.6.32-71.el6.x86_64 and keepalived-1.2.7-1.el6.x86_64. How can we get ipvsadm -L to list all realservers by their proper hostnames?
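
    Since the host command already resolves the PTR correctly, the stale names are probably coming from a cache or override between ipvsadm and DNS rather than from DNS itself - two hedged things to check:

      getent hosts 10.10.35.31     # what the system resolver (nsswitch order) returns
      service nscd status          # if nscd is running, restart it to flush its hosts cache

    Also check /etc/hosts on the load balancer: entries there take precedence over DNS in the standard resolver path, which is the same path ipvsadm relies on for its listing.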

  • Modem doesn't seem to pass internet connection to router

    - by Vian Esterhuizen
    I was supplied a Cisco DPC3825 modem by my ISP and use a Cisco E4200 as my main router. Today the internet just stopped working, out of the blue - I can't think of anything that would have triggered it. It had been working pretty well for the past month, except for a few random blips here and there. After some troubleshooting, I realized that a wired connection directly to the modem gets me internet access, but if I am wired to the router, as I was before, I have no connection. Assuming the router was at fault, I connected a TP-Link WR841N instead, but it had the exact same problem. Also, connecting to the router via Wi-Fi from multiple devices connects me to the router, but I still can't get access to the internet. From these tests, it seems the modem just won't pass the internet connection through to a router, though it clearly serves it directly to my PC.

    What I've tried:

    - A full reset and factory reset on the E4200.
    - A full reset on the modem (but I believe my ISP has remote access to the modem, because the passwords and so on are always set back).
    - My ISP has remotely reconfigured the modem.

    What else can I try? What should I do to narrow down the issue? Can you figure out what the problem might be based on this information?

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

      [main]
          logdir = /var/log/puppet
          rundir = /var/run/puppet
          vardir = /var/lib/puppet
          ssldir = $vardir/ssl
          pluginsync = true
          environment = production
          report = true
          certname = master

    When I start the puppetmaster process, the ssl directory looks like:

      ssl/private_keys/master.pem
      ssl/crl.pem
      ssl/public_keys/master.pem
      ssl/ca/ca_crl.pem
      ssl/ca/signed/master.pem
      ssl/ca/ca_crt.pem
      ssl/ca/ca_pub.pem
      ssl/ca/ca_key.pem
      ssl/certs/ca.pem
      ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following:

      # puppet agent --test
      info: Retrieving plugin
      err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
      err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
      err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run
      err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry) I get:

      # puppet agent --test --server master
      info: Retrieving plugin
      err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
      info: Caching catalog for master
      info: Applying configuration version '1321805956'
      notice: Finished catalog run in 0.05 seconds

    This is success of a sort, though that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts entries to control which server 'puppet' resolves to (it plays nicer with availability zones, etc.).
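
    The usual way to let one master answer to several names is to bake them into its certificate as subject alternative names and regenerate it - a sketch for Puppet 2.7 (which introduced dns_alt_names), assuming the CA lives on this same box and hostnames are placeholders:

      # in puppet.conf, before regenerating:
      [master]
      dns_alt_names = master,puppet,puppet.example.internal

      # then clean and recreate the master's cert
      puppet cert --clean master
      puppet cert --generate master

    After restarting the puppetmaster, agents can reach it as 'puppet' (via DNS or hosts entries) while the certname stays 'master'.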

  • Apache Reverse Proxy not working inside a VirtualHost running a Mono Web Application

    - by Arwen
    I have a Mono web application running with the virtual host below, on Apache 2.2.20 / Ubuntu 11.10. I tried to add a reverse proxy inside this virtual host so I can make asynchronous or AJAX-type calls back to this same domain; my asynchronous requests would have problems in many browsers when calling services on another domain (the cross-domain request problem). I want to make these reverse-proxy calls through http://www.whatever.com/monkey/, so I added the <Proxy> directive and the <Location /monkey/> block below to try to make this work.

    It is weird, though: nothing I do seems to have any effect. I can put the exact same markup in my default website virtualhost file and it works great. What is the deal? Are some of these Mono directives causing problems?

      <VirtualHost *:80>
          ServerName www.whatever.com
          ServerAlias whatever.com *.whatever.com
          ServerAdmin [email protected]
          DocumentRoot /home/myuser/web/whatever

          ProxyRequests off
          <Proxy *>
              Order allow,deny
              Allow from all
          </Proxy>
          <Location /monkey/>
              ProxyPass http://www.google.com/
              ProxyPassReverse http://www.google.com/
          </Location>

          MonoServerPath www.whatever.com "/usr/bin/mod-mono-server2"
          MonoSetEnv www.whatever.com MONO_IOMAP=all
          MonoApplications www.whatever.com "/:/home/myuser/web/whatever"

          <Location "/">
              Allow from all
              Order allow,deny
              MonoSetServerAlias www.whatever.com
              SetHandler mono
              SetOutputFilter DEFLATE
              SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
          </Location>

          <IfModule mod_deflate.c>
              AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
          </IfModule>
      </VirtualHost>
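
    A plausible culprit (an assumption from reading the config, not a confirmed fix): SetHandler mono inside <Location "/"> applies to every URI, including /monkey/, so Mono claims the request before mod_proxy gets a chance. Apache merges <Location> sections in file order, so explicitly handing /monkey/ back to the default handler in a block placed after the "/" one may help:

      <Location /monkey/>
          SetHandler None
          ProxyPass http://www.google.com/
          ProxyPassReverse http://www.google.com/
      </Location>

    (Placed below the <Location "/"> block so its SetHandler wins the merge.)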

  • Phpmyadmin location for nginx

    - by multiformeinggno
    I installed nginx and phpMyAdmin. I set up a domain with these parameters to test phpMyAdmin:

      server {
          listen 80;
          server_name domain.com;
          root /usr/share/phpmyadmin;
          index index.php;
          fastcgi_index index.php;

          location ~ \.php$ {
              include /etc/nginx/fastcgi.conf;
              fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
      }

    And everything works properly (if I visit the domain I can log in to phpMyAdmin). The problem is that this was just for testing; now I'd like to move phpMyAdmin to my 'default' site, but I can't figure out how to serve it on /phpmyadmin. Here's the config for the 'default' nginx site (where I'd like to put the /phpmyadmin location):

      server {
          server_name blabla;

          access_log /var/log/nginx/$host.access.log;
          error_log /var/log/nginx/error.log;

          root /var/www/default;
          index index.php index.html;

          location / {
              try_files $uri $uri/ index.php;
          }

          location ~ \.php$ {
              include /etc/nginx/fastcgi.conf;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }

          ### NginX Status
          location /nginx_status {
              stub_status on;
              access_log off;
          }

          ### FPM Status
          location ~ ^/(status|ping)$ {
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              access_log off;
          }
      }
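
    One common pattern for serving phpMyAdmin from a subpath is to let root point at /usr/share/ so the /phpmyadmin URI maps onto the install directory - a sketch, assuming the same php5-fpm socket as above:

      location /phpmyadmin {
          root /usr/share/;
          index index.php;

          location ~ ^/phpmyadmin/.+\.php$ {
              root /usr/share/;
              include /etc/nginx/fastcgi.conf;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
      }

    The nested PHP location matters: without it, the server-level location ~ \.php$ would catch /phpmyadmin/index.php and resolve it against /var/www/default instead.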

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide in the Ubuntu docs: https://help.ubuntu.com/community/AWStats

    I set it up according to the instructions, and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to open http://server-ip-address:8080/awstats/awstats.pl, I get:

      Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats.
      Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong. Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

      <VirtualHost *:8080>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/saad/www

          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/saad/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride AuthConfig
              Order allow,deny
              allow from all
          </Directory>

          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>

          Alias /awstatsclasses "/usr/share/awstats/lib/"
          Alias /awstats-icon "/usr/share/awstats/icon/"
          Alias /awstatscss "/usr/share/doc/awstats/examples/css"
          ScriptAlias /awstats/ /usr/lib/cgi-bin/
          Options ExecCGI -MultiViews +SymLinksIfOwnerMatch

          ErrorLog ${APACHE_LOG_DIR}/error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog ${APACHE_LOG_DIR}/access.log combined

          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

      LogFile="/var/log/apache2/access.log"
      SiteDomain="server-name.noip.org"
      HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
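
    One hedged lead: awstats.pl chooses its config from the URL's config= parameter (falling back to a file named after the requesting hostname), so the edited /etc/awstats/awstats.conf may never be consulted when the site is opened by IP address. Two things to try:

      # name the config after the site and reference it explicitly
      sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.server-name.noip.org.conf
      # then browse to:
      #   http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org

    Also note that SiteDomain uses "noip.org" while HostAliases uses "no-ip.org"; if that hyphen difference is not intentional, it is worth aligning the two.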
