Search Results

Search found 16208 results on 649 pages for 'pass community'.


  • Apache 2.4 with PHP-FPM

    - by tubaguy50035
    I'm trying to set up Apache 2.4 with PHP-FPM 5.4 using the new modules in Apache 2.4. This is what I currently have in my virtual host file:

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www
            # Directory permissions
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Require all granted
            </Directory>
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I have PHP-FPM running over Unix sockets, with the sock file at /var/run/php5-fpm.sock. How do I proxy my requests to this sock file? Some sites say to use ProxyPassMatch and others say RewriteRule. Are there pros and cons on either side? Also, most sites show ProxyPassMatch with a regex that passes only .php files. Could I also send it .html files? For whatever reason, we have a ton of PHP inside .html files.

    Edit: As noted in the comments, it looks like mod_proxy_fcgi doesn't support Unix sockets. Is there another module I should be using?
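
    A minimal sketch of the TCP fallback, assuming the PHP-FPM pool is switched to listen = 127.0.0.1:9000 and the docroot is /var/www; the commented line shows the Unix-socket form that only became available later, in Apache 2.4.10:

        ProxyPassMatch "^/(.*\.(php|html))$" "fcgi://127.0.0.1:9000/var/www/$1"
        # Apache 2.4.10+ only, using the original socket:
        # ProxyPassMatch "^/(.*\.(php|html))$" "unix:/var/run/php5-fpm.sock|fcgi://localhost/var/www/$1"

    Note that PHP-FPM's security.limit_extensions defaults to .php, so .html would have to be added there before the pattern above does anything useful for those files.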


  • How to have PHP and mod_wsgi python app on the same domain?

    - by Lazik
    I am using Apache with mod_wsgi (python3) on Ubuntu 12.04. I have a Python app (bottle) at www.mysite.com/ with routes like www.mysite.com/abbb?q=blab. I would like the path www.mysite.com/forum to resolve to a PHP app (Simple Machines Forum). Ideally Apache would handle the forum part and pass it to PHP, instead of my coding it into the Python app; I don't know if that's possible. I'm new to this. I have read https://code.google.com/p/modwsgi/wiki/ConfigurationGuidelines#The_Apache_Alias_Directive but I don't understand how to use it. Here is my Apache conf for the mod_wsgi app; I don't know how to specify the PHP portion:

        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com

            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]

            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
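
    Per the Alias guideline linked above, mod_wsgi lets Alias directives win over a WSGIScriptAlias mounted at /, so carving /forum out for PHP could look roughly like this inside the same vhost (the paths are assumptions, and mod_php is assumed to be installed so the .php files get executed):

        Alias /forum/ /var/www/vhosts/ex/forum/
        <Directory /var/www/vhosts/ex/forum>
            Order allow,deny
            Allow from all
        </Directory>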


  • Single m0n0wall - Two LAN Subnets - How To Setup

    - by SnAzBaZ
    I have two LAN subnets that I need to link together: 192.168.4.0/24 and 192.168.5.0/24. There is a m0n0wall running on 192.168.4.1. Its LAN connection goes out to our network switch, and its WAN port goes out to our ADSL modem; WAN is connected via PPPoE. The 192.168.4.0 subnet contains all of our office workstations. The 192.168.5.0 subnet contains development servers and test machines that need internet access and need to be "managed" by computers on the 192.168.4.0 subnet, but must be on their own subnet as well.

    I have a Draytek 2820N configured on 192.168.5.1, with its WAN2 port configured as 192.168.4.25 and a default gateway of 192.168.4.1. Machines on the 5.0 subnet can connect to the internet via the m0n0wall just fine. I configured a static route on the m0n0wall LAN interface: network 192.168.5.0/24, gateway 192.168.4.25. Machines on the 5.0 subnet can ping machines on the 4.0 network, but the reverse does not work. I configured a new firewall rule on the m0n0wall that allows any traffic on the LAN interface with a source IP of 192.168.4.25. The Draytek firewall is currently configured to pass all traffic regardless.

    When I try to ping a machine in the 5.0 subnet from 4.0, I see this in my m0n0wall log:

        BLOCK 14:45:27.888157 LAN 192.168.4.25 192.168.4.37, type echoreply/0 ICMP

    So the reply is being sent from the 5.0 subnet but is not allowed to reach my workstation, because the firewall is blocking it. Why is the firewall blocking it? I hope the explanation of my network is clear; please ask if you need further clarification. Thank you.
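
    Two hedged observations: the blocked reply arrives with source 192.168.4.25, which suggests the Draytek is NATing on WAN2 and handing the reply to its default gateway rather than straight back to the workstation, and the m0n0wall's state table expects the reply from the 5.0 address, so it drops the packet. Giving the workstations a direct route to the Draytek keeps the whole exchange off the m0n0wall:

        rem on a Windows workstation, run as administrator; -p persists the route
        route -p add 192.168.5.0 mask 255.255.255.0 192.168.4.25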


  • Permission Denied for FTP User

    - by Alasdair
    I have an FTP user whose default directory is /root/ftpuser. This user can log in fine. The user is the owner of the directory and the directory is even set to 777 permissions, but the user can't upload anything. The session displays:

        Status:   Connecting to xx.xxx.xxx.xx:21...
        Status:   Connection established, waiting for welcome message...
        Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
        Response: 220-You are user number 2 of 50 allowed.
        Response: 220-Local time is now 05:12. Server port: 21.
        Response: 220-This is a private system - No anonymous login
        Response: 220-IPv6 connections are also welcome on this server.
        Response: 220 You will be disconnected after 15 minutes of inactivity.
        Command:  USER ftpuser
        Response: 331 User ftpuser OK. Password required
        Command:  PASS *********
        Response: 230 OK. Current restricted directory is /
        Command:  OPTS UTF8 ON
        Response: 200 OK, UTF-8 enabled
        Status:   Connected
        Status:   Starting upload of test.html
        Command:  CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command:  MKD /
        Response: 550 Can't create directory: Permission denied
        Command:  CWD /
        Response: 550 Can't change directory to /: Permission denied
        Command:  SIZE /btn.png
        Response: 550 Can't check for file existence
        Command:  TYPE I
        Response: 200 TYPE is now 8-bit binary
        Command:  PASV
        Response: 227 Entering Passive Mode (66,232,106,33,52,218)
        Command:  STOR /test.html
        Response: 553 Can't open that file: Permission denied
        Error:    Critical file transfer error

    It's a Linux CentOS 6 server. Any ideas?
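
    A hedged guess: the home directory sits under /root, which is normally mode 700, so the FTP user cannot traverse the path even though the leaf directory is 777. One way to confirm, and one way out (the new path is an assumption):

        ls -ld / /root /root/ftpuser        # look for a component the user can't traverse
        usermod -d /home/ftpuser -m ftpuser # relocate the user's home outside /root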


  • dom0 enable IPv6 for guests

    - by user98651
    I am looking at deploying IPv6 to my virtual machines. Right now I have v6 working great on the dom0, using a 6in4 tunnel provided by Hurricane Electric since I have no native v6. However, I would like to distribute some of the /48 I receive to the domUs (a /64 per machine would be ideal, but I am open to suggestions). Static configuration on the domU side is fine. All I want is for the traffic to pass through the dom0 to the domU.

    To say the least, I'm still trying to wrap my head around all the virtual interfaces and bridges Xen creates. Yes, I have Googled around for this a bit and have not found anything great. I tried using two "vif-route6" bash scripts with no luck (possibly due to my ignorance of Xen networking), and I am still stuck, mainly on how to configure the dom0. I would like to imagine this problem is relatively easy to solve, and I look forward to your suggestions and help!

    Edited post to clarify my end goal: getting IPv6 to domU guests. I am completely open to suggestions, but am hoping for something other than setting up a tunnel for every guest.
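
    A rough sketch of one bridged approach, assuming Xen's bridge is named xenbr0 and using a documentation prefix in place of the real HE /48: put one /64 on the bridge, enable forwarding, and advertise the prefix with radvd so the domUs autoconfigure (or configure them statically from the same /64):

        # on the dom0
        sysctl -w net.ipv6.conf.all.forwarding=1
        ip -6 addr add 2001:db8:1234:1::1/64 dev xenbr0

        # /etc/radvd.conf on the dom0
        interface xenbr0 {
            AdvSendAdvert on;
            prefix 2001:db8:1234:1::/64 { };
        };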


  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. The logins are logged to a file, but of course those logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.

    Swatch version 3.2.3, Perl 5.12, FTP: vsftpd, OS (test): OS X 10.6.8, OS (production): Solaris.

    From the man page I see I can pass the matched contents to a command, so I should be able to echo those values to a file and run sed/cut/uniq over them for stats:

        $ man swatch
        (snip)
        exec command
            Execute command. The command may contain variables which are
            substituted with fields from the matched line. A $N will be
            replaced by the Nth field in the line. A $0 or $* will be
            replaced by the entire line.

    My .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launched with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results - it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working:

        ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results - bad! I appear to not be sending the variables to the text file:

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
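
    A hedged observation: per the man excerpt above, it is exec, not pipe, that substitutes $N with fields from the matched line. With pipe, the $0..$5 are expanded by the shell instead, which is exactly why $0 comes back as the generated script's path. One possible rewrite of the .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            exec "echo $0 >> /Users/bdunbar/dev/ftplog/output.txt"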


  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, in a way that minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1) Fire up another large instance
    2) Install all the software layers on that instance
    3) Restore and attach an EBS drive to the instance
    4) Deploy our latest production-ready code on the new instance
    5) Run all tests (including manual testing of the application)
    6) (If tests pass) Put a "Site Under Maintenance" notice on the live site
    7) Back up the EBS volume on the live site
    8) Detach the EBS volume from the new server and replace it with the latest backup
    9) Use ec2-associate-address to move the IP address to the new instance
    10) Sit back and wait for traffic to start flowing through the new instance
    11) Terminate the old instance

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book but does not cover deployment. I also know there are tools that can help with this, like RightScale or enStratus, which I will use when I start using more than one instance.
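
    For step 9, with the classic EC2 API tools the elastic IP move is a single call (the address and instance ID below are placeholders); clients with open connections drop, but new connections land on the new instance within seconds:

        ec2-associate-address 203.0.113.10 -i i-0123abcd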


  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBOSS

    - by larryzhao
    My environment is Ubuntu 12.04, nginx 1.2.0, and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080 and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added 127.0.0.1 app1.mydomain.com and 127.0.0.1 app2.mydomain.com to /etc/hosts, and curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly.

    Then I go to my nginx. I would like nginx to pass visits to www.app1.com on to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;

            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work: curl www.app1.com returns nothing, and if I visit www.app1.com in Safari the HTTP status code is 404. I don't know why; I need help.
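
    Two hedged things to check: proxy_pass names app1.mydomain rather than app1.mydomain.com, and proxy_set_header Host $host forwards www.app1.com upstream, so JBoss's virtual-host matching (keyed on app1.mydomain.com) never fires and answers 404. A variant that pins the upstream Host:

        location / {
            proxy_pass http://app1.mydomain.com:8080/;
            proxy_set_header Host app1.mydomain.com;
            # remaining proxy_* settings as before
        }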


  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to specific IPs only, and I've hit the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without being processed. I want to deny access to all files on the website for every IP but mine. How should I do that? Here's the config I have:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;    ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
                expires 30d;                   ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler {                ## Common front handler
                rewrite / /index.php;
            }

            location ~ .php/ {                 ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*.php)/ $1 last;
            }

            location ~ .php$ {                 ## Execute PHP scripts
                if (!-e $request_filename) {
                    rewrite / /index.php last; ## Catch 404s that try_files miss
                }
                expires off;                   ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;        ## See /etc/nginx/fastcgi_params
            }
        }
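
    A hedged reading: the allow/deny pair lives only inside location /, while /index.php is matched by the ~ .php$ block, which carries no access rules at all. Since allow/deny are inherited from the enclosing level, hoisting them to the server block covers every location in one place:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;
            allow my.public.ip;
            deny all;
            # ... locations as before ...
        }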


  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much as in the configuration examples for Radicale).

    I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all
            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query; it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
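
    One hedged debugging step: rule out a hash-format mismatch by generating a traditional DES crypt() hash for a known password and comparing it byte for byte with what the column actually stores (trailing whitespace or a different hashing scheme in the column is a common trip-up):

        openssl passwd -crypt -salt ab secret
        # then compare against:
        #   SELECT CONCAT('[', crypt, ']') FROM myAuthTable WHERE id = 'someuser';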


  • SSLVerifyClient optional with location-based exceptions

    - by Ian Dunn
    I have a site that requires authentication in order to access certain directories, but not others. (The "directories" are really just rewrite rules that all pass through /index.php.) In order to authenticate, the user can either log in with a standard username/password or submit a client-side x509 certificate. So Apache's vhost conf looks something like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient none
        SSLVerifyDepth 1

        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
        </LocationMatch>

    That works fine, but then large file uploads fail because of the behavior documented in bug 12355. The workaround for that is to set SSLVerifyClient require (or optional) as the default, so now the conf looks like this:

        SSLCACertificateFile /etc/pki/CA/certs/redacted-ca.crt
        SSLOptions +ExportCertData +StdEnvVars
        SSLVerifyClient optional
        SSLVerifyDepth 1

        <LocationMatch "/(bar-one|bar-two|bar-three)">
            SSLVerifyClient none
        </LocationMatch>

    That fixes the upload problem, but the SSLVerifyClient none doesn't work for bar-one, bar-two, etc.: those directories still prompt for a certificate. Additionally, I need the root URL to be accessible without the user being prompted for a certificate, and I'm afraid that would cancel out the workaround.
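
    A hedged alternative: keep the server-wide default at none (so / and the bar-* paths stay prompt-free) and instead absorb the request body during the per-location renegotiation with mod_ssl's renegotiation buffer, which is the other commonly cited workaround for bug 12355 (the 10 MB figure is an arbitrary cap to match the expected upload size):

        SSLVerifyClient none
        <LocationMatch "/(foo-one|foo-two|foo-three)">
            SSLVerifyClient optional
            SSLRenegBufferSize 10486000
        </LocationMatch>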


  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    At a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say the firewall has no idle connection timeout, but the fact is that idle connections get broken.

    To get around this, we first configured the server (a Linux machine) with TCP keepalives turned on, with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more. However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was alive the server would probe every 300s (5 minutes), the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection.

    To our surprise, the idle-but-alive connections get killed after about 40 minutes, as before. Wireshark running on the client side shows no keepalives at all between the server and client, even though keepalives are enabled on the server. What could be happening here? With time=300, intvl=180, probes=10 on the server, I would expect that if the client is alive but idle, the server sends a keepalive probe every 300 seconds and leaves the connection alone, and if the client is dead, it sends one probe after 300 seconds, then 9 more every 180 seconds before killing the connection. Am I right?

    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection, so we think it affects all TCP connections.
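
    One hedged detail to verify first: the kernel only emits keepalives on sockets where the application has set SO_KEEPALIVE, so the sysctl values may simply never apply to this listener's connections, which would also explain the empty Wireshark capture. On the Linux side this can be checked without a capture:

        ss -to state established '( sport = :1025 )'
        # timer:(keepalive,...) in the output means the keepalive clock is running;
        # no keepalive timer means the application never enabled SO_KEEPALIVE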


  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it.

    Followup question: in general, I'm not sure how to map between ssh command-line options and config names, short of doing Google searches or manually comparing man pages case by case. Is there a table that directly equates the two?

    Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than up, I think scp saturates the link, so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to put this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home

    vs.

        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
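
    To the first question, hedged: ssh_config has no bandwidth-cap keyword; the -l limit is implemented inside scp itself rather than in the ssh transport, so there is nothing to name. One workaround is a per-host ProxyCommand that throttles the outbound stream through pv (both pv and nc must be installed; 1000 kbit/s is roughly 125 kB/s, and only the upload direction is shaped):

        Host titan-upload
            HostName titan
            ProxyCommand pv -q -L 125k | nc %h %p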


  • Wired to wireless bridge in Linux

    - by adrianmcmenamin
    I am attempting to set up my Raspberry Pi as a wired-to-wireless bridge (but I think this question is not specific to the hardware), using Debian wheezy. I have a hostapd.conf (some details changed for security):

        interface=wlan0
        bridge=br0
        driver=nl80211
        auth_algs=1
        macaddr_acl=0
        ignore_broadcast_ssid=0
        logger_syslog=-1
        logger_syslog_level=0
        hw_mode=g
        ssid=MY_SSID
        channel=11
        wep_default_key=0
        wep_key0=MY_KEY
        wpa=0

    (Yes, I know WEP is no good.) And this in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        iface eth0 inet dhcp

        allow-hotplug wlan0
        iface wlan0 inet manual
            wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
        iface default inet dhcp

        auto br0
        iface br0 inet dhcp
            bridge-ports eth0 wlan0

    Everything seems to come up OK, but I cannot associate with the bridged wireless connection, even though the flashing lights on the USB stick suggest packets are being exchanged. I have read somewhere that not all cards/devices will run in hostap mode - they won't pass packets in one direction: is that right? (The info was a bit old.) This is my card:

        [    3.663245] usb 1-1.3.1: new high-speed USB device number 5 using dwc_otg
        [    3.794187] usb 1-1.3.1: New USB device found, idVendor=0cf3, idProduct=9271
        [    3.804321] usb 1-1.3.1: New USB device strings: Mfr=16, Product=32, SerialNumber=48
        [    3.816994] usb 1-1.3.1: Product: USB2.0 WLAN
        [    3.823790] usb 1-1.3.1: Manufacturer: ATHEROS
        [    3.830645] usb 1-1.3.1: SerialNumber: 12345

    So, what have I got wrong here?
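
    Two hedged observations: the wpa-roam stanza starts wpa_supplicant on wlan0, which fights hostapd for control of the same interface, and since hostapd.conf already carries bridge=br0, hostapd enslaves wlan0 into the bridge itself, so wlan0 should not also be listed in the bridge ports by hand. A stripped-down interfaces file along those lines:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual

        allow-hotplug wlan0
        iface wlan0 inet manual

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0

    For what it's worth, the AR9271 (ath9k_htc driver) is generally reported to support AP mode, so the one-direction hostap limitation mentioned above shouldn't apply here.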


  • Persistent PuTTY sessions for multiple windows

    - by Tgr
    I'm working in various Linux environments through PuTTY connections which break from time to time. I'm looking for a way to make the PuTTY windows persist (e.g. if I was editing a file, then after reconnecting I should be back in the same editor with the same file open at the same place), with the following requirements:

    1) It shouldn't require any manual setup at the beginning of the session or after reconnection (I don't want to type in screen or anything like that).
    2) I have several windows open to the same machine with the same user, and they tend to disconnect at the same time.
    3) The number/role of windows is not constant (it's not like I have an mc window, a mysql window and a "script runner" window; sometimes I use one window for search or for SVN commands, other times I need several at the same time).
    4) Sometimes I need to change the properties of the windows for a task (a large window for grepping/editing, small windows because I need to see two of them at the same time, a red background because I am modifying the live database in MySQL, etc.), so I need to get the same console back in the same window after a reconnect.

    Is there a way to achieve this? I suppose I should use screen or something equivalent, but how does it know which window I am reconnecting from? Is there some way to pass a unique window identifier to the shell from PuTTY?
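
    To the last question, a hedged suggestion: PuTTY can't inject a variable into the remote shell, but each saved session can carry its own remote command (Connection -> SSH -> "Remote command"), which effectively names the window. With GNU screen, -dR reattaches the named session if it exists and creates it otherwise, so reconnecting drops straight back into the same console with no typing:

        # Remote command for the saved session used as the editor window:
        screen -dR editor

        # Remote command for the saved session used as the mysql window:
        screen -dR mysql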


  • OpenBSD pf - implementing the equivalent of an iptables DNAT

    - by chutz
    The IP address of an internal service is going to change. We have an OpenBSD access point (ssh + authpf rules) where clients connect and open a connection to the internal IP. To give us more time to reconfigure all clients to use the new IP address, I thought we could implement the equivalent of a DNAT on the authpf box. Basically, I want to write a rule similar to this iptables rule, which lets me ping both $OLD_IP and $NEW_IP:

        iptables -t nat -A OUTPUT -d $OLD_IP -j DNAT --to-dest $NEW_IP

    Our version of OpenBSD is 4.7, but we can upgrade if necessary. If this DNAT is not possible, we can probably do a NAT on a firewall along the way. The closest I was able to get on a test box is:

        pass out on em1 inet proto icmp from any to 10.68.31.99 nat-to 10.68.31.247

    Unfortunately, pfctl -s state tells me that nat-to translates the source IP, while I need to translate the destination:

        $ sudo pfctl -s state
        all icmp 10.68.31.247:7263 (10.68.30.199:13437) -> 10.68.31.99:8       0:0

    I also found lots of mentions of rules that start with rdr and include the -> symbol to express the translation, but it looks like this syntax was obsoleted in 4.7 and I cannot get anything similar to work. Attempts to implement a rdr fail with the complaint:

        /etc/pf.conf:20: rdr-to can only be used inbound
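
    The error message is the hint, hedged: in the post-4.7 grammar, rdr-to (the destination translation) is only valid on inbound rules, and the client traffic in this setup does arrive inbound on the authpf box before being forwarded. Something along these lines (interface name is an assumption):

        pass in on em0 inet proto tcp from any to $OLD_IP rdr-to $NEW_IP

    Pings originating from the box itself won't match an inbound rule, so testing is best done from a client behind the access point.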


  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard disk of a running Hyper-V VM. Creating a differential disk while a server is shut down is no problem (i.e., call the New-VHD cmdlet with a ParentPath, then update the VHD binding of the respective VM device). But while the host is running, all I can find is checkpointing the VM as a whole, which snapshots all attached disks and leaves the VM state in a form that isn't easily processed by external tools (i.e., it requires reading additional metadata from the VM).

    What I'd like to happen for a single-disk snapshot (in my understanding) is:

    1) Pause the VM
    2) Rename the current disk to some other name that marks it as a base snapshot
    3) Create a new VHD which has the renamed VHD as parent path and is marked as "current"
    4) Swap the VHD of the snapshotted hard disk to the newly created differential VHD
    5) Resume the VM

    Is there any means to do this programmatically?

    Update: I've seen that this is actually possible with SCSI disks: pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And the VM resumes properly. But: is something similar also possible with generation 1 (G1) machines for the boot disk, which is always IDE?
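
    A sketch of the SCSI flow from the update, scripted with the Hyper-V PowerShell module; names, paths, and the controller position are assumptions, and a merely paused VM may still hold the VHDX open, which is why the disk is detached before the rename:

        Suspend-VM -Name vm1
        Remove-VMHardDiskDrive -VMName vm1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
        Rename-Item C:\VHDs\data.vhdx data-base.vhdx
        # new differencing disk on top of the renamed base
        New-VHD -Path C:\VHDs\data.vhdx -ParentPath C:\VHDs\data-base.vhdx -Differencing
        Add-VMHardDiskDrive -VMName vm1 -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -Path C:\VHDs\data.vhdx
        Resume-VM -Name vm1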


  • Best Asp.net Hosting

    - by dotnetguts
    There are many asp.net web hosting companies which spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Could everyone please share their experience with past hosting companies and suggest a good asp.net hosting company? Please consider the following requirements:

    1) Asp.net 3.5 or 4.0 supported
    2) Url Rewriter support
    3) GZip support (dynamic, through code)
    4) Initial setup support (if required)
    5) SQL Server 2005 or 2008
    6) Access to the SQL Server DB using SQL Mgmt Studio
    7) An environment supporting backup and restore of the DB on my own, without involving the tech support team
    8) Full Text Search support
    9) FTP support
    10) The ability to send at least 500 emails daily
    11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not always true)
    12) An alert email when they do maintenance or during downtime
    13) A reasonable hosting price

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?


  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So any "static file" that exists is served by nginx; otherwise the request is passed off to Apache. Right now, static files are working correctly. However, if something is passed to Apache for example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
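
    A hedged guess at the Apache side: if the vhosts are still declared for port 80 and Apache was only re-pointed at 8080 via Listen, name-based matching never engages and the default test page wins. The vhost declarations would have to follow the new port (Apache 2.2 style shown):

        NameVirtualHost *:8080

        <VirtualHost *:8080>
            ServerName example.com
            ServerAlias subdomain.example.com
            # DocumentRoot etc. as before
        </VirtualHost>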


  • TLS_REQCERT and PHP with LDAPS

    - by John
    Problem: secure LDAP queries via the command line and PHP to an AD domain controller with a self-signed certificate.

    Background: I am working on a project where I need to enable LDAP lookups from a PHP web application to an MS AD domain controller that uses a self-signed certificate. This self-signed certificate also uses a domain name that is not an FQDN - think of something like people.campus as the domain name. The web application takes the user's credentials and passes them on to the AD domain controller to verify whether the credentials match. This seems simple, but I am having problems getting PHP and the self-signed certificate to work together.

    Some people have suggested changing the TLS_REQCERT setting from its default of "demand" to "never" in the OpenLDAP configuration. I am concerned that this might have larger implications, such as a man-in-the-middle attack, and I am not comfortable changing this setting to never.

    I have also read in some places that one can take a certificate and add it as a trusted source in the OpenLDAP configuration file. I am curious whether that is something I could do in my situation. Can I, from the command line, obtain the self-signed certificate that the AD domain controller is using, save it to a file, and then have OpenLDAP use that file for the trust it needs, so that I do not need to loosen TLS_REQCERT? I do not have access to the AD domain controller and as a result cannot export the certificate. If there is a way to obtain the certificate from the command line, what commands do I need to use? Is there an alternate method of handling this issue that would be better in the long run? I have some CentOS servers and some Ubuntu servers that I am working with to try to get this going.

    Thanks in advance for your help and ideas.
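
    To the command-line question, hedged: the certificate can be pulled straight off the LDAPS port with openssl and dropped into a file that ldap.conf's TLS_CACERT points at, which keeps TLS_REQCERT at its strict default (hostname and paths below are placeholders; if the certificate's CN doesn't match the name being connected to, the hostname check may still fail and need separate handling):

        echo | openssl s_client -connect dc.people.campus:636 -showcerts 2>/dev/null \
            | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > /etc/openldap/cacerts/ad-dc.pem

        # /etc/openldap/ldap.conf (the path differs on Ubuntu):
        #   TLS_CACERT /etc/openldap/cacerts/ad-dc.pem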


  • Git clone/pull across local network

    - by Tom Sarduy
    I'm trying to clone/pull a repository on another PC using Ubuntu Quantal. I have done this on Windows before, but I don't know what the problem is on Ubuntu. I tried these:

        git clone file:////pc-name/repo/repository.git
        git clone file:////192.168.100.18/repo/repository.git
        git clone file:////user:pass@pc-name/repo/repository.git
        git clone smb://c-pc/repo/repository.git
        git clone //192.168.100.18/repo/repository.git

    I always got:

        Cloning into 'intranet'...
        fatal: '//c-pc/repo/repository.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    or:

        fatal: repository '//192.168.100.18/repo/repository.git' does not exist

    More details: the other PC has a username and password; it's not a networking issue, I can access and ping it; I just installed git with apt-get install git (dependencies installed); and I'm running git from the terminal (I'm not using git-shell). What is causing this and how do I fix it? Any help would be great!

    UPDATE: I have cloned the repo on Windows using git clone //192.168.100.18/repo/intranet.git without problems. So the repo is accessible and exists! Maybe the problem is due to user credentials?
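
    A hedged explanation for the Windows/Ubuntu difference: on Windows, //host/share is a UNC path the OS resolves natively, while on Linux git's file:// transport only walks the local filesystem and speaks no SMB. One workaround is to mount the share with CIFS first (share name and credentials assumed):

        sudo mkdir -p /mnt/repo
        sudo mount -t cifs //192.168.100.18/repo /mnt/repo -o username=youruser
        git clone /mnt/repo/repository.git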


  • Discrepancy in file size on disk and ls output

    - by smokinguns
    I have a script that checks for gzipped files larger than 1MB and outputs them along with their sizes as a report. This is the code:

        myReport=`ls -ltrh "$somePath" | egrep '\.gz$' | awk '{print $9,"=>",$5}'`

        # Count files that exceed 1MB
        oversizeFiles=`find "$somePath" -maxdepth 1 -size +1M -iname "*.gz" -print0 | xargs -0 ls -lh | wc -l`

        if [ $oversizeFiles -eq 0 ];then
            status="PASS"
        else
            status="CHECK FAILED. FOUND FILES GREATER THAN 1MB"
        fi

        echo -e $status"\n"$myReport

    The problem is that the ls command reports the file sizes as 1.0MB in the report, but the status is "FAIL" because the $oversizeFiles variable's value is 2. I checked the file sizes on disk and 2 files are 1.1MB. Why this discrepancy? How should I modify the script to generate an accurate report? BTW, I'm on a Mac. Here is what the man page for find says on my Mac OS X:

        -size n[ckMGTP]
            True if the file's size, rounded up, in 512-byte blocks is n. If n
            is followed by a c, then the primary is true if the file's size is
            n bytes (characters). Similarly, if n is followed by a scale
            indicator, then the file's size is compared to n scaled as:

            k    kilobytes (1024 bytes)
            M    megabytes (1024 kilobytes)
            G    gigabytes (1024 megabytes)
            T    terabytes (1024 gigabytes)
            P    petabytes (1024 terabytes)
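
    A hedged reading of the man excerpt: find's -size +1M rounds the size up to whole mebibytes before comparing, while ls -h prints its own rounded figure, so the two disagree for files near the 1MB boundary. Doing both the count and the report in exact bytes sidesteps the rounding (BSD stat, as on a Mac):

        # strictly more than 1 MiB, counted in bytes
        oversizeFiles=`find "$somePath" -maxdepth 1 -iname "*.gz" -size +1048576c | wc -l`

        # report exact sizes instead of ls's rounded column
        myReport=`find "$somePath" -maxdepth 1 -iname "*.gz" -exec stat -f '%N => %z bytes' {} \;`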


  • How to automate downloading files?

    - by Damon
    I got a book which came with a pass to access digital versions of hi-res scans of much of the artwork in the book. Amazing! Unfortunately, the presentation of all of these is 177 pages of 8 images each, with links to zip files of jpgs. It is extremely tedious to browse, and I would love to get all the files at once rather than sitting and clicking through each one separately.

    The pages run from archive_bookname/index.1.htm to archive_bookname/index.177.htm, and each page has 8 links to files such as <snip>/downloads/_Q6Q9265.jpg.zip, <snip>/downloads/_Q6Q7069.jpg.zip, and <snip>/downloads/_Q6Q5354.jpg.zip, which don't quite go in order. I cannot get a directory listing of the parent /downloads/ folder. Also, the files are behind a login wall, so using a non-browser tool might be difficult without knowing how to recreate the session info.

    I've looked into wget a little, but I'm pretty confused and have no idea if it will help me with this. Any advice on how to tackle this? Can wget do this for me automatically?
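
    wget can do this, hedged on one condition: the login wall has to be bridged by exporting the browser's session cookies to a cookies.txt file (several browser extensions do this). A possible starting point that walks each index page one level deep and keeps only the zips (the host part of the URL is a placeholder):

        for i in $(seq 1 177); do
            wget --load-cookies cookies.txt -r -l 1 -nd -A '*.jpg.zip' \
                "http://example.com/archive_bookname/index.$i.htm"
        done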


  • A little guidance setting up FTP server authentication on Windows Server 2008 R2 standard?

    - by Ropstah
    I have a (clean) server running Windows Server 2008 R2 Standard. I would just like to use it for serving a website and an FTP server through IIS. IIS is installed and serves my website properly. I have now added an FTP site, but when I try to log on using my user/pass I get the following error:

        530 User cannot login

    From this article (http://support.microsoft.com/kb/200475) I understand that these four causes can be pointed out:

    1) The "Allow only anonymous connections" security setting has been turned on in the Microsoft Management Console (MMC). - Not the case.
    2) The username does not have the "Log on locally" permission in User Manager. - The user is in the Users group; however, I'm not able to log on through RDP. I tried configuring this by following this article through GPMC, but that only works when I'm logged in as a domain user on a domain controller, which I'm not: I'm logged in as administrator.
    3) The username does not have the "Access this computer from the network" permission in User Manager. - Not sure what this implies...?
    4) The domain name was not specified together with the username (in the form DOMAIN\username). - Tried adding the server name (server\username); not working...

    I am an absolute server noob and I'd just like to be able to connect through FTP... Any guidance is highly appreciated!
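
    A hedged note: that KB article targets the old IIS FTP service, while the FTP server in IIS 7.5 also requires Basic Authentication to be enabled on the FTP site and an FTP authorization rule that allows the user; without the rule, logons can fail even with correct credentials. The rule can be added in IIS Manager (FTP Authorization Rules) or with appcmd - the exact section syntax below is from memory and should be treated as an assumption to verify:

        appcmd set config "My FTP Site" /section:system.ftpServer/security/authorization /+"[accessType='Allow',users='ftpuser',permissions='Read, Write']" /commit:apphost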


  • Variable TTL inside a LAN

    - by user140783
    I recently discovered that pinging my local router returns different TTL values?? The ping must pass through 3 switches before reaching the router; could the problem be there? 192.168.1.99 is the IP of my router, a Cisco WRT120N. Thank you!

    (The output below is from a Spanish Windows console: "Respuesta desde" = "Reply from", "tiempo" = "time".)

        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=190
        Respuesta desde 192.168.1.99: bytes=32 tiempo=29ms TTL=3
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=117
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=131
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=111
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=240
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=51
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=190
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=66

    Traceroute:

        G:\Documents and Settings\Administrador>tracert 192.168.1.99

        Traza a la dirección maxi2011 [192.168.1.99]
        sobre un máximo de 30 saltos:

          1    <1 ms    <1 ms    <1 ms  maxi2011 [192.168.1.99]

        Traza completa.

        G:\Documents and Settings\Administrador>ping 192.168.1.99

        Haciendo ping a 192.168.1.99 con 32 bytes de datos:

        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=190
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=190
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=117
        Respuesta desde 192.168.1.99: bytes=32 tiempo<1m TTL=117

        Estadísticas de ping para 192.168.1.99:
            Paquetes: enviados = 4, recibidos = 4, perdidos = 0 (0% perdidos),
        Tiempos aproximados de ida y vuelta en milisegundos:
            Mínimo = 0ms, Máximo = 0ms, Media = 0ms

