Search Results

Search found 6882 results on 276 pages for 'ftp proxy'.


  • Funnelling http traffic

    - by spencer p
    I have a situation where a large batch of servers (X), on demand, need to request data from a smaller set of web servers (Y). The worst-case scenario is if all servers in X decide to send different requests to one server in Y: that would be X connections, which could be a very large burst of traffic. The best-case scenario is if one server in X hits one server in Y at a time, but life does not work like this. One idea to entertain is placing a proxy, similar to Squid, between X and Y. All of the X servers would connect to this proxy, but it would result in only a few persistent (HTTP keepalive) connections to Y. If the few were, say, 3 or 4, then it would funnel. If we could then rate-limit those connections, and traffic spiked unusually high, we wouldn't hurt anyone but ourselves. Thoughts?
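
    A rough sketch of the funnelling idea, using Python's urllib3 purely for illustration: a small, blocking connection pool keeps a handful of keepalive connections to one Y server, and any extra callers wait for a free slot, which is a crude form of rate limiting. The host name, port and paths are placeholders, not anything from the setup described above.

        import urllib3

        # At most 3 keepalive connections to one Y server; block=True makes
        # additional requests wait for a free connection instead of opening more.
        pool = urllib3.HTTPConnectionPool('y1.internal.example', port=80,
                                          maxsize=3, block=True)

        def fetch(path):
            # Every caller in X funnels through the same few persistent connections.
            return pool.request('GET', path).data

        if __name__ == '__main__':
            for i in range(20):
                fetch('/data/%d' % i)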

    Read the article

  • Visual Studio 2010 ClickOnce prerequisites from same location

    - by muhan
    I'm using ClickOnce publishing and want to require the .NET 3.5 Framework and others as prerequisites. I have selected the option to download the prerequisites from the same folder as my app, and I've also placed the .NET 3.5 redistributable exe in the folder on the server where the app will be published. I publish by FTP over the internet to the server where the users are. However, VS will not let me publish, saying it can't find the prerequisites on disk. Does this mean I have to have the prerequisites installed somewhere on my developer machine, and that those files will all be uploaded by FTP to the server every time I publish a new version? That would be a huge amount of data to upload over my slow DSL upload link. Any insight?

    Read the article

  • problems with url and email regex when searching text

    - by Grant Collins
    Hi, I'm having problems with regular expressions that I got from regexlib. I am trying to do a preg_replace() on some text and want to replace/remove email addresses and URLs (http/https/ftp). The code that I have is:

        $sanitiseRegex = array(
            'email' => '/^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$/',
            'http' => '/^(http|https|ftp)\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(:[a-zA-Z0-9]*)?/?([a-zA-Z0-9\-\._\?\,\'/\\\+&%\$#\=~])*$/',
        );
        $replace = array(
            'xxxxx',
            'xxxxx'
        );
        $sanitisedText = preg_replace($sanitiseRegex, $replace, $text);

    However, I am getting the error "Unknown modifier '/'" and $sanitisedText is null. Can anyone see the problem with what I am doing, or why the regex is failing? Thanks
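
    For comparison, the same sanitising idea sketched in Python, where patterns carry no surrounding delimiters, so an unescaped / (the usual cause of PHP's "Unknown modifier '/'" error) is harmless. The patterns below are deliberately simplified illustrations, not the ones from the question:

        import re

        EMAIL_RE = re.compile(r'[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}')
        URL_RE = re.compile(r'\b(?:https?|ftp)://\S+', re.IGNORECASE)

        def sanitise(text):
            # Replace email addresses and URLs with a fixed placeholder.
            text = EMAIL_RE.sub('xxxxx', text)
            return URL_RE.sub('xxxxx', text)

        print(sanitise('mail me at joe@example.com or grab ftp://ftp.example.org/file'))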

    Read the article

  • DatabaseName.bak File Transfer Problem

    - by Jordon
    I downloaded a databasename.bak file from my hosting company. When I tried to restore that DB file in SQL Server 2008 it kept giving me the following error: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) This matches the error described at http://www.sqlcoffee.com/Troubleshooting047.htm. When I later restored the file directly on the server, it restored correctly; but when I transferred the same file with the FTP software FileZilla and tried to restore the downloaded copy, it gave the above error. That is, the file is getting corrupted during the FTP transfer. Any idea how I can keep the file from getting corrupted? Note: the file was downloaded using FileZilla with TransferType set to Binary. Thank you.
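
    To help rule the FTP client in or out, here is a minimal sketch of pulling the file in a guaranteed-binary transfer with Python's ftplib; the host and credentials are placeholders. Any transfer that falls back to ASCII/text mode will mangle a .bak file, which is a common cause of this kind of RESTORE error.

        from ftplib import FTP

        ftp = FTP('ftp.example-host.com')    # placeholder host
        ftp.login('username', 'password')    # placeholder credentials
        with open('go4sharepoint_1384_8481.bak', 'wb') as out:
            # retrbinary issues "TYPE I" (binary) before RETR, so no
            # newline translation can corrupt the backup on the way down.
            ftp.retrbinary('RETR go4sharepoint_1384_8481.bak', out.write)
        ftp.quit()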

    Read the article

  • Can't access websites

    - by LiveEn
    Recently I have been having problems accessing websites. When I try to access one, it says "This webpage is not available". I tried accessing the sites through Firefox, Internet Explorer and Chrome. I also tried using a web proxy, but still the same problem. The problem is only on my desktop PC; all the websites work fine on my laptop. Currently I can't access yahoo.com, download.com, bing.com, proxy.org, daniweb.com, aol.com and many forums. I checked the hosts file but nothing is blocked in it. Can someone please suggest what is wrong? Thanks

    Read the article

  • Best practice for web server user/group permissions

    - by Poe
    What's the best practice for setting up users/groups and permissions in a secure manner? Here's what we currently have: the web server runs as www/www, FastCGI PHP runs as www/www, and the user's shell/FTP account is username/username. We want the user to have full access to all files, including those created by the web server 'www', from the shell or FTP. Similarly, we want the scripts run by FastCGI/PHP to be able to create files in user-created directories and modify user-created files.

    Read the article

  • Remove Windows 7's limitation on number of concurrent tcp connections (http web requests)

    - by Ghita
    I have an application that tries to open as many HTTP requests as possible (in order to stress-test a proxy implementation). It seems to me that Win7 (SP1) may have a limitation on the number of concurrently open connections (it may be the so-called half-open state, if I'm not wrong). Is there something I can do on the client? I also test using a Vista PC that acts as the proxy server. It would be great if I could configure it to sustain at least 50 new connections initiated per second on the client side, and many more on the server. I made the modification according to this TechNet article, setting TcpNumConnections = 150, but it doesn't make a difference; I still only see about 20 TCP sockets associated with my HTTP client when using TcpView.
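
    As a quick way to see where the cap actually bites, here is a small probe, sketched in Python with a placeholder target, that keeps opening TCP connections to the proxy until the OS pushes back; running it on the Win7 client shows where connections start to slow down or fail.

        import socket

        HOST, PORT = '192.168.0.10', 8080   # placeholder: the proxy under test

        socks = []
        try:
            for i in range(500):
                # Each call opens a fresh TCP connection and keeps it open.
                socks.append(socket.create_connection((HOST, PORT), timeout=5))
        except OSError as exc:
            print('stopped after %d connections: %s' % (len(socks), exc))
        else:
            print('opened all %d connections' % len(socks))
        finally:
            for s in socks:
                s.close()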

    Read the article

  • Can't access my web server inside a network with the firewall on

    - by ianenri
    I set up a web server with the following: the Internet router is configured to port-forward port 80 to my computer, at the PC's IP 192.168.1.128. My PC is connected to that wireless router via wlan0. My PC is also connected to my device (which is the web server) with a crossover cable on eth0, using another IP: 10.42.43.1. Finally, my device (the web server) is connected on eth0 with this IP: 10.42.43.55. As you can see, I need a reverse proxy to be able to reach my device's web server. I installed pound (a proxy server) on my PC and configured it properly so that 192.168.1.128 forwards to 10.42.43.55. So just typing my ISP-provided IP 200.x.x.x reaches my device's web server. But there's a problem: I HAD TO STOP MY FIREWALL. I don't know exactly how I need to configure the firewall in SUSE YaST2, or at least iptables. Stopping it is not an option, not for security reasons but because there's a port forwarding rule that is needed to give Internet access to my device too. I'm using openSUSE 12.1.

    Read the article

  • Compiling Apache 2.2.11 on AIX 6.1, .so files not generated

    - by user176514
    I am compiling Apache 2 (2.2.11; yes, it's old, but it's a requirement) on AIX 6.1 with GCC 4.2.0. I am using the configure options:

        ./configure \
            --enable-module=rewrite \
            --enable-module=log_referer \
            --with-included-apr \
            --enable-proxy \
            --enable-ssl=shared \
            --with-ssl=/usr \
            --prefix=/PATH/apache \
            --enable-so \
            --enable-mods-shared="proxy proxy_http proxy_connect headers mod_proxy mod_ssl"

    The configure, followed by the make/make install processes, all run without error of any kind. However, when I look in the modules directory (/PATH/modules) there are no .so files created. Sadly, because of the nature of what I am doing, and the business I am in, I am locked into the software versions as described.

    Read the article

  • FtpWebResponse and StreamReader - specifying an offset

    - by AJ
    Hi, I am using the FtpWebRequest / FtpWebResponse objects in C# to download files from a server - so far, so good. I create a StreamReader object from the response stream and use a StreamWriter to create a local file. Now, the file I am reading happens to be in a very simple 'archive' format - there is a small TOC at the start of the file followed by the actual file data. I can therefore read the TOC and get a file offset and size of the data I want to download. My question is: Supposing the offset is 1024. I would use StreamReader.Read(buffer, 1024, length), but will .NET and the FTP protocol actually allow me to skip bytes 0-1023, or does the reader still go through the (relatively) slow process of downloading and discarding the bytes I don't need? This may make the difference between whether I want to use a single archive file, or a TOC file with the data files stored separately. As a bit of a secondary question, would my mileage vary using the Http classes instead of Ftp? Cheers, Adam
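
    For what it's worth, FTP itself can start a download at an offset via the REST command, so the skipping happens server side rather than by reading and discarding. Here is the idea sketched with Python's ftplib purely for illustration; the host, credentials and file names are made up. On the .NET side, FtpWebRequest exposes a ContentOffset property that requests the same thing, though whether a given server honours REST is worth verifying.

        from ftplib import FTP

        OFFSET = 1024   # size of the TOC to skip, as in the question

        ftp = FTP('ftp.example.com')       # placeholder host
        ftp.login('user', 'password')      # placeholder credentials
        with open('archive.part', 'wb') as out:
            # rest=OFFSET sends "REST 1024" before RETR, so the server starts
            # the transfer at byte 1024 and bytes 0-1023 never cross the wire.
            ftp.retrbinary('RETR archive.dat', out.write, rest=OFFSET)
        ftp.quit()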

    Read the article

  • just another apache to nginx rewrite question

    - by Brandon
    I have the following Apache rewrite directives:

        RewriteCond %{REQUEST_URI} ^/proxy(/|$) [NC]
        RewriteCond %{QUERY_STRING} (^|&)uri=(.*?)(&|$) [NC]
        RewriteRule .* /api/vs1.0/%2 [NC,L]

    I'm trying out nginx, so I'm trying to move the rewrites over. I came up with:

        rewrite ^/proxy(/|$) /api/vs1.0/$2 last;
        rewrite (^|&)uri=(.*?)(&|$) /api/vs1.0/$2 last;

    which is probably grossly incorrect. I'm just a mere web developer, so I was wondering if anyone could lend a hand here; I would be much obliged. I see that I am ignoring the query string specification, but I'm thinking that it shouldn't matter. I only have a vague idea of what the original rewrite is accomplishing, so I haven't much hope of coming up with something decent here, despite reading the relevant documentation for both servers.

    Read the article

  • How to use switch statement with Enumerations C#

    - by Maximus Decimus
    I want to use a switch statement in order to avoid many ifs, so I did this:

        public enum Protocol { Http, Ftp }

        string strProtocolType = GetProtocolTypeFromDB();

        switch (strProtocolType)
        {
            case Protocol.Http:
            {
                break;
            }
            case Protocol.Ftp:
            {
                break;
            }
        }

    but I have the problem of comparing an enum and a string. If I add Protocol.Http.ToString() there is another error, because case labels allow only CONSTANT evaluation. If I change it to

        switch (Enum.Parse(typeof(Protocol), strProtocolType))

    that is not possible either. So, is it possible to use a switch statement in my case or not?
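
    The usual fix in C# is to parse the string into the enum once (Enum.TryParse on .NET 4+, or a cast of Enum.Parse) and then switch on the typed value. A minimal sketch of the same string-to-enum dispatch, written in Python just to show the shape; the names mirror the question rather than any real API:

        from enum import Enum

        class Protocol(Enum):
            HTTP = 'Http'
            FTP = 'Ftp'

        def handle(protocol_name):
            proto = Protocol(protocol_name)   # raises ValueError for unknown strings
            if proto is Protocol.HTTP:
                return 'using http'
            if proto is Protocol.FTP:
                return 'using ftp'

        print(handle('Ftp'))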

    Read the article

  • outlook iptables configuration [update]

    - by mediaexpert
    I have a Debian mail server, but only the Outlook users are unable to download their email. I've seen a lot of posts about some kind of port forwarding configuration and I've tried some commands, but I haven't been able to solve this problem. Please help me.

    [LAST UPDATE] I found a lot of TIME_WAIT entries on IPv6 in netstat:

        tcp6  0  0  my.mailserver.it:imap2  200-62-245-188.ip2:17060  TIME_WAIT  -

    Below are some config files.

    pop3d (I think the problem is here):

        ##NAME: POP3AUTH:1
        #
        # To advertise the SASL capability, per RFC 2449, uncomment the POP3AUTH
        # variable:
        #
        # POP3AUTH="LOGIN"
        #
        # If you have configured the CRAM-MD5, CRAM-SHA1 or CRAM-SHA256, set POP3AUTH
        # to something like this:
        #
        # POP3AUTH="LOGIN CRAM-MD5 CRAM-SHA1"

        POP3AUTH=""

        ##NAME: POP3AUTH_ORIG:1
        #
        # For use by webadmin

        POP3AUTH_ORIG="PLAIN LOGIN CRAM-MD5 CRAM-SHA1 CRAM-SHA256"

        ##NAME: POP3AUTH_TLS:1
        #
        # To also advertise SASL PLAIN if SSL is enabled, uncomment the
        # POP3AUTH_TLS environment variable:
        #
        # POP3AUTH_TLS="LOGIN PLAIN"

        POP3_TLS_REQUIRED = 0
        POP3AUTH_TLS=""

        ##NAME: POP3AUTH_TLS_ORIG:0
        #
        # For use by webadmin

        POP3AUTH_TLS_ORIG="LOGIN PLAIN"

        ##NAME: POP3_PROXY:0
        #
        # Enable proxying. See README.proxy

        POP3_PROXY=0

        ##NAME: PROXY_HOSTNAME:0
        #
        # Override value from gethostname() when checking if a proxy connection is
        # required.

        PROXY_HOSTNAME=

        ##NAME: PORT:1
        #
        # Port to listen on for connections. The default is port 110.
        #
        # Multiple port numbers can be separated by commas. When multiple port
        # numbers are used it is possibly to select a specific IP address for a
        # given port as "ip.port". For example, "127.0.0.1.900,192.68.0.1.900"
        # accepts connections on port 900 on IP addresses 127.0.0.1 and 192.68.0.1
        # The ADDRESS setting is a default for ports that do not have a specified
        # IP address.

        PORT=110

        ##NAME: ADDRESS:0
        #
        # IP address to listen on. 0 means all IP addresses.

        ADDRESS=0

        ##NAME: TCPDOPTS:0
        #
        # Other couriertcpd(1) options. The following defaults should be fine.
        #
        TCPDOPTS="-nodnslookup -noidentlookup"

        ##NAME: LOGGEROPTS:0
        #
        # courierlogger(1) options.
        #
        LOGGEROPTS="-name=pop3d"

        ##NAME: DEFDOMAIN:0
        #
        # Optional default domain. If the username does not contain the
        # first character of DEFDOMAIN, then it is appended to the username.
        # If DEFDOMAIN and DOMAINSEP are both set, then DEFDOMAIN is appended
        # only if the username does not contain any character from DOMAINSEP.
        # You can set different default domains based on the the interface IP
        # address using the -access and -accesslocal options of couriertcpd(1).

        DEFDOMAIN="@interzone.it"

        ##NAME: POP3DSTART:0
        #
        # POP3DSTART is not referenced anywhere in the standard Courier programs
        # or scripts. Rather, this is a convenient flag to be read by your system
        # startup script in /etc/rc.d, like this:
        #
        # . /etc/courier/pop3d
        # case x$POP3DSTART in
        # x[yY]*)
        #        /usr/lib/courier/pop3d.rc start
        #        ;;
        # esac
        #
        # The default setting is going to be NO, until Courier is shipped by default
        # with enough platforms so that people get annoyed with having to flip it to
        # YES every time.

        POP3DSTART=YES

        ##NAME: MAILDIRPATH:0
        #
        # MAILDIRPATH - directory name of the maildir directory.
        #
        MAILDIRPATH=.maildir

    iptables:

        Chain INPUT (policy DROP 20 packets, 1016 bytes)
         pkts bytes target prot opt in   out source         destination
        60833   16M ACCEPT tcp  --  eth0 *   0.0.0.0/0      0.0.0.0/0    tcp dpt:143 state NEW,ESTABLISHED
        18970  971K ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    tcp spts:1024:65535 dpt:110 state NEW,ESTABLISHED

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target prot opt in   out source         destination
            0     0 ACCEPT tcp  --  *    *   192.168.0.0/24 0.0.0.0/0    tcp dpt:110
            0     0 ACCEPT all  --  *    *   0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
            0     0 ACCEPT tcp  --  *    *   192.168.1.0/24 0.0.0.0/0    tcp dpt:110
            0     0 ACCEPT all  --  *    *   0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
            0     0 ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:25
            0     0 ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:110

    pop3d.cnf:

        RANDFILE = /usr/lib...pop3d.rand

        [req]
        default_bits = 1024
        encrypt_key = yes
        distinguidhed_name = req_dn
        x509_extensions = cert_type
        prompt = no

        [req_dn]
        C=US
        ST=NY
        L= New York
        O=Courier Mail Server
        OU=Automatically-generated POP3 SSL key
        CN=localhost
        [email protected]

        [cert_type]
        nsCertType = server

    Read the article

  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working; I'm more confused about the certificates and where they need to go. Here are my two programs, which do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
        # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        # ??????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information in ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections marked with "# ??????". I don't get why the certificate and private key are needed on both the client and the server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternately removing either the pkey or the cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'), ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
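
    One detail worth noting, offered as a sketch rather than a definitive answer: the server context above passes VERIFY_FAIL_IF_NO_PEER_CERT, which tells OpenSSL to reject the handshake when the client presents no certificate, and that is why the client also needs its own key/cert pair and why removing them breaks the handshake. If only the server needs to prove its identity, the server can drop that flag and the client can drop its use_privatekey_file/use_certificate_file lines, keeping just the CA certificate to verify the server, roughly like this:

        # Hypothetical client for server-auth-only TLS: it carries no key or cert
        # of its own, only the CA certificate used to verify the server. This
        # only works if the server does NOT set SSL.VERIFY_FAIL_IF_NO_PEER_CERT.
        from OpenSSL import SSL
        import socket

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        ctx.load_verify_locations('./CAcert.pem')   # CA cert only

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        sock.send('Hello, world')
        sock.flush()
        sock.shutdown()
        sock.close()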

    Read the article

  • How can I persist a large Perl object for re-use between runs?

    - by Alnitak
    I've got a large XML file which takes over 40 seconds to parse with XML::Simple. I'd like to be able to cache the resulting parsed object so that on the next run I can just retrieve the parsed object and not reparse the whole file. I've looked at using Data::Dumper, but the documentation is a bit lacking on how to store and retrieve its output from disk files. Other classes I've looked at (e.g. Cache::Cache) appear designed for storage of many small objects, not a single large one. Can anyone recommend a module designed for this?

    EDIT: The XML file is ftp://ftp.rfc-editor.org/in-notes/rfc-index.xml. On my Mac Pro the benchmark figures for reading the entire file with XML::Simple vs Storable are:

               s/iter  test1   test2
        test1    47.8     --   -100%
        test2   0.148 32185%      --
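
    Storable's store/retrieve pair is the usual Perl answer for exactly this, and the benchmark above suggests it is what was being compared. For illustration only, here is the same parse-once-and-cache pattern sketched in Python; the file names and the extraction step are placeholders, not part of the original setup.

        import os
        import pickle
        import xml.etree.ElementTree as ET

        SRC = 'rfc-index.xml'        # the downloaded XML
        CACHE = 'rfc-index.pickle'   # cached parse result

        def load_index():
            # Reuse the cached object if it is newer than the XML source.
            if os.path.exists(CACHE) and os.path.getmtime(CACHE) >= os.path.getmtime(SRC):
                with open(CACHE, 'rb') as fh:
                    return pickle.load(fh)
            # Placeholder extraction: keep whatever structure you actually need.
            data = [elem.tag for elem in ET.parse(SRC).getroot()]
            with open(CACHE, 'wb') as fh:
                pickle.dump(data, fh, protocol=pickle.HIGHEST_PROTOCOL)
            return data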

    Read the article


  • Background upload in PHP

    - by Robijntje007
    I am working with a form that allows me to upload files via a local folder and FTP, so I want to move files over FTP (which already works). For performance reasons I chose to run this process in the background, so I use ncftpput (Linux). In the CLI the following command works perfectly:

        ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip

    (the -b parameter triggers the background process). But if I run it via PHP it does not work (without the -b parameter it does). PHP code:

        $cmd = "ncftpput -b -u name -p password -P 1980 127.0.0.1 /upload/ /home/Downloads/upload.zip";
        $return = exec($cmd);
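
    For comparison, here is a fire-and-forget launch of the same command sketched with Python's subprocess; the paths and credentials are just the placeholders from the question. The point is that the parent returns immediately and the child's output is redirected, which is also the usual trick when backgrounding from PHP's exec(): append output redirection and & to the command line so exec() does not wait for output.

        import subprocess

        cmd = ['ncftpput', '-b', '-u', 'name', '-p', 'password', '-P', '1980',
               '127.0.0.1', '/upload/', '/home/Downloads/upload.zip']

        # Popen returns without waiting; redirecting the streams keeps the child
        # from blocking on a pipe nobody reads and detaches it from the request.
        proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        print('launched ncftpput, pid %d' % proc.pid)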

    Read the article

  • How do I debug my emacs crash on Windows?

    - by vedang
    I use Emacs on Windows (at work) and on Linux (at home). On the Windows machine I'm using Emacs 23.1 (from here: ftp://ftp.gnu.org/gnu/emacs/windows/emacs-23.1-bin-i386.zip). It just crashed right now. Recently I've taken a healthy interest in debugging on Windows (using WinDbg), so I really want to try my hand at this ready-made crash :) Can someone tell me where (or whether at all) I can get the symbol files (.pdb) for Emacs for Windows? On Linux I compile my Emacs from source, so symbols aren't really a problem...

    Read the article

  • python filter can't output

    - by Jesse Siu
    I created a filter in Python for a log file that looks like:

        Sat Jun 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"
        Sat Jun 2 03:32:13 2012 [pid 12460] [ftp] OK LOGIN: Client "66.249.68.236", anon password "[email protected]"
        Sat Jun 2 03:32:14 2012 [pid 12462] [ftp] OK DOWNLOAD: Client "66.249.68.236", "/pub/10.5524/100001_101000/100022/readme.txt", 451 bytes, 1.39Kbyte/sec

    The script is:

        import time
        lines=[]
        f= open("/opt/CLiMB/Storage1/log/vsftp.log")
        line = f.readline()
        lines=[line for line in f]
        def OnlyRecent(line):
            if time.strptime(line.split("[")[0].strip(),"%a %b %d %H:%M:%S %Y") < time.time()-(60*60*24*2):
                return True
            return False
        print"\n".join(filter(OnlyRecent,lines))
        f.close()

    But when I run this script it keeps running and doesn't show anything until I stop it. Why can't it show the records from the last 2 days?
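
    A sketch of one way the filter could be written, assuming the goal is to keep only entries from the last two days. The original compares the struct_time returned by time.strptime against a float, which in Python 2 does not compare the timestamps at all (so the filter matches nothing), and the comparison also points the wrong way for "recent" entries. Converting with time.mktime and streaming the file line by line avoids both problems:

        import time

        LOG = "/opt/CLiMB/Storage1/log/vsftp.log"
        CUTOFF = time.time() - 2 * 24 * 60 * 60   # two days ago, as a Unix timestamp

        def recent(line):
            stamp = line.split("[")[0].strip()    # e.g. "Sat Jun 2 03:32:13 2012"
            try:
                when = time.mktime(time.strptime(stamp, "%a %b %d %H:%M:%S %Y"))
            except ValueError:
                return False                      # skip lines without a leading timestamp
            return when >= CUTOFF

        with open(LOG) as f:
            for line in f:
                if recent(line):
                    print(line.rstrip())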

    Read the article

  • iptables redirect single website traffic to port 8080

    - by Luke John Southard
    My goal is to be able to make a connection to one, and only one, website through a proxy. Everything else should be dropped. I have been able to do this successfully without a proxy with these rules:

        ./iptables -I INPUT 1 -i lo -j ACCEPT
        ./iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        ./iptables -A OUTPUT -p tcp -d www.website.com --dport 80 -j ACCEPT
        ./iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        ./iptables -P INPUT DROP
        ./iptables -P OUTPUT DROP

    How could I do the same thing, except redirecting the traffic to port 8080 somewhere? I've been trying to REDIRECT in the PREROUTING chain of the nat table, but I'm unsure whether that is the proper place to do it. Thanks for your help!

    Read the article

  • Translate from C# to C++

    - by Xaver
    Help me translate this C# code to C++:

        Type tp = Type.GetTypeFromProgID("Shell.Application");
        object o = Activator.CreateInstance(tp);

        Object[] arg = new Object[1];
        arg[0] = "C:\\!!";
        object b = o.GetType().InvokeMember(@"NameSpace",
            BindingFlags.Public | BindingFlags.InvokeMethod, null, o, arg);

        arg[0] = "ftp://anonymous:[email protected]/bussys/1394";
        b.GetType().InvokeMember(@"CopyHere",
            BindingFlags.Public | BindingFlags.InvokeMethod, null, b, arg);
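
    Not a C++ translation, but possibly useful for seeing the object model being driven: the same two late-bound calls (NameSpace, then CopyHere) via Python's pywin32, assuming pywin32 is installed on a Windows machine. In C++ the equivalent is creating the Shell.Application COM object and invoking the same members through IDispatch (or the IShellDispatch/Folder interfaces declared in shldisp.h).

        import win32com.client

        # Late-bound COM automation of Shell.Application (Windows + pywin32).
        shell = win32com.client.Dispatch("Shell.Application")

        # NameSpace() returns a Folder object for the local target directory.
        target = shell.NameSpace("C:\\!!")

        # CopyHere() starts the shell copy, here from the FTP URL in the question.
        target.CopyHere("ftp://anonymous:[email protected]/bussys/1394")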

    Read the article

  • How does the performance of pure Nginx compare to cpNginx?

    - by jb510
    There is now a cPanel plugin to fairly easily set up Nginx as a reverse proxy on a cPanel/Apache server. I've been simultaneously interested in setting up my first unmanaged VPS and my first Nginx server, and as a masochist figured why not combine the two. I'm wondering, however, if it's worth setting up a pure Nginx server versus trying out cpNginx on Apache. My goal is solely to host WordPress sites, and while what I've read raves about Nginx's exceptional ability at serving static content, at least as a reverse proxy, I am unclear whether there is a substantial benefit to running pure Nginx with eAccelerator over cpNginx on Apache for dynamic sites. Regardless, I'll be running W3TC on all sites to cache content, but I'm still interested in whether there are big CPU reductions running PHP scripts under pure Nginx over cpNginx.

    Read the article

  • Nginx config - serving index.html not working

    - by Bill
    I can't figure out how to redirect / to index.html. I've gone through the threads on serverfault and I think I've tried every suggestion, including:

        - rewrite statements within location /
        - index index.html at the server level, within location / and within static content
        - moving node.js proxy statements to location ~ /i instead of within location /

    Obviously something is wrong somewhere else in my configuration. Here is my nginx.conf:

        worker_processes 1;
        pid /home/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;
            error_log /home/logs/error.log;
            access_log /home/logs/access.log combined;
            include sites-enabled/*;
        }

    and my server config located in sites-enabled:

        server {
            root /home/www/public;
            listen 80;
            server_name localhost;

            # proxy request to node
            location / {
                index index.html index.htm;
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://127.0.0.1:3010;
                proxy_redirect off;
                break;
            }

            # static content
            location ~ \.(?:ico|jpe?g|jpeg|gif|css|png|js|swf|xml|woff|eot|svg|ttf|html)$ {
                access_log off;
                add_header Pragma public;
                add_header Cache-Control public;
                expires 30d;
            }

            gzip on;
            gzip_vary on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_min_length 1000;
            gzip_disable "msie6";
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        }

    Everything else is working just fine. Requests get proxied to node correctly and static content is served correctly. I just need to be able to forward requests made to / to /index.html.

    Read the article

  • OpenVPN and Squid Setup troubleshooting

    - by Adam
    I am trying to set up Squid to tunnel via OpenVPN so that I can just enter an IP and port in my browser settings and use it as a US proxy. My server is an OpenVZ VM. I'm running into some issues. I set up OpenVPN using http://safesrv.net/install-openvpn-on-centos/ and, as part of that guide, I also ran:

        iptables -t nat -A POSTROUTING -o venet0 -j SNAT --to-source
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -j SNAT --to-source

    I installed Squid using this guide: http://www.server-world.info/en/note?os=CentOS_6&p=squid and from that guide changed acl lan src 10.0.0.0/24 to acl lan src 10.8.0.0/24. Next, I went to my browser proxy settings, put 10.8.0.1 in the HTTP field and the port I had set up in the Squid config file, and tried to load a page. Nothing connects. Any help? What am I doing wrong?

    Read the article

  • Create Directory, 'cd' to it and download a file pipeline in Perl

    - by neversaint
    I have a file that looks like this:

        ftp://url1/files1.tar.gz dir1
        ftp://url2/files2.txt dir2
        .... many more...

    What I want to do are these steps:

        1. Create a directory based on column 2
        2. Unix 'cd' to that directory
        3. Download the file with 'wget' based on column 1

    But how come this approach of mine doesn't work?

        while(<>) {
            chomp;
            my ($url,$dir) = split(/\t/,$_);
            system("mkdir $dir");
            system("cd $dir");   # Fail here
            system("wget $url"); # here too
        }

    What's the right way to do it?
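
    The core problem is that each system() call runs in its own child process, so the cd never affects the wget that follows; in Perl the fix is chdir() in the parent process, or wget's -P/--directory-prefix option so no directory change is needed. Here is the same pipeline sketched in Python for illustration; the input file name is a placeholder.

        import os
        import subprocess

        # Input lines look like "<url>\t<dir>", as in the question.
        with open('downloads.txt') as fh:            # placeholder file name
            for line in fh:
                url, dirname = line.rstrip('\n').split('\t')
                os.makedirs(dirname, exist_ok=True)  # step 1: create the directory
                # Steps 2 and 3 combined: -P makes wget save into dirname, so no
                # cd is needed (a cd in a child process would not persist anyway).
                subprocess.check_call(['wget', '-P', dirname, url])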

    Read the article
