Search Results

Search found 10675 results on 427 pages for 'dynamic proxy'.

  • Nginx IP Whitelist

    - by Will
    Is it possible to create an IP whitelist for my nginx proxy server without adding allow or deny entries in the config file? Ideally nginx would check an external database to decide whether a user is allowed to access the website, or at minimum read a list of allowed IPs from a separate file on the same server, so I can update the list without restarting nginx every time. In the future I would like to tie this into my website: a user will log in, their IP will be linked to their account, and they will be able to update it themselves if it changes. So it would be easier if the allowed IPs live in some kind of external database. Any help is appreciated.
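
    One low-effort approach that avoids a full restart (a sketch, not a tested config; file paths and the backend name are assumptions): keep the allowed addresses in a separate file that nginx pulls in with the geo module, and apply changes with a graceful reload rather than a restart.

        # /etc/nginx/allowlist.conf -- one entry per line, e.g.
        #   203.0.113.5 1;
        #   198.51.100.0/24 1;

        # nginx.conf, http context
        geo $allowed {
            default 0;
            include /etc/nginx/allowlist.conf;
        }

        server {
            listen 80;
            location / {
                if ($allowed = 0) { return 403; }
                proxy_pass http://backend;
            }
        }

    After editing the allowlist, nginx -s reload re-reads the configuration without dropping existing connections. Driving the list from a real database would need something extra on top, for example the auth_request module pointing at a small web service that does the lookup per request.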

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site: if I fire 100 threads at it with JMeter, each making requests, I start getting errors in the load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on: netstat -ann | grep 82 | wc -l I get something like 22,000 sockets, and most of them are in TIME_WAIT. I'm thinking that the huge number of sockets in the TIME_WAIT state may be starving the box of resources.
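
    If TIME_WAIT exhaustion turns out to be the bottleneck, the usual mitigations are kernel-level rather than Squid-level. A sketch of commonly suggested sysctl settings (the values are illustrative, not tuned for this box):

        # /etc/sysctl.conf  (apply with: sysctl -p)
        # let the kernel reuse sockets in TIME_WAIT for new outbound connections
        net.ipv4.tcp_tw_reuse = 1
        # widen the ephemeral port range available for outgoing connections
        net.ipv4.ip_local_port_range = 1024 65000
        # shorten how long orphaned connections linger in FIN_WAIT_2
        net.ipv4.tcp_fin_timeout = 30

    Note that with 100 concurrent JMeter threads on the same machine, the load generator itself also burns through ephemeral ports, so 'no route to host' can come from the client side as easily as from Squid.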

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP, but I would like HTTPS connections to bypass the proxy entirely. When someone goes to https://ip1 (the nginx server), I want all the traffic forwarded to https://ip2 (the web server) without nginx intercepting the SSL connection, and the responses forwarded back through ip1 to the client. I don't need this for every SSL website, just one particular site. So: client to https://ip1, then to https://ip2, back to https://ip1, and back to the client PC. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how to set it up, or whether I also need to tell nginx to ignore SSL. Can someone help? This has me stuck.
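
    Since nginx (at least without a TCP/stream proxy module) isn't involved here, one way is to do the forwarding at the packet level on the nginx box with iptables. A rough sketch, with ip1 and ip2 as placeholders for the real addresses:

        # turn on forwarding on the nginx box
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # rewrite the destination of HTTPS traffic arriving at ip1 so it goes to ip2
        iptables -t nat -A PREROUTING -d ip1 -p tcp --dport 443 -j DNAT --to-destination ip2:443
        # masquerade so the web server's replies come back through this box
        iptables -t nat -A POSTROUTING -d ip2 -p tcp --dport 443 -j MASQUERADE

    With this in place nginx never sees port 443 at all, so nothing needs to change in its configuration; the trade-off is that because of the masquerading, the web server sees the proxy box rather than the real client as the source address.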

  • Why is this static routing not working?

    - by geeko
    Greetings, gurus. I'm trying to develop a DHCP enforcement extension, similar to Microsoft NAP. My trick for blocking machines that request a dynamic IP but don't meet a certain policy is to strip the default gateway from the IP lease (no default gateway) and set the lease subnet mask to 255.255.255.255. Now I need the blocked machines to still be able to reach a few specific IPs on the network, so I'm including some static routes in the lease. For example, I include a route to 10.10.10.11 via router 10.10.10.254 (the one the blocked machine that needs to reach 10.10.10.11 is connected to). Unfortunately, as soon as I set the default gateway to nothing, blocked machines cannot reach any of the added static routes. I also tried classless static routes. Any ideas? Does anyone know how MS NAP actually does it? Geeko
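
    For comparison, this is roughly how classless static routes (DHCP option 121, plus option 249, which older Windows clients look for instead) are expressed in ISC dhcpd; a sketch only, using the addresses from the example above:

        # dhcpd.conf sketch
        option rfc3442-classless-static-routes code 121 = array of unsigned integer 8;
        option ms-classless-static-routes code 249 = array of unsigned integer 8;

        subnet 10.10.10.0 netmask 255.255.255.0 {
            # host route to 10.10.10.11/32 via 10.10.10.254:
            # <prefix length> <destination bytes> <router bytes>
            option rfc3442-classless-static-routes 32, 10,10,10,11, 10,10,10,254;
            option ms-classless-static-routes     32, 10,10,10,11, 10,10,10,254;
        }

    One detail worth checking in this setup: with a /32 subnet mask the gateway 10.10.10.254 is no longer on-link, and some client stacks refuse to install any route whose gateway they cannot reach directly, so it may be necessary to push an interface (host) route to the router itself alongside the route that uses it.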

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy to load balance TCP connections from clients to my Erlang app server. The connections are persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to scale horizontally based on the number of TCP connections. What worries me is that I'll need as many HAProxy servers as app servers, since it's a 1:1 connection. Is there a way to "proxy" the TCP connection to the app server so that once HAProxy hands the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers or existing solutions out there I can read, so that I only have to worry about the 64K limit on my app servers and not on the load balancing servers themselves?

  • nginx is not using gzip to talk to backend servers

    - by Michael Gorsuch
    Our web servers are running IIS 7 and are configured to compress dynamic and static content. When I hit these servers directly, gzip compression works. I recently placed nginx in front of them, and gzip compression has stopped. I was able to work around this by explicitly enabling gzip compression on nginx itself, but that seems a little inefficient considering I have half a dozen backends and only one active nginx box. It appears that nginx is stripping out the Accept-Encoding header. Does anyone have any advice for how to 'correct' this behavior? A sample configuration:

        upstream backend {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            location / {
                proxy_pass http://backend;
            }
        }
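
    A hedged guess at the cause, with a sketch of an nginx-side fix: by default nginx speaks HTTP/1.0 to its upstreams, and IIS 7 skips dynamic compression for HTTP/1.0 requests (and, separately, for requests it classifies as proxied), so the behaviour can look like a stripped Accept-Encoding header even when the header is passed through. Something along these lines may help (proxy_http_version needs a reasonably recent nginx):

        location / {
            proxy_pass http://backend;
            # talk HTTP/1.1 to the IIS backends
            proxy_http_version 1.1;
            # note: defining any proxy_set_header here stops inheritance of the
            # server-level ones, so repeat them alongside Accept-Encoding
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Accept-Encoding $http_accept_encoding;
        }

    The equivalent knobs on the IIS side are noCompressionForHttp10 and noCompressionForProxies under system.webServer/httpCompression, in case changing nginx isn't an option.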

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
            content => "http_proxy: $http_proxy\n",
            notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the code does something like /etc/init.d/puppet stop && /etc/init.d/puppet start, and since puppet is not running at that point, it cannot start itself... which kind of makes sense. How do I make puppet restart itself when this file changes? Note that this file may not exist yet either.
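
    One workaround people use is to hand the restart off to something outside the agent's own process, for example by scheduling it with at so the agent finishes its run before being bounced. An untested sketch (the resource name and the one-minute delay are assumptions, and at must be installed):

        file { "/root/.gemrc":
            content => "http_proxy: $http_proxy\n",
            notify  => Exec['schedule-puppet-restart'],
        }

        exec { 'schedule-puppet-restart':
            # let the current agent run finish, then restart out-of-band
            command     => 'echo "/etc/init.d/puppet restart" | at now + 1 minute',
            provider    => shell,
            refreshonly => true,
        }

    Running the agent from cron (or under a supervisor that restarts it) sidesteps the problem entirely, since nothing then depends on the dying process to bring itself back up.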

  • Adding an operation in the middle of a complex sequence diagram in Visio 2003

    - by James
    I am using Microsoft Visio 2003 to define static classes with operations/methods, and sequence diagrams referring to these classes. One sequence diagram is almost done, but I realized that I missed an operation in the middle of it. When I try to move the rest of the sequence down by selecting it as a block, all the operations in the block lose their link to the static diagrams. (Methods that referred to the static classes as fun() become just fun, which means they no longer refer to the static diagrams, and any future changes would not be reflected in the dynamic sequence diagrams automatically.) The sequence diagrams have grown to A3 paper size and I have many such diagrams that need correcting. Moving the operations one by one manually would involve a lot of effort. Could someone kindly suggest a way to overcome this problem?

  • How to set up JBoss with S3_Ping on AWS?

    - by Jonik
    I'm looking into running clustered JBoss on Amazon Web Services (AWS). I'd like to try out S3_PING, i.e. making JBoss use an S3 bucket for dynamic node discovery etc., since no multicast is available. I found a piece of example config XML related to S3_PING, but I'm not sure where in the JBoss installation you're supposed to configure this. So, what JBoss config files would I need to tweak to get S3_PING working? Can anyone point me to a more complete example? JBoss 5.1.0 GA. (This is probably more a JGroups/JBoss question than anything else. I've already got the S3 bucket for this set up, so no problem there.)
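
    Not a complete example, but the general shape of it: S3_PING replaces the discovery protocol (PING or MPING) inside the JGroups stack definition the cluster uses. In JBoss 5.1 the stacks usually live in the jgroups-channelfactory.sar under the clustered profile's deploy directory (the path below is from memory, so treat it as an assumption), and the element would look something like this:

        <!-- server/all/deploy/cluster/jgroups-channelfactory.sar/META-INF/
             jgroups-channelfactory-stacks.xml : inside the stack's <config>,
             in place of <PING .../> or <MPING .../> -->
        <S3_PING location="my-jboss-cluster-bucket"
                 access_key="AKIA..."
                 secret_access_key="..."
                 timeout="3000"
                 num_initial_members="2"/>

    Depending on the JGroups version bundled with 5.1.0 GA you may also need a newer jgroups jar, since S3_PING only appeared in later 2.6.x/2.8 releases.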

  • Modify HTML Content with Squid

    - by user38400
    We have set up our network as per the tutorial here: https://help.ubuntu.com/community/Upside-Down-TernetHowTo. Basically, we have a Squid proxy that inverts images for pages that clients request. We're trying to modify the script so that we can edit the contents of the web page before it is sent to the client, but we are not having any luck. I'm wondering if there is something different about .html files that makes this not possible. What happens is that we do a wget on the requested URI, save it locally, modify it and then echo back the new URI. The page the user gets is the unmodified page, not the one we just changed.

  • How come I can't ping my home computer?

    - by bikefixxer
    I'm trying to set up a VPN into my home computer in order to access files from wherever I am. The home computer is set up with a No-IP dynamic DNS program so I can always connect, and I have also tried using the actual IP address. However, when I try to connect or even ping from anywhere outside of my house, I can't get through. I've tried putting that particular computer in the DMZ and turning off the computer's firewall and anti-virus, and I still don't get anything. I have Comcast as my home Internet provider, and I have tried from two different locations. Are there any other solutions I can try, or is Comcast the issue? I used to be able to do this when I ran a small web server at home for fun, but now nothing works. Thanks in advance for any suggestions!

  • new vhost - main host AWstats

    - by vn
    Hi, I just started at this new job and I have to configure a new host for stats with AWStats. I once used AWStats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split per site. I copied an awstats.conf file from one of the sites that already has (working) stats, changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf, and ran:

        perl awstats.pl -config=mysite -update
        perl awstats.pl -config=mysite -output -staticlinks awstats.mysite.html

    (yes, I changed it with my own values). PROBLEM IS: whenever I try to access the HTML file or the dynamic page (with the config option on awstats.pl, like my working site does), I get the stats of the MAIN site, generated from access.log itself (and not access_log-mysite), judging by what it says at the top of the page and the hostname in the left tab (stats for mysite.com)... What did I do wrong? There are no errors from what I can see... Thanks a lot for any help
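
    One thing worth double-checking, with a sketch of what the per-site config would look like (file names and paths are assumptions): AWStats resolves -config=mysite to a file literally named awstats.mysite.conf in its standard config directories (e.g. /etc/awstats), and if it can't find one it can quietly fall back to the main awstats.conf, which would produce exactly these symptoms.

        # /etc/awstats/awstats.mysite.conf  -- the name must match -config=mysite
        LogFile="/var/log/apache2/access_log-mysite"
        SiteDomain="mysite.com"
        HostAliases="www.mysite.com localhost 127.0.0.1"
        # optional: keep each site's generated data files in their own directory
        DirData="/var/lib/awstats/mysite"

    Also regenerate the static page after fixing the config, since awstats.mysite.html keeps whatever data it was originally built from.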

  • udhcpc doesn't assign ip address

    - by Diab
    I have a board running Linux 2.6.28 with one Ethernet interface (eth0), and I want DHCP to assign a dynamic IP to this interface. I have BusyBox with udhcpc in the file system and the kernel has "Packet socket" enabled, so I copied the scripts from busybox-1.14.1/examples/udhcp to /etc/udhcpc/ on my board (I created this directory). When I run:

        ifconfig eth0 up

    the interface comes up but without an IP address. Then, running

        udhcpc -i eth0 -s /etc/udhcpc/sample.script

    (note: sample.script contains "exec /etc/udhcpc/sample.$1") I get the following:

        udhcpc (v1.14.1) started
        Sending discover...
        Sending select for 192.168.10.198...
        Lease of 192.168.10.198 obtained, lease time 691200

    But when I check with ifconfig, I can see that it didn't assign the IP address to eth0. Does anyone have an idea why udhcpc didn't assign the IP? Thanks
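
    udhcpc itself never configures the interface; it only runs the script and passes the lease in environment variables, so if sample.bound isn't present and executable (or doesn't call ifconfig), the lease is obtained but nothing is applied. A minimal handler along these lines (a sketch using the standard udhcpc variables) can replace the sample scripts for testing:

        #!/bin/sh
        # /etc/udhcpc/simple.script -- run as: udhcpc -i eth0 -s /etc/udhcpc/simple.script
        case "$1" in
            deconfig)
                ifconfig "$interface" 0.0.0.0
                ;;
            bound|renew)
                ifconfig "$interface" "$ip" netmask "${subnet:-255.255.255.0}"
                if [ -n "$router" ]; then
                    route add default gw "$router" dev "$interface"
                fi
                ;;
        esac
        exit 0

    It is also worth checking that sample.bound and friends kept their execute bits when they were copied to the board, since "exec /etc/udhcpc/sample.$1" simply fails if they didn't.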

  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a web server on my laptop that is reachable from outside the local network. I'm sitting behind some proxy server that passes outgoing packets to the matching server, but when it comes to incoming connections, it doesn't route them to my PC. (It seems packets only get passed if a PC inside the student flat has already connected out to the sending server.) In the past I had a small virtual private server that forwarded incoming website requests over a reverse shell to my PC, which then returned the website content so visitors could see my website. Sadly I don't have that server anymore... Do you have any idea that might solve my problem? Greetings, Benedikt
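
    The classic way to recreate that setup without writing anything custom is an SSH reverse tunnel, though it still needs some publicly reachable machine (a cheap VPS, a friend's box, or a tunneling service) to act as the front end. A sketch, with host names as placeholders:

        # run on the laptop behind the NAT/proxy: expose local port 80
        # as port 8080 on the public host
        ssh -N -R 8080:localhost:80 user@public-host.example.com

        # visitors then browse to http://public-host.example.com:8080/
        # (the public host needs "GatewayPorts yes" in its sshd_config
        #  for the forwarded port to listen on all interfaces)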

  • How to make AD highly available for applications that use it as an LDAP service

    - by Beaming Mel-Bin
    Our situation: We currently have many web applications that use LDAP for authentication. For this, we point each web application at one of our AD domain controllers using the LDAPS port (636). This has caused us issues when we have to update a domain controller, because one or more web applications could depend on any given DC.

    What we want: We would like to point our web applications at a cluster "virtual" IP. The cluster will consist of at least two servers (so that each cluster server can be rotated out and updated). The cluster servers would then proxy LDAPS connections to the DCs and figure out which one is available.

    Questions, for anyone who has experience with this: What software did you use for the cluster? Any caveats? Or perhaps a completely different architecture to accomplish something similar?
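
    A common combination for exactly this is keepalived for the floating virtual IP plus HAProxy in TCP mode to balance and health-check the DCs; since LDAPS is just TLS over TCP, the proxy doesn't need to understand LDAP at all. A sketch of the HAProxy side (addresses are placeholders, keepalived moves the VIP between the two proxy nodes):

        frontend ldaps_in
            bind :636
            mode tcp
            default_backend domain_controllers

        backend domain_controllers
            mode tcp
            balance roundrobin
            # plain TCP health checks; a DC that is down or being patched
            # is taken out of rotation automatically
            server dc1 10.0.0.11:636 check
            server dc2 10.0.0.12:636 check

    One caveat: because the proxy passes the TLS stream through untouched, the certificates on the DCs need a subject or SAN that the web applications will accept when they connect to the virtual name rather than to the individual DC host names.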

  • mystery Internet traffic to port 445

    - by Ben Collver
    Recently, I noticed traffic from the office network to TCP port 445 on the Internet [a]. Below are the Linux firewall log entries for traffic to Facebook's network [b] and Google's network [c]. I would like to identify the source of this traffic. My first guess was that Facebook and Google might be using multiple TCP ports for SSL load balancing, but I could not confirm this from the web proxy logs. What else might it be?

        [a] http://support.microsoft.com/kb/204279

        [b] Sep 4 08:30:03 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.131 DST=69.171.237.34 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=14287 DF PROTO=TCP SPT=51711 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0

        [c] Aug 28 06:02:41 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.115 DST=173.194.33.47 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=4558 DF PROTO=TCP SPT=49294 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0

  • Using FastCGI for PHP on Mac OS X

    - by DanieL
    I have Apache 2.2 running on a Mac OS X (10.6) machine, and it is currently serving PHP pages fine using php5_module, but I would like to configure fastcgi_module to handle the PHP pages instead. I have tried the configuration found on www.fastcgi.com, but I get the following errors:

        [warn] FastCGI: (dynamic) server "/Path/to/script.php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds
        [warn] FastCGI: server "/usr/bin/php" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    I'm thinking this is because PHP has not been compiled with FastCGI support, but seeing as it came with Mac OS X, I'm not sure how to recompile it. Is this the problem? And if so, how do I recompile PHP with FastCGI?
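
    A possible explanation, sketched rather than verified on 10.6: the second warning suggests mod_fastcgi is being pointed at /usr/bin/php, the command-line binary, which doesn't speak the FastCGI protocol and so exits immediately; the FastCGI-capable binary is php-cgi. A common pattern is a small wrapper launched by mod_fastcgi (all paths below are assumptions, and if php-cgi isn't present in the system PHP you would indeed need to build or install PHP with the CGI/FastCGI SAPI):

        #!/bin/sh
        # /Library/WebServer/fcgi-bin/php-wrapper.fcgi  (hypothetical path)
        # launch the FastCGI-capable PHP binary, not /usr/bin/php
        PHP_FCGI_CHILDREN=4
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

        # httpd.conf fragment (mod_actions and mod_alias assumed enabled)
        LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so
        Alias /fcgi-bin/ /Library/WebServer/fcgi-bin/
        <Directory "/Library/WebServer/fcgi-bin/">
            Options +ExecCGI
            SetHandler fastcgi-script
        </Directory>
        AddHandler php-fastcgi .php
        Action php-fastcgi /fcgi-bin/php-wrapper.fcgi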

  • FTP error 425: failed to establish connection

    - by cKK
    Getting "ftp error 425 failed to establish connection" when trying to connect to ftp server. Tried 2 ftp clients on 3 machines on same network and none work. However FTP works from home / mobile broadband. No ip blocks on ftp sever. Other ftp servers(differrent ip/hosts) work okay. firewall setup correct, no ports blocked. Is it possible to use a proxy for ftp a i think it's something with the ISP but taking too long to fix?

  • Cloud services, Public IPs and SIP

    - by Guido N
    I'm trying to run custom SIP software (which uses JAIN SIP 1.2) on a cloud box. What I'd really like is a real public IP, i.e. one that is listed by the "ifconfig -a" command, because at the moment I don't want to write additional SIP code or add a SIP proxy in order to deal with private IP addresses and address translation. I gave Amazon EC2 a go, but as reported at http://stackoverflow.com/questions/10013549/sip-and-ec2-elastic-ips it's not fit for purpose (they do 1:1 NAT between the private IP of the box and its Elastic IP). Does anyone know of a cloud service that provides real static public IP addresses?

  • Web Server for SVN+PHP+Django+Rails

    - by NetStudent
    Foreword: I am not asking about the differences between nginx and Apache, nor do I want to start a "which one is better" discussion. I would like help choosing the most adequate solution for this particular situation. I need to set up one or more SVN repositories accessible via HTTP, plus some PHP, Django and Ruby websites. However, since I only have 512 MB of RAM at my disposal, I fear that Apache will be too heavy a choice... On the other hand, I have heard that nginx does not fully support SVN (WebDAV) and Django without reverse proxying to Apache. Is this still true? Should I go for Apache or nginx alone? Or should I set up both and have nginx handle static content while proxying to Apache for dynamic content?
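
    For reference, the nginx-in-front arrangement boils down to something like the following sketch (ports, paths, and keeping SVN/DAV plus all app traffic on the Apache side are assumptions). Apache can then run a small prefork/worker configuration since it never serves static files:

        server {
            listen 80;
            server_name example.com;

            # static assets served directly by nginx
            location /static/ {
                root /var/www;
                expires 7d;
            }

            # everything else (mod_dav_svn, PHP, Django, Rails) goes to Apache
            # listening only on localhost:8080
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    One SVN-specific caveat when proxying DAV: COPY and MOVE requests carry a Destination header built from the scheme and port the client used, so the Apache vhost behind the proxy may need that mapped (or its ServerName/port set to match) for those operations to succeed.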

  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor downloaded 100MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). Would like to avoid using a proxy (like Squid) if possible.
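
    As far as I know there is no per-source-address daily byte counter built into stock iptables: quota is a single global counter, and hashlimit measures packet or byte rate rather than cumulative volume. The nearest single-rule approximation is connbytes, which caps each connection rather than each IP per day; a sketch (the 100 MB threshold is just an example):

        # cut off any single HTTP response stream after roughly 100 MB
        # (per connection, NOT per client IP per day)
        iptables -A OUTPUT -p tcp --sport 80 \
            -m connbytes --connbytes 104857600: \
            --connbytes-dir reply --connbytes-mode bytes \
            -j DROP

    Doing it properly per IP per day generally means keeping state outside the ruleset, for example per-address accounting rules read by a cron job that then adds a DROP rule (or ipset entry) for offenders, which is effectively a lightweight version of what a proxy like Squid would track for you.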

  • Unable to connect to my computer from LAN (HTTP, SMB) in Ubuntu 10.04

    - by Abdul Majeed
    I installed Ubuntu 10.04 with Apache, PHP, MySQL and Samba. Everything works fine locally on my own machine, but when I try to access my computer from another computer on the LAN, it shows "unable to connect". When I ping my IP from the remote computer, the ping works fine. I can access the Internet and all the other systems (HTTP, SMB); the problem is that no one can access my computer remotely on my LAN. My IP is 192.168.85.105 and I want to access it (Apache, SMB) from 192.168.85.10. Is there some proxy or firewall setting in the way? I have tried the following commands:

        sudo iptables -F
        sudo iptables-restore   [logout required]

    and, if those do not work, disabling the firewall:

        sudo ufw --disable

    Please give me the solution.
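
    A few checks worth running before changing anything (a sketch; adjust the addresses to your network). Two common culprits are services bound only to 127.0.0.1 rather than the LAN interface, and an active ufw profile; note also that the firewall is disabled with "sudo ufw disable", not "--disable".

        # is anything actually filtering?
        sudo ufw status verbose
        sudo iptables -L -n -v

        # are Apache and Samba listening on the LAN address (or 0.0.0.0),
        # not just on 127.0.0.1?
        sudo netstat -tlnp | grep -E ':(80|139|445) '

        # if ufw is enabled, allow the LAN explicitly instead of disabling it
        sudo ufw allow from 192.168.85.0/24 to any port 80 proto tcp
        sudo ufw allow from 192.168.85.0/24 to any port 139,445 proto tcp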

  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script like this (test.fcgi):

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi, I see an "Internal Server Error" message after a short period of time. The error.log file shows this message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused, please help me! (Sorry for my poor English!)

  • openldap proxied authorization

    - by bemace
    I'm having some trouble doing updates with proxied authorization (searches seem to work fine). I'm using UnboundID's LDAP SDK to connect to OpenLDAP, and sending a ProxiedAuthorizationV2RequestControl for dn: uid=me,dc=People,dc=example,dc=com with the update. I've tested and verified that the target user has permission to perform the operation, but I get insufficient access rights when I try to do it via proxy auth. I've configured olcAuthzPolicy=both in cn=config and authzTo={0}ldap:///dc=people,dc=example,dc=com??subordinate?(objectClass=inetOrgPerson) on the original user. The authzTo seems to be working; when I change it I get not authorized to assume identity when I try the update (also for searches). Can anyone suggest what else I should look at or how I could get more detailed errors from OpenLDAP? Anything else I can test to narrow down the source of the problem?
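
    One way to get more detail out of OpenLDAP is to turn on ACL logging in cn=config, which makes slapd log which access rule allowed or denied the write and under which identity it was evaluated. A sketch (assumes you can modify cn=config as the rootdn via ldapi):

        # loglevel.ldif
        dn: cn=config
        changetype: modify
        replace: olcLogLevel
        olcLogLevel: acl
        olcLogLevel: stats

        # apply with:
        #   ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel.ldif

    With acl logging on, an update attempted with the proxied authorization control should show whether the olcAccess evaluation is happening as uid=me (as intended) or still as the binding service account, which is usually enough to tell whether the problem is in the ACLs or in the authorization step itself.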

  • Trying to test Domain Collapsing / Consolidation validity for SEO purposes

    - by Roy Rico
    At work, we're trying to determine the effectiveness of domain collapsing for SEO purposes. Our current structure is to have multiple web apps served from different servers, with public URLs accessed directly by users:

        www1.somecompany.com/webapp1
        www2.somecompany.com/webapp2
        www3.somecompany.com/webapp3

    I'm proposing to put an Apache proxy in front of these applications that will mask the different domains and route requests to the proper server:

        public URL                      routed/forwarded to     private URL
        www.somecompany.com/webapp1  <---------------------->  www1.somecompany.com/webapp1
        www.somecompany.com/webapp2  <---------------------->  www2.somecompany.com/webapp2
        www.somecompany.com/webapp3  <---------------------->  www3.somecompany.com/webapp3

    In terms of SEO / PageRank value, does this help?
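
    For reference, the Apache side of that consolidation is a handful of mod_proxy rules; a sketch (backend hostnames assumed reachable from the proxy box):

        <VirtualHost *:80>
            ServerName www.somecompany.com
            ProxyRequests Off

            ProxyPass        /webapp1 http://www1.somecompany.com/webapp1
            ProxyPassReverse /webapp1 http://www1.somecompany.com/webapp1
            ProxyPass        /webapp2 http://www2.somecompany.com/webapp2
            ProxyPassReverse /webapp2 http://www2.somecompany.com/webapp2
            ProxyPass        /webapp3 http://www3.somecompany.com/webapp3
            ProxyPassReverse /webapp3 http://www3.somecompany.com/webapp3
        </VirtualHost>

    For the SEO part of the question, the usual companion step is a permanent (301) redirect from the old www1/www2/www3 URLs to the corresponding www.somecompany.com paths once the proxy is live, so existing inbound links and indexed pages consolidate onto the single hostname; just point those redirects (or the proxy) at distinct ports or internal names so the proxy and the redirects can't chase each other in a loop.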
