Search Results

Search found 4460 results on 179 pages for 'uninitialized proxy'.


  • How to configure QoS on home router

    - by Joril
    I have a USR9105 router and I'd like to configure its QoS to prioritize web traffic (browser) over everything else (e.g. torrents). I'm confused by its interface though: "IP precedence" allows selecting a number from 0 to 7, while "IP type of service" can be one of "Normal Service", "Minimize cost", "Maximize reliability", "Maximize throughput" and "Minimize delay". How should I set it up? Is QoS the wrong tool for keeping torrents from slowing browsing to a crawl? Should I set up a proxy and traffic shaping instead?

    Read the article

  • How to enable Jetty to support cometd/reverse Ajax while letting it listen on port 80?

    - by janetsmith
    Hi, I would like to use the cometd / reverse Ajax capability of Jetty 7. I tried to configure it so it listens on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html , Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of the Apache mod_proxy do not support persistent connections. As far as I know, reverse Ajax in Jetty depends on continuations (I guess that means persistent connections). So how can I let Jetty support reverse Ajax while coexisting with the Apache server? Thanks.
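
    One hedged workaround that avoids Apache entirely is to leave Jetty on 8080 and redirect port 80 to it at the packet level, so the long-polling continuations never pass through mod_proxy. A minimal sketch, assuming a Linux host and that nothing else needs port 80:

        # send incoming port 80 traffic to Jetty listening on 8080
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

    With this, Jetty keeps handling the cometd connections itself, which sidesteps the persistent-connection limitation of the Apache proxy.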

    Read the article

  • Accessing the Internet via browser

    - by ucas
    I am on Windows 7 and using the Firefox browser. I am using WiFi, but since this morning I cannot access the Internet via the browsers (Firefox, Chrome, or IE). The laptop shows there is an Internet connection and Skype is online, but I can't reach the Internet. Then I launched the Tor application, which creates a secure channel and provides its own Firefox browser; over that browser I can access the Internet. So, what might be causing this malfunction? The error: "The connection has timed out. The server at mail.google.com is taking too long to respond. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web." Best regards

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking at different setups for a server cluster supporting SSL and I would like to benchmark my idea with you. Requirements: all servers in the cluster should be under the same full domain name (http and https); routing to subsystems is done on URI matching in HAProxy; all URIs must support SSL. Wish: centralized routing rules.

             ---<----http-----<--
             |                  |
    Inet -->HA--+---https--->NGInx_SSL_1..N
                |
                +---http---> Apache_1..M
                |
                +---http---> NodeJS

    Idea: configure HAProxy to route all SSL traffic (mode=tcp, algorithm=Source) to an NGInx cluster that turns the https traffic into http, then re-pass that http traffic from NGInx back to HAProxy, which performs normal load balancing based on its config. My question is simply: is this the best way to configure this, given the requirements above?
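
    For the 443 leg, a minimal HAProxy sketch matching that idea (mode tcp with source balancing toward the NGInx tier, plus a plain http frontend doing the URI-based routing) might look like the following; the names and addresses are placeholders:

        frontend fe_https
            mode tcp
            bind *:443
            default_backend be_nginx_ssl

        backend be_nginx_ssl
            mode tcp
            balance source              # keep a client pinned to one TLS terminator
            server nginx1 10.0.0.11:443 check
            server nginx2 10.0.0.12:443 check

        frontend fe_http
            mode http
            bind *:80
            acl is_app path_beg /app
            use_backend be_apache if is_app
            default_backend be_node

        backend be_apache
            mode http
            server apache1 10.0.0.21:80 check

        backend be_node
            mode http
            server node1 10.0.0.31:3000 check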

    Read the article

  • Routing to various node.js servers on same machine

    - by Dtang
    I'd like to set up multiple node.js servers on the same machine (but listening on different ports) for different projects (so I can pull any down to edit code without affecting the others). However, I want to be able to access these web apps from a browser without typing in the port number, and instead map different URLs to different ports: e.g. 45.23.12.01/app → 45.23.12.01:8001. I've considered using node-http-proxy for this, but it doesn't yet support SSL. My hunch is that nginx might be the most suitable. I've never set up nginx before - what configuration do I need? The examples of config files I've seen only deal with subdomains, which I don't have. Alternatively, is there a better (stable, hassle-free) way of hosting multiple apps under the same IP address?
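
    A minimal nginx sketch for this kind of path-based routing, with made-up ports and paths standing in for the real projects, could be:

        server {
            listen 80;
            server_name 45.23.12.01;

            location /app {
                proxy_pass http://127.0.0.1:8001;    # first node.js project
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            location /other {
                proxy_pass http://127.0.0.1:8002;    # second node.js project
            }
        }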

    Read the article

  • Weird behaviour with OpenVPN: cannot connect to a few websites

    - by Gaby Solis
    My OpenVPN server is Ubuntu 10.04.4 LTS and the OpenVPN version is 2.x. The client is on Windows 7; it can access most sites, but not YouTube, Facebook, Twitter, groups.google.com, etc. My server.conf is:

        local x.x.x.x
        port 1194
        proto udp
        dev tun
        ca /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key /etc/openvpn/keys/server.key
        dh /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status /etc/openvpn/keys/openvpn-status.log
        verb 4

    I can access YouTube etc. through an SSH tunnel + SOCKS proxy, and the Ubuntu server itself can reach all sites, so nothing is wrong with the Ubuntu server. Given the little information I can provide, I am not looking for a quick solution. How can I debug this?
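
    Since some large sites fail while the tunnel otherwise works, a frequent culprit is MTU/fragmentation over the UDP tunnel. A hedged first experiment (these are standard OpenVPN directives, but the values here are guesses) is to clamp the packet and MSS sizes on both ends, and watch the traffic while reproducing the problem:

        # add to server.conf (and mirror in the client config), then restart:
        fragment 1300
        mssfix 1300

        # on the server, watch for stalled or retransmitted packets while the
        # client tries to load one of the failing sites:
        tcpdump -ni tun0 'tcp port 443 or tcp port 80'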

    Read the article

  • Nginx IP Whitelist

    - by Will
    Is it possible to create an IP whitelist for my nginx proxy server without adding allow or deny entries in the config file? Could nginx check a separate database to see whether a user is allowed to access the website? Ideally nginx would consult an external database, or at minimum a list of allowed IPs kept on the same server, so I can easily update the list without restarting nginx every time. In the future I would like to tie this to my website: a user will log in, their IP will be linked to their account, and they will be able to update it when it changes so they keep their access. So keep in mind that it would be easier to do this with an external list of IPs in some kind of database. Any help is appreciated.
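
    One hedged way to do this without allow/deny lists is nginx's auth_request module (it has to be compiled in): every request triggers a subrequest to a small backend that looks the client IP up in a database and answers 200 or 403. A rough sketch, where the /ipcheck endpoint and backend addresses are placeholders:

        server {
            listen 80;

            location / {
                auth_request /ipcheck;                     # gate every request on the subrequest
                proxy_pass http://127.0.0.1:8080;          # the site being protected (placeholder)
            }

            location = /ipcheck {
                internal;
                proxy_pass http://127.0.0.1:9000/check;    # small app that queries the IP database
                proxy_pass_request_body off;
                proxy_set_header Content-Length "";
                proxy_set_header X-Original-IP $remote_addr;
            }
        }

    If auth_request isn't available, a simpler fallback is keeping the allow rules in a separate included file and running nginx -s reload after edits, which reloads the config gracefully rather than restarting the server.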

    Read the article

  • Windows redirect traffic to different DNS name not fixed IP address (hosts file equivalent)

    - by Arik Raffael Funke
    Using the Windows hosts file, one can redirect traffic for a domain to a specific IP address, e.g. domainA.com -- 127.0.0.1. I am looking for a SIMPLE way to do the same, but toward a target domain name rather than a target IP address (since that IP is dynamic), i.e. domainA.com -- domainB.com. Addition: after getting some initial answers I think I need to make my question more concrete. Situation: I have an application which looks up the IP of the target domain via DNS and then connects via HTTP to that IP address. I do not have control over any proxy settings. Option 1: basically I am looking for a way to intercept DNS requests for domainA.com, launch a DNS request for domainB.com, and serve the IP of domainB.com in response to the request for domainA.com, without running an entire DNS server. Option 2: if a DNS server is the only way, I would alternatively be happy with a solution for defining a non-standard DNS server for a single application. Any ideas for wrapper applications, etc.?
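
    A crude approximation of Option 1, without running any DNS server, is a scheduled script that resolves domainB.com and rewrites the hosts entry for domainA.com whenever the address changes. A hedged PowerShell sketch (run elevated, e.g. from Task Scheduler every few minutes; the domain names are placeholders):

        $hosts  = "$env:SystemRoot\System32\drivers\etc\hosts"
        # resolve the current address of the real target
        $ip     = [System.Net.Dns]::GetHostAddresses("domainB.com")[0].IPAddressToString
        # drop any old entry for domainA.com and append the fresh one
        $lines  = Get-Content $hosts | Where-Object { $_ -notmatch "domainA\.com" }
        $lines += "$ip`tdomainA.com"
        Set-Content -Path $hosts -Value $lines

    Flushing the resolver cache afterwards (ipconfig /flushdns) makes the change take effect immediately.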

    Read the article

  • using Linux vncviewer

    - by Darkoni
    When I connect to the VNC server using Wine on Linux ($ wine vncviewer.exe) I have to enter VNC Server: 1.1.1.21 and Proxy/Repeater: 195.29.18.33:1234, and then, once connected, the title bar shows: 1.1.1.21:5900 (195.29.18.33:1234). My question is: how do I connect using vncviewer? What should I put in VNC_VIA_CMD?

        $ export xlocalPort=1234
        $ export xremoteHost=1.1.1.21
        $ export xremotePort=5900
        $ export xgateway=195.29.18.33
        $ export VNC_VIA_CMD="/usr/bin/ssh -f -L $xlocalPort:$xremoteHost:$xremotePort $xgateway sleep 20"
        $ vncviewer $xremoteHost -via $xgateway

    And I get the error: unable connect to socket: Connection refused (111). I was trying to help myself with the page http://www.tightvnc.com/vncviewer.1.php Please help, because I need to use the "native" Linux vncviewer installed by $ yum install tigervnc (tigervnc.i686 0:1.0.90-0.13.20100420svn4030.fc13).

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf: error_log /var/log/nginx-error.log notice; ...... server { access_log off; location / { .... } } But in /var/log/messages I still see:

        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"

    How can I prevent nginx from sending messages to my syslog?
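
    The error_log directive can be overridden per context, so one hedged option is to point the noisy upstream errors at a dedicated file (or raise the severity threshold) inside the affected server block instead of letting them reach the system log; the path and level here are only examples:

        server {
            access_log off;
            # only log critical problems for this vhost, to a file of its own
            error_log /var/log/nginx-upstream-error.log crit;

            location / {
                ....
            }
        }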

    Read the article

  • Mac won't boot into safe mode

    - by Stephen
    Mac boots fine normally, except when in safe mode. Holding down shift when booting gets me to the progress bar on the grey screen. Progress bar gets about half way before mac reboots. I modified nvram boot-args to get a better look: sudo nvram boot-args="-x -v" It definitely gets through fsck, skips loading kernel extensions (since it's in safe mode), does something with the network interfaces, then this is the last thing it wips through... Aug 22 11:56:21 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'com.apple.ServiceManagement.daemons.modify' by client '/usr/libexec/UserEventAgent' [10] for authorization created by '/usr/libexec/UserEventAgent' [10] (100012,0) Aug 22 11:56:22 Crockpot fseventsd[37]: event logs in /.fseventsd out of sync with volume. destroying old logs. (1 174 330) Aug 22 11:56:22 Crockpot fseventsd[37]: log dir: /.fseventsd getting new uuid: 5C379650-26FA-428F-B81F-4FE4349D50B3 Aug 22 11:56:23 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) starting OSXVers 12 Aug 22 11:56:23 Crockpot systemkeychain[35]: done file: /var/run/systemkeychaincheck.done Aug 22 11:56:23 Crockpot configd[17]: network changed: DNS* Aug 22 11:56:24 --- last message repeated 1 time --- Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2D_IPC: Loaded Aug 22 11:56:24 Crockpot mDNSResponder[39]: D2DInitialize succeeded Aug 22 11:56:24 Crockpot mDNSResponder[39]: Adding registration domain 273025955.members.btmm.icloud.com. Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe MAC AUTH succeeded Aug 22 11:56:24 Crockpot kernel[0]: MacAuthEvent en1 Auth result for: 00:23:69:35:dc:fe Unsolicited Auth Aug 22 11:56:24 Crockpot kernel[0]: wlEvent: en1 en1 Link UP virtIf = 0 Aug 22 11:56:24 Crockpot kernel[0]: AirPort: Link Up on en1 Aug 22 11:56:24 Crockpot kernel[0]: en1: BSSID changed to 00:23:69:35:dc:fe Aug 22 11:56:24 Crockpot kernel[0]: en1::IO80211Interface::postMessage bssid changed Aug 22 11:56:24 Crockpot kernel[0]: AirPort: RSN handshake complete on en1 Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences failed to read preferences data. Errno was 21 Aug 22 11:56:25 --- last message repeated 1 time --- Aug 22 11:56:25 Crockpot airportd[30]: _doAutoJoin: Already associated to “burnum”. Bailing on auto-join. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Can't load IOBluetoothSerialManager.kext - ineligible during safe boot. Aug 22 11:56:25 Crockpot com.apple.kextd[11]: Load com.apple.iokit.IOBluetoothSerialManager failed; removing personalities from kernel. Aug 22 11:56:25 Crockpot cfprefsd[19]: CFPreferences: error renaming file blued.plist.HXuEmQn to blued.plist. Aug 22 11:56:27 Crockpot awacsd[52]: Starting awacsd connectivity-77 (Jun 20 2012 15:40:49) Aug 22 11:56:27 Crockpot com.apple.SecurityServer[15]: Succeeded authorizing right 'system.services.systemconfiguration.network' by client '/System/Library/Frameworks/SystemConfiguration.framework/Versions/A/Resources/SCHelper' [54] for authorization created by '/usr/sbin/awacsd' [52] (100003,0) Aug 22 11:56:27 --- last message repeated 1 time --- Aug 22 11:56:27 Crockpot awacsd[52]: Configuring lazy AWACS client: 273025955.p04.members.btmm.icloud.com. 
Aug 22 11:56:28 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:32 --- last message repeated 1 time --- Aug 22 11:56:32 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:38 --- last message repeated 1 time --- Aug 22 11:56:38 Crockpot apsd[55]: CGSLookupServerRootPort: Failed to look up the port for "com.apple.windowserver.active" (1102) Aug 22 11:56:47 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:56:49 Crockpot configd[17]: subnet_route: write routing socket failed, Network is unreachable Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1+:169.254.80.161) DNS* Proxy+ SMB Aug 22 11:56:51 Crockpot UserEventAgent[10]: Captive: en1: Not probing 'burnum' (protected network) Aug 22 11:56:51 Crockpot configd[17]: network changed: v4(en1:169.254.80.161) DNS Proxy SMB Aug 22 11:57:07 Crockpot awacsd[52]: KV HTTP 0 Aug 22 11:57:23 Crockpot fseventsd[37]: Logging disabled completely for device:1: /Volumes/Recovery HD Aug 22 11:57:25 Crockpot kernel[0]: Kext loading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kext unloading now disabled. Aug 22 11:57:25 Crockpot mDNSResponder[39]: mDNSResponder mDNSResponder-379.27 (Jun 20 2012 15:40:55) stopping Aug 22 11:57:25 Crockpot com.apple.SecurityServer[15]: Killing auth hosts Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot configd[17]: dnssd_clientstub read_all(26) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: [0x7fb025119ff0] SCNetworkReachability _llq_callback w/error=-65563 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2D_IPC: Terminated Aug 22 11:57:25 Crockpot mDNSResponder[39]: D2DTerminate succeeded Aug 22 11:57:25 Crockpot awacsd[52]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 2 times --- Aug 22 11:57:25 Crockpot apsd[55]: dnssd_clientstub read_all(4) failed 0/28 0 Aug 22 11:57:25 Crockpot configd[17]: SCNC: stop, triggered by configd, type PPPSerial, reason Terminated All Aug 22 11:57:25 Crockpot configd[17]: _d2dCallback: D2D connection to mDNSResponder lost Aug 22 11:57:25 Crockpot UserEventAgent[10]: dnssd_clientstub DNSServiceProcessResult called with DNSServiceRef with no ProcessReply function Aug 22 11:57:25 --- last message repeated 4 times --- Aug 22 11:57:25 Crockpot kernel[0]: Kext autounloading now disabled. Aug 22 11:57:25 Crockpot kernel[0]: Kernel requests now disabled. ... before rebooting in the middle of the safe mode startup sequence. Aug 22 12:01:10 localhost bootlog[0]: BOOT_TIME 1345662070 0 Aug 22 12:01:32 localhost kernel[0]: PMAP: PCID enabled Aug 22 12:01:32 localhost kernel[0]: Darwin Kernel Version 12.0.0: Sun Jun 24 23:00:16 PDT 2012; root:xnu-2050.7.9~1/RELEASE_X86_64 Any ideas what's causing the safe mode boot to fail? System Info MacBook Pro 8,2 2.2 Ghz Core i7 4 GM Ram Mountain Lion 10.8 500GB TOSHIBA MK5065GSXF Serial-ATA rotational disk

    Read the article

  • Nginx Forward SSL for single site

    - by Will.brown
    I have an nginx server set up and it works fine for HTTP; however, I would like HTTPS connections to bypass the proxy. When someone goes to my IP https://ip1 (the nginx server), it should bypass nginx and forward all traffic to https://ip2 (the web server). I do not need nginx to do this for every SSL website, just one particular site. So: client to https://ip1, to https://ip2, back through https://ip1, back to the client PC. I just want nginx not to intercept the connection, but to forward it on and pass the return traffic back to the client. I'm guessing I do this with NAT masquerading, but I'm not exactly sure how, and whether I will need to tell nginx to ignore SSL as well. Can someone help me please? This has me stuck.
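
    Since the goal is for nginx to never touch the TLS traffic, one hedged approach (matching the NAT guess above) is to DNAT port 443 on ip1 straight to ip2 at the kernel level, so nginx only ever sees port 80. A sketch for a Linux box, with ip1/ip2 as placeholders:

        # forward all HTTPS hitting ip1 to the web server at ip2
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING  -d ip1 -p tcp --dport 443 -j DNAT --to-destination ip2:443
        # masquerade so the replies flow back through ip1 to the client
        iptables -t nat -A POSTROUTING -d ip2 -p tcp --dport 443 -j MASQUERADE

    In this setup nginx needs no SSL configuration at all, because the packets are redirected before they ever reach it.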

    Read the article

  • Strange DNS problem [seems to be IPv6 issue]

    - by Homer J. Simpson
    Hi, I'm experiencing strange problems with my Kubuntu 9.10 when doing DNS requests from various applications. The requests are extremely slow, so loading pages in Firefox or Konqueror, doing package installations in Kpackagemanager and other apps is really painful, while Opera, for example, doesn't have any problems, and plain pings to DNS names resolve quickly as well. I checked the proxy settings of the affected applications as well as of the general system and there are none, so it doesn't seem as though something is sitting in between. Does anybody have an idea what to check for possible problem sources, or how to solve this? I'm behind a DSL home router which does the DHCP (and works well with my other computer). Any kind of advice would be really helpful. Edit: it seems to be some kind of IPv6 problem, as I could get it to work by disabling IPv6 explicitly in Firefox. Is there a general solution to this?
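
    If disabling IPv6 in Firefox helps, the system-wide equivalent is usually to disable IPv6 at the kernel level so every resolver call skips the AAAA lookups that time out. A hedged sketch for that era of Ubuntu (the sysctl names are standard, but it's worth verifying they exist on the 9.10 kernel):

        # immediate, non-persistent test
        sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

        # persist across reboots
        echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
        echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf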

    Read the article

  • squid running out of sockets

    - by drscroogemcduck
    I have a setup where Squid sits in front of a Java server and acts as a reverse proxy. Recently I load tested the site, and if I fire 100 threads at it, each making requests using JMeter, I start getting errors in my load test tool like 'no route to host', even though the load test tool and the server are on the same machine. If I run the following command, where port 82 is the port my Squid server is running on: netstat -ann | grep 82 | wc -l I get 22000 or so, and most of the sockets are in TIME_WAIT. I'm thinking that maybe the huge number of sockets in the TIME_WAIT state is starving the box of resources.
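
    If TIME_WAIT exhaustion really is the bottleneck, a couple of hedged kernel-side experiments (typical values, not tuned for this particular box) are to widen the ephemeral port range and let the stack reuse TIME_WAIT sockets for new outbound connections:

        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        sysctl -w net.ipv4.tcp_fin_timeout=30
        sysctl -w net.ipv4.tcp_tw_reuse=1      # safe for outbound connections; avoid tcp_tw_recycle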

    Read the article

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy in order to load balance tcp connections from clients to my Erlang app server. The connection is persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to horizontally scale based on the number of TCP connections. What's worrying me though is I'll need an equal number of HAProxy servers as app servers since it's a 1:1 connection. Is there currently a way to "proxy" the tcp connection to the app server so that once HAProxy sends the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers, existing solutions out there I can read so that I only have to worry about the 64K limit on my app servers, and not on the load balancing servers themselves?
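
    HAProxy cannot hand a persistent TCP connection off and then drop out of the path, but the ~64K figure is really a limit on source ports per (source IP, destination IP, destination port) tuple, so the proxy tier can usually be stretched without a 1:1 server count by giving HAProxy several source addresses toward each backend. A hedged sketch, with placeholder addresses (the source keyword on server lines is standard HAProxy):

        backend erlang_app
            mode tcp
            balance leastconn
            # each extra source address adds roughly another 64K usable ports toward that backend
            server app1_a 10.0.0.10:5222 source 10.0.1.1 check
            server app1_b 10.0.0.10:5222 source 10.0.1.2 check
            server app2_a 10.0.0.11:5222 source 10.0.1.1 check
            server app2_b 10.0.0.11:5222 source 10.0.1.2 check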

    Read the article

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this: file { "/root/.gemrc": content => "http_proxy: $http_proxy\n", notify => Service['puppet'], } The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the restart does something like /etc/init.d/puppet stop && /etc/init.d/puppet start; since puppet is then no longer running, it cannot start itself... it kind of makes sense. How can I make puppet restart itself when this file changes? Note that the file may not exist yet, either.
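
    One hedged workaround is to notify an exec resource instead of the service, and have that exec schedule the restart in a detached shell (or via at/cron) so the agent's own TERM doesn't kill the restart; the resource name here is made up:

        exec { 'delayed-puppet-restart':
            command     => '/bin/sh -c "(sleep 10 && /etc/init.d/puppet restart) > /dev/null 2>&1 &"',
            refreshonly => true,   # only runs when notified
        }

        file { '/root/.gemrc':
            content => "http_proxy: $http_proxy\n",
            notify  => Exec['delayed-puppet-restart'],
        }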

    Read the article

  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a webserver on my laptop that is reachable from outside the network. I'm sitting behind a proxy server that passes outgoing packets to the matching server, but when it comes to incoming connections it won't route them to my PC. (It seems packets only get passed if some PC within the student flat is already connected to the sending server.) In the past I had a small virtual private server that forwarded incoming website requests over a reverse shell to my PC, which then returned the website content, so visitors could see my website. Sadly I don't have that server anymore... Do you have any idea that might solve my problem? Greetings, Benedikt
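
    The old setup can be recreated with any cheap box that has a public IP (a small VPS, or even a friend's machine): an SSH reverse tunnel publishes the laptop's webserver on that box. A hedged sketch, with hostnames and ports as placeholders:

        # on the laptop, behind the student network's proxy/NAT:
        ssh -N -R 0.0.0.0:8080:localhost:80 user@public-box.example.com

        # visitors then browse to http://public-box.example.com:8080/
        # (the remote sshd needs "GatewayPorts yes" or "GatewayPorts clientspecified"
        #  in its sshd_config for the tunnel to listen on all interfaces)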

    Read the article

  • Modify HTML Content with Squid

    - by user38400
    We have setup our network as per the tutorial here: https://help.ubuntu.com/community/Upside-Down-TernetHowTo. Basically, we have a squid proxy that inverts images for pages that clients request. We're trying to modify the script so that we can edit the contents of the webpage before the webpage is sent to the client. We are not having any luck. I'm wondering if there is something different about .html files that makes this not possible. What is happening is that we do a wget on the URI that is requested, save it locally, modify it and then echo back the new URI. The page that the user gets is the unmodified page and not the one that we just changed.

    Read the article

  • mystery Internet traffic to port 445

    - by Ben Collver
    Recently, I noticed traffic from the office network to TCP port 445 on the Internet [a]. Below are the Linux firewall log entries to Facebook's network [b] and Google's network [c]. I would like to identify the source of this traffic. My first guess is that Facebook and Google might be using multiple TCP ports for SSL load balancing. However, I could not confirm this based on the web proxy logs. What else might it be? [a] http://support.microsoft.com/kb/204279 [b] Sep 4 08:30:03 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.131 DST=69.171.237.34 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=14287 DF PROTO=TCP SPT=51711 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0 [c] Aug 28 06:02:41 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.115 DST=173.194.33.47 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=4558 DF PROTO=TCP SPT=49294 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0

    Read the article

  • How to make AD highly available for applications that use it as an LDAP service

    - by Beaming Mel-Bin
    Our situation: we currently have many web applications that use LDAP for authentication. For this, we point each web application to one of our AD domain controllers using the LDAPS port (636). This has caused us issues when we have to update a domain controller, because one or more web applications could depend on any given DC. What we want: we would like to point our web applications at a cluster "virtual" IP. This cluster will consist of at least two servers (so that each cluster server can be rotated out and updated). The cluster servers would then proxy LDAPS connections to the DCs and figure out which one is available. Questions, for anyone who has experience with this: what software did you use for the cluster? Any caveats? Or perhaps a completely different architecture to accomplish something similar?
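
    One common pattern is HAProxy in TCP mode (so the LDAPS sessions pass through untouched) on two boxes sharing a virtual IP via keepalived/VRRP. A hedged configuration sketch with placeholder addresses:

        frontend ldaps_in
            mode tcp
            bind 10.0.0.100:636        # the shared virtual IP
            default_backend domain_controllers

        backend domain_controllers
            mode tcp
            balance roundrobin
            server dc1 10.0.0.11:636 check
            server dc2 10.0.0.12:636 check

    The plain check here is just a TCP connect test, so a DC that is down gets skipped, while certificate handling stays end-to-end between the web application and the DC because nothing is terminated on the proxy.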

    Read the article

  • Cloud services, Public IPs and SIP

    - by Guido N
    I'm trying to run custom SIP software (which uses JAIN SIP 1.2) on a cloud box. What I'd really like is a real public IP, i.e. one that is listed by the "ifconfig -a" command, because at the moment I don't want to write additional SIP code or add a SIP proxy to manage private IP addresses and address translation. I gave Amazon EC2 a go, but as reported here http://stackoverflow.com/questions/10013549/sip-and-ec2-elastic-ips it's not fit for purpose (they do a 1:1 NAT translation between the private IP of the box and its Elastic IP). Does anyone know of a cloud service that provides real static public IP addresses?

    Read the article

  • iptables: limiting bytes downloaded per IP per day?

    - by Miles
    On a public-facing web server, I'd like to limit the total bytes downloaded per IP address per day. For example, after a visitor downloaded 100MB, any additional requests would be dropped or rejected for the next 24 hours. Is it possible to accomplish this using iptables alone? The connbytes, connlimit, hashlimit, quota, and recent options all look promising, but the man page plays its cards close to the vest (e.g., "quota - Implements network quotas by decrementing a byte counter with each packet. --quota bytes The quota in bytes."). Would like to avoid using a proxy (like Squid) if possible.
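
    iptables on its own has no per-source daily byte quota (quota is a single global counter, and hashlimit/connbytes measure rates or per-connection bytes), so the usual compromise is per-IP counting rules plus a small cron job that adds REJECT rules past the threshold and zeroes everything once a day. A hedged sketch, assuming per-client counting rules such as iptables -A ACCT -d <client-ip> -j RETURN are added as clients appear (e.g. from the access log), behind a jump like iptables -I OUTPUT -p tcp --sport 80 -j ACCT:

        #!/bin/sh
        # block clients that have pulled more than ~100MB today; reset daily elsewhere
        LIMIT=$((100 * 1024 * 1024))

        # read per-client byte counters out of the ACCT chain
        iptables -nvxL ACCT | awk 'NR > 2 { print $2, $9 }' | while read bytes client; do
            if [ "$bytes" -gt "$LIMIT" ]; then
                iptables -I INPUT -s "$client" -p tcp --dport 80 -j REJECT
            fi
        done

        # the daily reset (separate cron entry) would be roughly:
        #   iptables -Z ACCT   plus removal of the REJECT rules added above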

    Read the article

  • openldap proxied authorization

    - by bemace
    I'm having some trouble doing updates with proxied authorization (searches seem to work fine). I'm using UnboundID's LDAP SDK to connect to OpenLDAP, and sending a ProxiedAuthorizationV2RequestControl for dn: uid=me,dc=People,dc=example,dc=com with the update. I've tested and verified that the target user has permission to perform the operation, but I get insufficient access rights when I try to do it via proxy auth. I've configured olcAuthzPolicy=both in cn=config and authzTo={0}ldap:///dc=people,dc=example,dc=com??subordinate?(objectClass=inetOrgPerson) on the original user. The authzTo seems to be working; when I change it I get not authorized to assume identity when I try the update (also for searches). Can anyone suggest what else I should look at or how I could get more detailed errors from OpenLDAP? Anything else I can test to narrow down the source of the problem?
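
    For more detail from OpenLDAP itself, turning on ACL tracing in cn=config usually shows exactly which olcAccess clause denies the write for the assumed identity. A hedged sketch of the change, applied with ldapmodify as the config admin:

        # loglevel-acl.ldif
        dn: cn=config
        changetype: modify
        replace: olcLogLevel
        olcLogLevel: stats acl

        # apply it:
        #   ldapmodify -Y EXTERNAL -H ldapi:/// -f loglevel-acl.ldif
        # then watch the slapd log while repeating the proxied update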

    Read the article

  • FTP error 425 failed to establish connection

    - by cKK
    Getting "ftp error 425 failed to establish connection" when trying to connect to ftp server. Tried 2 ftp clients on 3 machines on same network and none work. However FTP works from home / mobile broadband. No ip blocks on ftp sever. Other ftp servers(differrent ip/hosts) work okay. firewall setup correct, no ports blocked. Is it possible to use a proxy for ftp a i think it's something with the ISP but taking too long to fix?

    Read the article

  • Trying to test Domain Collapsing / Consolidation validity for SEO purposes

    - by Roy Rico
    At work, we're trying to determine the effectiveness of domain collapsing for SEO purposes. Our current structure is to have multiple web apps served from different servers:

        PUBLIC URLS - directly accessed by users
        www1.somecompany.com/webapp1
        www2.somecompany.com/webapp2
        www3.somecompany.com/webapp3

    I'm proposing to put an Apache proxy in front of these applications that masks the different domains and routes each request to the proper server:

        PUBLIC URL --------routed/forwarded to----- PRIVATE URL
        www.somecompany.com/webapp1 <-----> www1.somecompany.com/webapp1
        www.somecompany.com/webapp2 <-----> www2.somecompany.com/webapp2
        www.somecompany.com/webapp3 <-----> www3.somecompany.com/webapp3

    In terms of SEO/page rank value, does this help?
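
    The proxy layer itself is straightforward with mod_proxy; a hedged Apache sketch of the fronting host (assuming mod_proxy and mod_proxy_http are loaded) would be:

        <VirtualHost *:80>
            ServerName www.somecompany.com

            ProxyPreserveHost Off
            ProxyPass        /webapp1 http://www1.somecompany.com/webapp1
            ProxyPassReverse /webapp1 http://www1.somecompany.com/webapp1
            ProxyPass        /webapp2 http://www2.somecompany.com/webapp2
            ProxyPassReverse /webapp2 http://www2.somecompany.com/webapp2
            ProxyPass        /webapp3 http://www3.somecompany.com/webapp3
            ProxyPassReverse /webapp3 http://www3.somecompany.com/webapp3
        </VirtualHost>

    ProxyPreserveHost is left off so the backend servers keep seeing their own wwwN hostnames, while ProxyPassReverse rewrites any redirects they issue back to the consolidated www URL.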

    Read the article
