Search Results

Search found 9696 results on 388 pages for 'proxy authentication'.

  • ssh login successful, but scp password gives me "Permission denied"

    - by YANewb
    I'm trying to get some blogging software up on an organizational remote server. I tried to set up an SSH key but was having problems, and decided that getting the blog up and running was more important than dealing with the SSH key issue, so I ran ssh-keygen -R remoteserver.com. Now I can successfully log in with ssh -v [email protected] and the correct password. Once logged in I can move around and read any file and directory that I should be able to read. But when I try to edit an existing -rw-r--r-- file with Vim it shows up as read-only, if I try to edit permissions I get chmod: file.ext: Operation not permitted, and if I try to scp a new file from my local machine I'm prompted for the remote user's password and then get scp: /home/path/to/file.ext: Permission denied. Since I didn't have any of these problems before I tried to set up the SSH key, I suspect these anomalies are a side effect of that, but I don't know how to troubleshoot this. So what does a foolish server newb, such as myself, need to do to get edit capability back as a remote user?

    Addendum 1: My user IDs are different between my local machine and the remote server. For ssh I run ssh -v [email protected]; if I whoami I get remoteuser. For scp I run scp file.ext [email protected]:/path/to/file.ext from the local directory containing file.ext, while logged in as the local user; if I whoami I get localuser. The ls -l for two different files I've tried to scp:

      -rw-r--r--@ 1 localuser  localgroup  20 Feb 11 21:03 phpinfo.php
      -rw-r--r--  1 root       localgroup   4 Feb 11 22:32 test.txt

    The ls -l for the file I've tried to edit with Vim:

      -rw-r--r-- 1 remoteuser remotegroup 76 Jul 27 2009 info.txt

    Addendum 2: In the past I've set up SSH keys for git repositories. I don't want to completely destroy them, so in an attempt to follow a deer's train of thinking I renamed my ~/.ssh/ to ~/.ssh-bak/, then tested the different types of access. The abridged version of the terminal commands and results is below; I think everything is working until the 8th line from the end.

      localcomputer:~ localuser$ ssh -v [email protected]
      OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
      debug1: Connection established.
      debug1: identity file /Users/localuser/.ssh/identity type -1
      debug1: identity file /Users/localuser/.ssh/id_rsa type -1
      debug1: identity file /Users/localuser/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
      debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.2
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      The authenticity of host 'remoteserver.com (###.###.###.###)' can't be established.
      RSA key fingerprint is ##:##:##:##:##:##:##:##:##:##:##:##:##:##:##:##.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added 'remoteserver.com,###.###.###.###' (RSA) to the list of known hosts.
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password
      debug1: Next authentication method: publickey
      debug1: Trying private key: /Users/localuser/.ssh/identity
      debug1: Trying private key: /Users/localuser/.ssh/id_rsa
      debug1: Trying private key: /Users/localuser/.ssh/id_dsa
      debug1: Next authentication method: password
      [email protected]'s password:
      debug1: Authentication succeeded (password).
      debug1: channel 0: new [client-session]
      debug1: Requesting [email protected]
      debug1: Entering interactive session.
      Last login: Sun Feb 12 18:00:54 2012 from 68.69.164.123
      FreeBSD 6.4-RELEASE-p8 (VKERN) #1 r101746: Mon Aug 30 10:34:40 MDT 2010
      [remoteuser@remoteserver /home]$ ls -l
      total ###
      -rw-r--r-- 1 remoteuser remotegroup 76 Aug 12 2009 info.txt
      [remoteuser@remoteserver /home]$ vim info.txt
      ~ {at the bottom of the VIM screen it tells me it's [read only]}
      [remoteuser@remoteserver /home]$ whoami
      remoteuser
      [remoteuser@remoteserver /home]$ logout
      debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
      debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
      debug1: channel 0: free: client-session, nchannels 1
      Connection to remoteserver.com closed.
      Transferred: sent 3872, received 12496 bytes, in 107.4 seconds
      Bytes per second: sent 36.1, received 116.4
      debug1: Exit status 0
      localcomputer:localdirectory name$ scp -v phpinfo.php [email protected]:/home/www/remotedirectory/phpinfo.php
      Executing: program /usr/bin/ssh host remoteserver.com, user remoteuser, command scp -v -t /home/www/remotedirectory/phpinfo.php
      OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
      debug1: Connection established.
      debug1: identity file /Users/localuser/.ssh/identity type -1
      debug1: identity file /Users/localuser/.ssh/id_rsa type -1
      debug1: identity file /Users/localuser/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
      debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.2
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'remoteserver.com' is known and matches the RSA host key.
      debug1: Found key in /Users/localuser/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password
      debug1: Next authentication method: publickey
      debug1: Trying private key: /Users/localuser/.ssh/identity
      debug1: Trying private key: /Users/localuser/.ssh/id_rsa
      debug1: Trying private key: /Users/localuser/.ssh/id_dsa
      debug1: Next authentication method: password
      [email protected]'s password:
      debug1: Authentication succeeded (password).
      debug1: channel 0: new [client-session]
      debug1: Requesting [email protected]
      debug1: Entering interactive session.
      debug1: Sending command: scp -v -t /home/www/remotedirectory/phpinfo.php
      Sending file modes: C0644 20 phpinfo.php
      Sink: C0644 20 phpinfo.php
      scp: /home/www/remotedirectory/phpinfo.php: Permission denied
      debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
      debug1: channel 0: free: client-session, nchannels 1
      debug1: fd 0 clearing O_NONBLOCK
      debug1: fd 1 clearing O_NONBLOCK
      Transferred: sent 1456, received 2160 bytes, in 0.6 seconds
      Bytes per second: sent 2322.3, received 3445.1
      debug1: Exit status 1
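
    A likely first check, independent of the key cleanup: scp writes as remoteuser, and both the Vim [read only] flag and the chmod refusal suggest an ownership problem on the remote side rather than an SSH one. A minimal sketch of what to compare, using the paths from the question:

      # on the remote server: who owns the target directory and file?
      ls -ld /home/www/remotedirectory
      ls -l /home/www/remotedirectory/phpinfo.php
      # and who am I, with which groups?
      whoami
      id

    chmod fails with "Operation not permitted" for anyone but the file's owner (or root), so if these owners don't match the login user, the fix is a chown by an administrator, not anything in ~/.ssh.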

    Read the article

  • Can Linux work as an 802.1X authenticator in bridge mode?

    - by Kartoch
    I want to create a lab for my students to study 802.1X, using netkit (a User-Mode-Linux-based network emulator). I can create the authentication server (FreeRADIUS) and configure the client with XSupplicant, connected through a switch (a Linux box in bridge mode). I'm looking for a way to configure the switch as the authenticator, i.e.: when the client is connected, the switch only forwards EAP packets to the authentication server; then the switch lets the user access the local network once the authentication server authorizes the client. At present there is a lot of documentation on doing this with a wireless access point, but none for a switch. Does anyone have an idea, or know of good software for this?
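
    One concrete candidate for the authenticator role: hostapd is not limited to wireless; with driver=wired it can run 802.1X port authentication on an Ethernet interface in front of a RADIUS server. A minimal hostapd.conf sketch (interface name, addresses and the secret are placeholders):

      interface=eth1
      driver=wired
      ieee8021x=1
      use_pae_group_addr=1
      own_ip_addr=192.168.0.1
      # the FreeRadius box
      auth_server_addr=192.168.0.10
      auth_server_port=1812
      auth_server_shared_secret=changeme

    Until authentication succeeds only EAPOL frames are processed on the port, which is essentially the forward-EAP-then-open behaviour described above; wiring this into the Linux bridge so the port joins the bridge only after authorization is the part that will need experimentation.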

    Read the article

  • unwanted password prompt pops up on web server?

    - by Paul
    My web server randomly asks for a password as though basic authentication is turned on. It's an IIS 7 web server, and you have to specifically install basic authentication in the Roles section; it isn't installed. The message that pops up is "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)". I cannot reproduce the problem, but a number of customers have reported it, and it only seems to appear for a small number of them. It pops up when they visit the homepage, and nothing in the IIS logs indicates a password box is being served (e.g. no 401 errors). Can anyone offer any advice? Thanks

    Read the article

  • Serving static files fails - nginx

    - by Sergei
    Hi, I've been looking and trying around all night, but without success. I configured nginx to serve my static files and proxy all other traffic:

      server {
          listen 80;
          server_name mydomain.com;

          access_log /home/boudewijn/www/bbt/brouwers/logs/access.log;
          error_log /home/boudewijn/www/bbt/brouwers/logs/error.log;

          location / {
              proxy_pass http://127.0.0.1:8080;
              include /etc/nginx/proxy.conf;
          }

          location /media/ {
              root /home/boudewijn/www/bbt/brouwers/;
          }
      }

    The proxy passing is no problem, but when I go to mydomain.com/media/ or try to access any test file over there, it fails. I paid attention to the difference between root and alias, my media folder exists, and I paid attention to the trailing slashes, but I still get a 404 when trying to access my static media files. Any help?
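
    For reference, the rule that usually bites here: with root the location prefix is kept when building the filesystem path, while with alias it is replaced. Under the configuration above, a request for /media/test.css must therefore exist at /home/boudewijn/www/bbt/brouwers/media/test.css. A sketch of both spellings, assuming that layout (the alias target is a hypothetical example):

      # root keeps the prefix: /media/test.css -> .../brouwers/media/test.css
      location /media/ {
          root /home/boudewijn/www/bbt/brouwers;
      }

      # alias replaces it: /media/test.css -> .../brouwers/static/test.css
      location /media/ {
          alias /home/boudewijn/www/bbt/brouwers/static/;
      }

    The error.log configured above will show exactly which path nginx tried for the 404 (or whether it was a permission error), which settles the root-versus-alias question quickly.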

    Read the article

  • using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. Currently, the program makes a request to a computer on port 80, and I use

      iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy will send out its request on port 81 (as all outbound port-80 traffic is being fed back into the proxy), so I want to do something like

      iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
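
    For what it's worth, DNAT accepts a port with no address: per the iptables extensions documentation, --to-destination [ipaddr][:port] makes both parts optional, and if no IP address is given only the destination port is rewritten. A sketch:

      # rewrite only the destination port; the original destination IP is kept
      iptables -t nat -A OUTPUT -p tcp --dport 81 -j DNAT --to-destination :80

    This works in the nat OUTPUT chain for locally generated packets, which matches the transparent-proxy experiment described above.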

    Read the article

  • Issue in extending a SharePoint web application

    - by GHIYA
    I have extended a web application in a farm. The main server is vsmoss1, where I set up vsmoss1 -> web application (80) and vm.com -> extended web app (of the above one, anonymous). The WFE servers are vsmoss2 and vsmoss3, and vm.com is load-balanced to go to vsmoss2 and vsmoss3. When I hit vm.com it works fine without authentication (it even shows the Content Query web part on my page). I know there is no need to do that, but when I hit vsmoss2 or vsmoss3 directly, the Content Query web part shows an error. Any solution for that? Finding this strange, I tried the following:

    I closed the extended web app on vsmoss2 and vsmoss3. Result: the site is up and running, but this time with authentication.
    I closed both the extended and the main web application. Result: the site on vsmoss2 and vsmoss3 is down.
    I closed only the main web application. Result: the site is up and running without authentication.

    Does anyone have an idea why it is behaving like this?

    Read the article

  • Apache ProxyPass ignore static files

    - by virtualeyes
    Having an issue with an Apache front server connecting to a Jetty application server. I thought that ProxyPass ! in a Location block was supposed to NOT pass processing on to the application server, but for some reason that is not happening in my case; Jetty shows a 404 on the missing statics (js, css, etc.). Here's my Apache (v2.4, BTW) virtual host block:

      DocumentRoot /path/to/foo
      ServerName foo.com
      ServerAdmin [email protected]

      RewriteEngine On

      <Directory /path/to/foo>
          AllowOverride None
          Require all granted
      </Directory>

      ProxyRequests Off
      ProxyVia Off
      ProxyPreserveHost On

      <Proxy *>
          AddDefaultCharset off
          Order deny,allow
          Allow from all
      </Proxy>

      # don't pass through requests for statics (image, js, css, etc.)
      <Location /static/>
          ProxyPass !
      </Location>

      <Location />
          ProxyPass http://localhost:8081/
          ProxyPassReverse http://localhost:8081/
          SetEnv proxy-sendchunks 1
      </Location>
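
    One thing worth checking before anything else: Apache merges <Location> sections in the order they appear in the configuration file, with later sections overriding earlier ones, so the catch-all <Location /> above may simply be overriding the /static/ exclusion. A sketch with the same directives, but the exclusion declared last:

      <Location />
          ProxyPass http://localhost:8081/
          ProxyPassReverse http://localhost:8081/
          SetEnv proxy-sendchunks 1
      </Location>

      # declared after the catch-all so it wins the merge
      <Location /static/>
          ProxyPass !
      </Location>

    The mod_proxy documentation's own exclusion example uses this shape: the general Location-based mapping first, the Location with ProxyPass ! after it.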

    Read the article

  • Allow from referer for HTTP-basic protected SSL apache site

    - by user64204
    I have an Apache site protected by HTTP basic authentication. The authentication is working fine. Now I would like to bypass authentication for users coming from a particular website, by relying on the HTTP Referer header. Here is the configuration:

      SetEnvIf Referer "^http://.*.example\.org" coming_from_example_org

      <Directory /var/www/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None

          Deny from all
          Allow from env=coming_from_example_org

          AuthName "login required"
          AuthUserFile /opt/http_basic_usernames_and_passwords
          AuthType Basic
          Require valid-user
          Satisfy Any
      </Directory>

    This is working fine for HTTP, but failing for HTTPS. My understanding is that in order to inspect the HTTP headers, the SSL handshake must be completed, but Apache wants to inspect the <Directory> directives before doing the SSL handshake, even if I place them at the bottom of the configuration file. Q: How could I work around this issue? PS: I'm not obsessed with the HTTP Referer header; I could use other options that would allow users from a known website to bypass authentication.

    Read the article

  • Security when SSH private keys are lost

    - by Shree Mandadi
    I can't explain my problem well enough in words, so let me take an example (and please multiply the complexity by a hundred for the solution). User A has two SSH private keys and, over time, has used their public keys on a number of servers. He lost one of them and has created a new pair. How does User A inform me (the sysadmin) that he has lost his key, and how do I manage all the servers to which he had access (I do not have a list of all servers that User A can reach)? In other words, how do I recall the public key associated with this private key? REF: with LDAP-based authentication, all servers would consult a single repository for authentication, and if I remove access or change the password on that server, all systems that use this LDAP for authentication are secured when User A loses his password.
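
    Absent a central key store, the recall is mechanical: take the public half of the lost key and strip the matching line from authorized_keys on every host the user ever touched. A rough sketch, assuming admin access and a hosts.txt inventory (both assumptions; lost_key.pub is a hypothetical file holding the lost key's public half):

      KEYDATA=$(awk '{print $2}' lost_key.pub)
      while read -r host; do
          # delete the authorized_keys line containing the key's base64 blob
          ssh root@"$host" "sed -i.bak '\\#${KEYDATA}#d' /home/usera/.ssh/authorized_keys"
      done < hosts.txt

    This is exactly the gap the LDAP comparison points at: if public keys are published centrally (for instance via OpenSSH's AuthorizedKeysCommand querying the directory), deleting the key in one place revokes it everywhere, and no per-server sweep is needed.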

    Read the article

  • IP Tunneling for Spotify? [closed]

    - by everwicked
    I was in the UK and enjoyed Spotify relentlessly. Now I've moved back to Greece and I can't even pay for the darn thing. So my idea was this: I have a server in France, and it has a fail-over IP in the UK. So I installed a proxy server on it and made it listen on the UK IP. So far so good. Then I played Spotify through the proxy server for a while just fine, and it thought I was in the UK. But now it gives me an error message that I'm in a different country than the one on my profile (UK). I don't really understand why; maybe they also geolocate the IP address of the client, not just the proxy server? Either way, I'm kind of stuck. Is there a way to tunnel Spotify's network traffic through my server transparently? Maybe a VPN or something similar? Thanks
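
    If the service geolocates the client connection itself (not just the web login), a transparent tunnel is the usual next step. One lightweight option is sshuttle, which pushes all TCP traffic through a plain SSH session without setting up a full VPN; a sketch, assuming SSH access to the box:

      # tunnel all TCP (0/0) and DNS lookups through the remote server
      sshuttle --dns -r user@myserver 0/0

    One caveat with this particular setup: tunnelled traffic exits from whatever source address the server routes out of, presumably the French primary IP here, so making the UK fail-over IP the egress address may additionally need a source-NAT rule on the server.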

    Read the article

  • tastypie posting and full example

    - by Justin M
    Is there a full tastypie Django example site and setup available for download? I have been wrestling with wrapping my head around it all day. I have the following code. Basically, I have a POST form that is handled with Ajax. When I click "submit" on my form and the Ajax request runs, the call returns "POST http://192.168.1.110:8000/api/private/client_basic_info/ 404 (NOT FOUND)". I have the URL configured correctly, I think: I can access http://192.168.1.110:8000/api/private/client_basic_info/?format=json just fine. Am I missing some settings or making some fundamental errors in my methods? My intent is that each user can fill out/modify one and only one "client basic information" form/model.

    A page:

      {% extends "layout-column-100.html" %}
      {% load uni_form_tags sekizai_tags %}
      {% block title %}Basic Information{% endblock %}
      {% block main_content %}
      {% addtoblock "js" %}
      <script language="JavaScript">
      $(document).ready( function() {
          $('#client_basic_info_form').submit(function (e) {
              form = $(this)
              form.find('span.error-message, span.success-message').remove()
              form.find('.invalid').removeClass('invalid')
              form.find('input[type="submit"]').attr('disabled', 'disabled')
              e.preventDefault();
              var values = {}
              $.each($(this).serializeArray(), function(i, field) {
                  values[field.name] = field.value;
              })
              $.ajax({
                  type: 'POST',
                  contentType: 'application/json',
                  data: JSON.stringify(values),
                  dataType: 'json',
                  processData: false,
                  url: '/api/private/client_basic_info/',
                  success: function(data, status, jqXHR) {
                      form.find('input[type="submit"]')
                          .after('<span class="success-message">Saved successfully!</span>')
                          .removeAttr('disabled')
                  },
                  error: function(jqXHR, textStatus, errorThrown) {
                      console.log(jqXHR)
                      console.log(textStatus)
                      console.log(errorThrown)
                      var errors = JSON.parse(jqXHR.responseText)
                      for (field in errors) {
                          var field_error = errors[field][0]
                          $('#id_' + field).addClass('invalid')
                              .after('<span class="error-message">'+ field_error +'</span>')
                      }
                      form.find('input[type="submit"]').removeAttr('disabled')
                  }
              }) // end $.ajax()
          }) // end $('#client_basic_info_form').submit()
      }) // end $(document).ready()
      </script>
      {% endaddtoblock %}
      {% uni_form form form.helper %}
      {% endblock %}

    Resources:

      from residence.models import ClientBasicInfo
      from residence.forms.profiler import ClientBasicInfoForm
      from tastypie import fields
      from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS
      from tastypie.authentication import BasicAuthentication
      from tastypie.authorization import DjangoAuthorization, Authorization
      from tastypie.validation import FormValidation
      from django.core.urlresolvers import reverse
      from django.contrib.auth.models import User

      class UserResource(ModelResource):
          class Meta:
              queryset = User.objects.all()
              resource_name = 'user'
              fields = ['username']
              filtering = {
                  'username': ALL,
              }
              include_resource_uri = False
              authentication = BasicAuthentication()
              authorization = DjangoAuthorization()

          def dehydrate(self, bundle):
              forms_incomplete = []
              if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1:
                  forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')})
              bundle.data['forms_incomplete'] = forms_incomplete
              return bundle

      class ClientBasicInfoResource(ModelResource):
          user = fields.ForeignKey(UserResource, 'user')

          class Meta:
              authentication = BasicAuthentication()
              authorization = DjangoAuthorization()
              include_resource_uri = False
              queryset = ClientBasicInfo.objects.all()
              resource_name = 'client_basic_info'
              validation = FormValidation(form_class=ClientBasicInfoForm)
              list_allowed_methods = ['get', 'post', ]
              detail_allowed_methods = ['get', 'post', 'put', 'delete']

    Edit: My resources file is now:

      from residence.models import ClientBasicInfo
      from residence.forms.profiler import ClientBasicInfoForm
      from tastypie import fields
      from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS
      from tastypie.authentication import BasicAuthentication
      from tastypie.authorization import DjangoAuthorization, Authorization
      from tastypie.validation import FormValidation
      from django.core.urlresolvers import reverse
      from django.contrib.auth.models import User

      class UserResource(ModelResource):
          class Meta:
              queryset = User.objects.all()
              resource_name = 'user'
              fields = ['username']
              filtering = {
                  'username': ALL,
              }
              include_resource_uri = False
              authentication = BasicAuthentication()
              authorization = DjangoAuthorization()

          #def apply_authorization_limits(self, request, object_list):
          #    return object_list.filter(username=request.user)

          def dehydrate(self, bundle):
              forms_incomplete = []
              if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1:
                  forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')})
              bundle.data['forms_incomplete'] = forms_incomplete
              return bundle

      class ClientBasicInfoResource(ModelResource):
          # user = fields.ForeignKey(UserResource, 'user')

          class Meta:
              authentication = BasicAuthentication()
              authorization = DjangoAuthorization()
              include_resource_uri = False
              queryset = ClientBasicInfo.objects.all()
              resource_name = 'client_basic_info'
              validation = FormValidation(form_class=ClientBasicInfoForm)
              #list_allowed_methods = ['get', 'post', ]
              #detail_allowed_methods = ['get', 'post', 'put', 'delete']

          def apply_authorization_limits(self, request, object_list):
              return object_list.filter(user=request.user)

    I made the user field of the ClientBasicInfo nullable and the POST seems to work. I want to try updating the entry now. Would that just be appending the pk to the Ajax URL? For example /api/private/client_basic_info/21/? When I submit that form I get a 501 NOT IMPLEMENTED message. What exactly haven't I implemented? I am subclassing ModelResource, which should have all the ORM-related functions implemented according to the docs.
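
    On the follow-up question: yes, with tastypie a detail update is a PUT to the detail URI, i.e. the pk appended plus a trailing slash. A sketch against the URLs above (credentials and field names are placeholders):

      # PUT replaces the whole object, so send every writable field
      curl --user localuser:secret -X PUT \
           -H 'Content-Type: application/json' \
           -d '{"first_name": "Justin", "last_name": "M"}' \
           http://192.168.1.110:8000/api/private/client_basic_info/21/

    tastypie answers 501 when it cannot map the request's HTTP method to an implemented handler, so it is worth double-checking that the running server actually imported the resource definition you think it did (with or without the commented-out allowed-methods lines) and that the request really hits the detail URL.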

    Read the article

  • Windows Advanced Firewall certificate based IPSEC

    - by Tim Brigham
    I'm working on migrating from IPSEC settings stored under 'IP Security Policies on Active Directory' to the 'Windows Firewall with Advanced Security' for my 2008+ boxes. I have successfully been able to get this set up using Kerberos authentication; however, my Openswan implementation on my Linux boxes uses certificates. Whenever I try changing the authentication method to computer certificate (using RSA and my root CA), the connection bombs out. I've made this change both at a connection request policy and in the IPSEC settings on the root Windows Firewall with Advanced Security node. The Windows event log shows the authentication request taking place but failing to negotiate a mode. What am I missing here?

    Read the article

  • Ubuntu SSH issue

    - by palani
    Hi, I have an Ubuntu machine. Under /home/user/.ssh I have my id_rsa.pub key. I carefully copied that key and pasted it into my Git repository account. After that, when I tried to connect from my local system to my Git repository, I got the following error: warning: Authentication failed. Disconnected; no more authentication methods available (No further authentication methods available.). I removed SSH from the system, re-enabled it, and tried again, but no luck. I have no idea what's happening with my SSH key; can anyone please advise? Note: I noticed both /home/user/.ssh and /home/user/.ssh2 in my home directory.

    Read the article

  • How to deny access to disabled AD accounts via kerberos in pam_krb5?

    - by Phil
    I have a working AD/Linux/LDAP/KRB5 directory and authentication setup, with one small problem. When an account is disabled, SSH public-key authentication still allows user login. It's clear that Kerberos clients can identify a disabled account, as kinit and kpasswd return "Clients credentials have been revoked" with no further password / interaction. Can PAM be configured (with "UsePAM yes" in sshd_config) to disallow logins for disabled accounts, where authentication is done by public key? This doesn't seem to work:

      account [default=bad success=ok user_unknown=ignore] pam_krb5.so

    Please don't introduce winbind in your answer - we don't use it.

    Read the article

  • SSL client auth in nginx with multiple server section

    - by Bastien974
    I want to implement ssl_verify_client in nginx. This works perfectly when I have only one server section listening on 443. In my case I have several, all listening on 443 but with different server_names. For one particular server (proxy.mydomain.com) I'm adding the SSL client verification, but when I test connectivity with

      openssl s_client -connect proxy.mydomain.com:443 -cert xxx.crt -key xxx.key

    and then do

      GET / HTTP/1.1
      host: proxy.mydomain.com

    it's not working: 400 No required SSL certificate was sent. I think nginx is not receiving the proper server name and is directing the request to the first server listening on 443. So I tried listening on another port, and it worked right away. What's the issue and how can I fix it?
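
    That hunch is easy to confirm from the client side: openssl s_client only sends the TLS SNI extension when asked, and without SNI nginx serves the default (first) server block for port 443, where no client certificate is expected. A sketch of the same test with SNI enabled:

      openssl s_client -connect proxy.mydomain.com:443 \
          -servername proxy.mydomain.com -cert xxx.crt -key xxx.key

    If the 400 disappears with -servername, the nginx configuration is fine and only non-SNI clients would ever hit the wrong block.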

    Read the article

  • 250k connections for comet with node.js

    - by Nenad
    How can I implement node.js to handle 250k connections as a comet server (on the client side we use socket.io)? Would using nginx as a proxy/load balancer be the right solution, or would HAProxy be the better way? Has anyone real-world experience with 100k+ connections and can share their setup? Would a setup like this be the right one (quad-core CPU per server, starting 4 instances of node.js per server)?

                     nginx (as proxy / load-balancing server)
                    /                  |                  \
          node server #1        node server #2        node server #3
           4 instances           4 instances           4 instances
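
    For the nginx option, the config has roughly the following shape; the ports and the ip_hash choice are assumptions (socket.io generally needs session affinity, and WebSocket proxying needs nginx 1.3.13 or newer):

      upstream comet_backend {
          ip_hash;                  # keep a client on the same node instance
          server 127.0.0.1:8001;
          server 127.0.0.1:8002;
          server 127.0.0.1:8003;
          server 127.0.0.1:8004;
      }

      server {
          listen 80;
          location / {
              proxy_pass http://comet_backend;
              proxy_http_version 1.1;                 # required for WebSockets
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
          }
      }

    Worth noting: in the nginx releases contemporary with this question, WebSocket proxying was not yet available, which is one reason HAProxy in TCP mode was often preferred in front of socket.io at this scale.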

    Read the article

  • How do I rewrite *.example.com to www.example.com?

    - by Lekensteyn
    In my network, I have some Ubuntu machines which need to download files from nl.archive.ubuntu.com. Since it's quite a waste of time to download everything multiple times, I've set up a squid proxy for caching the data. Another use for this proxy was rewriting requests for archive.ubuntu.com or *.archive.ubuntu.com to nl.archive.ubuntu.com, because this mirror is faster than the US mirrors. This worked quite well, but after a recent reinstall of my caching machine, the configuration was lost. I remember having a separate perl program for handling this rewrite. How do I set up such a squid proxy, one that rewrites the host *.example.com to www.example.com and caches the result of the latter?
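
    The moving part is squid's url_rewrite_program: squid hands the helper one request per line (URL first, then client and method fields) and reads back the rewritten URL. A sketch of both halves, using the classic rewrite interface of squid 2.x/3.0-era releases; the helper path is a placeholder:

      # squid.conf
      url_rewrite_program /etc/squid/rewrite_mirror.py
      url_rewrite_children 5

      #!/usr/bin/env python
      # /etc/squid/rewrite_mirror.py
      import re, sys

      pattern = re.compile(r'^http://(?:[^/.]+\.)?archive\.ubuntu\.com/')

      while True:
          line = sys.stdin.readline()
          if not line:
              break
          url = line.split()[0]            # first field of each request is the URL
          sys.stdout.write(pattern.sub('http://nl.archive.ubuntu.com/', url) + '\n')
          sys.stdout.flush()               # squid needs unbuffered, line-by-line replies

    Caching then comes for free, since squid fetches and stores the nl.archive.ubuntu.com objects. Newer squids (3.4+) prefer the 'OK rewrite-url=...' reply format, so the exact output line depends on the installed version.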

    Read the article

  • SSH authentication with public key fails

    - by palani
    I have my id_rsa.pub key under /home/user/.ssh. I carefully copied that key and pasted it into my Git repository account. While trying to connect from my local system to my Git repository, I got the following error: warning: Authentication failed. Disconnected; no more authentication methods available (No further authentication methods available.) I removed SSH from the system, re-enabled it, and tried again, but no luck. I have no idea what's happening with my SSH key; can anyone please advise? Note: I noticed both /home/user/.ssh and /home/user/.ssh2 in my home directory.
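
    The usual checklist when a freshly pasted public key is still rejected: the private half must sit next to the .pub and actually be offered, and permissions must be strict enough for OpenSSH to use it. A sketch, with the Git host as a placeholder:

      ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub   # both halves must exist
      chmod 700 ~/.ssh && chmod 600 ~/.ssh/id_rsa
      ssh -v [email protected]                # look for "Offering public key:"

    The stray /home/user/.ssh2 directory is also worth a look: that layout belongs to the commercial SSH2/ssh.com toolchain, whose key format OpenSSH only understands after conversion with ssh-keygen -i, so a key generated there would fail in exactly this way.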

    Read the article

  • mysql cluster virtual ip

    - by user225995
    I am new to MySQL Cluster, and the cluster and versions are not my choice. I set up four machines: two of them management nodes, two of them data/SQL nodes (ndbd and mysqld). I also integrated a master/slave configuration with MySQL Utilities. Everything is working fine. MySQL version 5.6.17, NDB 7.3.5, servers Ubuntu 14.04. There will not be many transactions; the only important thing is HA, so everything must be duplicated. My problem is the virtual IP. Since I have only one farm, with a master/slave configuration, how can I do it without a proxy? If I must use a proxy, which proxy is better?
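
    For a proxy-less floating IP, the common pattern is VRRP via keepalived: the virtual address lives on whichever SQL node is currently master and moves automatically on failure. A minimal sketch for one of the two mysqld hosts (address, interface and priority are placeholders; the backup gets a lower priority):

      # /etc/keepalived/keepalived.conf
      vrrp_instance MYSQL_VIP {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 101
          advert_int 1
          virtual_ipaddress {
              192.168.1.100/24
          }
      }

    Applications then connect to 192.168.1.100 only. Pairing this with a track_script that probes mysqld makes the address move when MySQL dies, rather than only when the whole host does.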

    Read the article

  • authenticating to exchange 2010 smarthost?

    - by Richard Hansen
    I have a Postfix mail server that should relay all outgoing mail to an Exchange 2010 server (the Exchange box is my smarthost). I have administrator access to the Exchange 2010 system, but I'm not very familiar with it. How should I set up authentication on the Exchange 2010 system? I guess I could add a standard user with a mailbox on the Exchange box, then configure my Postfix box to log in on port 587 to relay mail. That option doesn't feel right; it seems like there should be a way to do server-to-server authentication, not just client-to-server authentication. Is there? If so, how would I set it up?
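
    For completeness, the Postfix half of the authenticated-relay option is a standard SASL client setup; a sketch with placeholder names (the Exchange receive connector must offer an AUTH mechanism Postfix supports, e.g. LOGIN/PLAIN over TLS):

      # main.cf
      relayhost = [exchange.example.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = may

      # /etc/postfix/sasl_passwd  (run postmap on it afterwards)
      [exchange.example.com]:587    relayuser:secret

    The server-to-server alternative is to skip SMTP AUTH entirely: create a dedicated receive connector on the Exchange side scoped to the Postfix box's IP address and permitted to relay, which avoids parking credentials in a mailbox-bearing user account.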

    Read the article

  • Search Domain Not Working With Squid

    - by Kyle Brandt
    I just set up a squid proxy as a parent proxy to HAVP. When I or other users try to access a domain with an address like "http://foo", I get the following squid error in the browser: The dnsserver returned: Server Failure: The name server was unable to process this query. However, "http://foo.companyname.com" works fine. The search domain in resolv.conf on both the client and the proxy host is companyname.com. (Is there a better term for "search domain"?) Is there a way to correct this, maybe something in the squid.conf file?
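
    The likely explanation: squid resolves names with its own internal DNS client, which does not apply the resolv.conf search list by default, so the single-label name "foo" goes to the resolver unqualified. squid.conf has a knob for exactly this case:

      # append the local domain to hostnames that contain no dots
      append_domain .companyname.com

    After a squid reconfigure, http://foo should resolve the same way http://foo.companyname.com does.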

    Read the article

  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers. Actually, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, I'm using the following servers:

    1 x Puppet master to deploy my servers.
    2 x Apache reverse-proxy front-end servers.
    1 x Apache httpd server which hosts the Python WSGI applications, bound via mod_wsgi.
    4 x clustered MongoDB servers.

    Everything is OK concerning the reverse proxies and the DB back end: I can easily add a new reverse proxy or a new DB node. My problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that's not really convenient in terms of deployment for my developers, etc. So if you have a solution which is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
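
    One fairly elegant equivalent of the Tomcat cluster here is mod_proxy_balancer on the existing reverse-proxy tier, with each WSGI box as a BalancerMember: scaling out then means provisioning a clone with Puppet and adding one line (or regenerating a Puppet-managed template) on the proxies. A sketch with hypothetical hostnames:

      <Proxy balancer://wsgicluster>
          BalancerMember http://wsgi1.internal:8000
          BalancerMember http://wsgi2.internal:8000
          # new capacity = one more BalancerMember line
      </Proxy>

      ProxyPass        / balancer://wsgicluster/
      ProxyPassReverse / balancer://wsgicluster/

    Deployment to the WSGI nodes themselves then stays in Puppet's hands instead of rsync, which keeps the developers' workflow identical whether there is one back end or ten.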

    Read the article

  • GUI for viewing Apache headers

    - by user49249
    Is there any GUI for viewing the Apache headers being served by a chain of reverse-proxy servers? I have a cloud setup which uses a few proxy servers between the client and the server that has to serve the original request. All servers are Unix servers. When there is a problem I can't immediately explain, reporting it here is painful: downloading and ftp-ing those headers with all the logs, logging in to each proxy server in the chain, opening a browser with the X display exported to some remote server each time, observing the HTTP responses, checking the request on each of those servers, and then posting the logs with configuration and responses takes at least 2-3 hours of typing an email. Is there a shorter way to do this?
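
    Short of a dedicated GUI, the headers at every hop can be pulled from any shell in seconds, which removes the X-display export step entirely; a sketch:

      # -s silent, -D - dump response headers to stdout, -o discard the body
      curl -s -D - -o /dev/null http://proxy-node.example/page

      # or the full request/response conversation:
      curl -v http://proxy-node.example/page > /dev/null

    Run against each proxy in the chain in turn, the outputs can be pasted straight into an email; on the browser side the same information is visible in Firebug's Net panel or the WebKit developer tools without leaving the desktop.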

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics

    Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is set, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.OutputStream. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods - IsGZipSupported and GZipEncodePage - that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output):

      /// <summary>
      /// Determines if GZip is supported
      /// </summary>
      /// <returns></returns>
      public static bool IsGZipSupported()
      {
          string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
          if (!string.IsNullOrEmpty(AcceptEncoding) &&
              (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
              return true;
          return false;
      }

      /// <summary>
      /// Sets up the current page or handler to use GZip through a Response.Filter
      /// IMPORTANT: You have to call this method before any output is generated!
      /// </summary>
      public static void GZipEncodePage()
      {
          HttpResponse Response = HttpContext.Current.Response;

          if (IsGZipSupported())
          {
              string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

              if (AcceptEncoding.Contains("deflate"))
              {
                  Response.Filter = new System.IO.Compression.DeflateStream(
                      Response.Filter, System.IO.Compression.CompressionMode.Compress);
                  Response.Headers.Remove("Content-Encoding");
                  Response.AppendHeader("Content-Encoding", "deflate");
              }
              else
              {
                  Response.Filter = new System.IO.Compression.GZipStream(
                      Response.Filter, System.IO.Compression.CompressionMode.Compress);
                  Response.Headers.Remove("Content-Encoding");
                  Response.AppendHeader("Content-Encoding", "gzip");
              }

              // Allow proxy servers to cache encoded and unencoded versions separately
              Response.AppendHeader("Vary", "Content-Encoding");
          }
      }

    As you can see, the actual assignment of the filter is as simple as:

      Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and that any pages that are cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding) - hence the Vary header above.
    To use this utility function now is trivially easy: in any ASP.NET code that wants to compress its Response output, you simply use:

      protected void Page_Load(object sender, EventArgs e)
      {
          WebUtils.GZipEncodePage();

          Entry = WebLogFactory.GetEntry();

          var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
              "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
          if (entries == null)
              throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

          this.repEntries.DataSource = entries;
          this.repEntries.DataBind();
      }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, by default output over a certain size is GZip encoded. The output that is generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncode():

      if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
          WebUtils.GZipEncodePage();

      Response.ContentType = ControlResources.STR_JsonContentType;
      HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss - Watch your Caching

    Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box - it has to be explicitly implemented. Other low-level HTTP clients on other platforms don't support GZip out of the box either. The problem is that your application, your Web server and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

      // Allow proxy servers to cache encoded and unencoded versions separately
      Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content - not just the URL. The same thing applies if you do output caching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or, in some cases, by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable.
    To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file:

      public override string GetVaryByCustomString(HttpContext context, string custom)
      {
          // Override Caching for compression
          if (custom == "GZIP")
          {
              string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

              if (string.IsNullOrEmpty(acceptEncoding))
                  return "";
              else if (acceptEncoding.Contains("gzip"))
                  return "GZIP";
              else if (acceptEncoding.Contains("deflate"))
                  return "DEFLATE";

              return "";
          }

          return base.GetVaryByCustomString(context, custom);
      }

    In a page that uses output caching you then specify:

      <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    to use that custom rule.

    It's all Fun and Games until ASP.NET throws an Error

    Ok, so you're up and running with GZip, you have your caching squared away, and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this:

      [screenshot: a page full of binary garbage]

    Lovely, isn't it? What's happened here is that I have WebUtils.GZipEncode() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

      HTTP/1.1 500 Internal Server Error
      Cache-Control: private
      Content-Type: text/html; charset=utf-8
      Date: Sat, 30 Apr 2011 22:21:08 GMT
      Content-Length: 2138
      Connection: close

      ?`I?%&/m?{J?J??t??` ... binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

      protected void Application_Error(object sender, EventArgs e)
      {
          // Remove any special filtering especially GZip filtering
          Response.Filter = null;
          ...
      }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.
    Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

      protected void Application_PreSendRequestHeaders()
      {
          // ensure that if GZip/Deflate Encoding is applied that headers are set
          // also works when error occurs if filters are still active
          HttpResponse response = HttpContext.Current.Response;
          if (response.Filter is GZipStream &&
              response.Headers["Content-encoding"] != "gzip")
              response.AppendHeader("Content-encoding", "gzip");
          else if (response.Filter is DeflateStream &&
                   response.Headers["Content-encoding"] != "deflate")
              response.AppendHeader("Content-encoding", "deflate");
      }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the headers accordingly. This is actually a better solution, since it is generic - it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared, and this code actually handles that. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filter when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey - this is what it is, and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip

    I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported, but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content:

      Built-in GZip/Deflate Compression in IIS 7.x
      HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7

    Read the article
