Search Results

Search found 33453 results on 1339 pages for 'alias method'.


  • Enabling DNS for IPv6 infrastructure

    After successfully distributing IPv6 address information via DHCPv6 in your local network, it might be time to start offering some more services. Usually, we use host names to communicate with other machines instead of their bare IPv6 addresses. In the following paragraphs we are going to enable our own DNS name server with IPv6 address resolution. This is the third article in a series on IPv6 configuration:

        Configure IPv6 on your Linux system
        DHCPv6: Provide IPv6 information in your local network
        Enabling DNS for IPv6 infrastructure
        Accessing your web server via IPv6

    Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.

    What's your name and your IPv6 address?

        $ sudo service bind9 status
         * bind9 is running

    If the service is not recognised, you have to install it first on your system. This is done quickly and easily like so:

        $ sudo apt-get install bind9

    Once again, there is no specialised package for IPv6; the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file:

        $ sudo nano /etc/bind/named.conf.options

        acl iosnet {
                127.0.0.1;
                192.168.1.0/24;
                ::1/128;
                2001:db8:bad:a55::/64;
        };
        listen-on { iosnet; };
        listen-on-v6 { any; };
        allow-query { iosnet; };
        allow-transfer { iosnet; };

    The most important directive is listen-on-v6. It makes your named bind to the IPv6 addresses configured on your system. The easiest choice is the value any, which makes named bind to all available IPv6 addresses during start. More details and explanations are found in the man pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate that the service is running and see which IP and IPv6 addresses it is bound to:

        $ sudo service bind9 restart
        $ sudo netstat -lnptu | grep "named\W*$"
        tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named
        tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named
        tcp6       0      0 :::53                 :::*                    LISTEN      1734/named
        udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named
        udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named
        udp6       0      0 :::53                 :::*                                1734/named

    Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server:

        $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2
        Using domain server:
        Name: 2001:db8:bad:a55::2
        Address: 2001:db8:bad:a55::2#53
        Aliases:

        www.6bone.net is an alias for 6bone.net.
        6bone.net has IPv6 address 2001:5c0:1000:10::2

    Alright, our newly configured BIND named is fully operational. Perhaps you are more familiar with the dig command. Here is the same kind of IPv6 host name lookup; it provides more details about that particular host as well as the domain in general:

        $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA

    More details on the Berkeley Internet Name Domain (BIND) daemon and IPv6 are available in chapter 22.1 of Peter Bieringer's HOWTO on IPv6.
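    Before restarting, a quick syntax check can save you a failed start; here is a minimal sketch using the checker utility that ships with BIND (on Debian/Ubuntu it is typically in the bind9utils package, an assumption about your package layout):

        # parse the configuration without starting the daemon; no output means it is clean
        $ sudo named-checkconf /etc/bind/named.conf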
    Setting up your own DNS zone

    Now that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolution. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. To achieve this, we first define the zone in the configuration file named.conf.local:

        $ sudo nano /etc/bind/named.conf.local

        //
        // Do any local configuration here
        //
        zone "ios.mu" {
                type master;
                file "/etc/bind/zones/db.ios.mu";
        };

    Here we specify the location of our zone database file. Next, we create it and add our host names, our IP and our IPv6 addresses:

        $ sudo nano /etc/bind/zones/db.ios.mu

        $ORIGIN .
        $TTL 259200     ; 3 days
        ios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (
                                        2014031101 ; serial
                                        28800      ; refresh (8 hours)
                                        7200       ; retry (2 hours)
                                        604800     ; expire (1 week)
                                        86400      ; minimum (1 day)
                                        )
                                NS      server.ios.mu.
        $ORIGIN ios.mu.
        server                  A       192.168.1.2
        server                  AAAA    2001:db8:bad:a55::2
        client1                 A       192.168.1.3
        client1                 AAAA    2001:db8:bad:a55::3
        client2                 A       192.168.1.4
        client2                 AAAA    2001:db8:bad:a55::4

    With a couple of machines in place, it's time to reload that new configuration. Note: each time you change your zone databases you have to update the serial, too. Named loads the plain-text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial, named will not use the new records from the text file but the indexed ones, unless you flush the index and force a reload of the zone. This is easily done by either restarting named:

        $ sudo service bind9 restart

    or by reloading the configuration using the name server control utility, rndc:

        $ sudo rndc reconfig

    Check your log files for any error messages and whether the new zone database has been accepted. Next, we resolve a host name to its IPv6 address like so:

        $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2
        Using domain server:
        Name: 2001:db8:bad:a55::2
        Address: 2001:db8:bad:a55::2#53
        Aliases:

        server.ios.mu has IPv6 address 2001:db8:bad:a55::2

    Looks good. Alternatively, you could simply have pinged the system using the ping6 command instead of the regular ping:

        $ ping6 server
        PING server(2001:db8:bad:a55::2) 56 data bytes
        64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms
        64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms
        ^C
        --- ios1 ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1001ms
        rtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms

    That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
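    When editing zone files by hand, it also helps to validate the zone and reload just that one zone instead of the whole configuration; a minimal sketch using BIND's own tools, with the zone name and path from this article:

        # verify the zone file parses and report the serial it would load
        $ sudo named-checkzone ios.mu /etc/bind/zones/db.ios.mu
        # reload only this zone, leaving the rest of named untouched
        $ sudo rndc reload ios.mu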

  • HTTP Error 403.18 - Forbidden - asp.net mvc web api

    - by CoffeeCode
    I have deployed the default ASP.NET MVC 4 Web API project to my Windows Server 2008 RC machine and am experiencing some issues when calling the Web API actions. I'm quite new to the server/IIS configuration part. I can open the home page, but the API part doesn't work. I'm getting this error:

        HTTP Error 403.18 - Forbidden
        The specified request cannot be processed in the application pool that is configured for this resource on the Web server

        Module         IIS Web Core
        Notification   BeginRequest
        Handler        StaticFile
        Error Code     0x00000000
        Requested URL  http://server.com:80/index.php?p=MvcApplication2_deploy/api/values/
        Physical Path  C:\Inetpub\vhosts\server.com\Webservice\index.php
        Logon Method   Not yet determined
        Logon User     Not yet determined

    I have checked URL Rewrite (it is empty), have disabled WebDAV and also checked the Handler Mappings; everything seems to be OK there. Could anyone give me some hints about what could be wrong? Thanks!!!
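    A 403.18 generally means the request was routed into a different application pool than the one the application is configured for, often by a rewrite to another app (note the requested URL goes through index.php). A hedged way to inspect and fix the pool assignment from the command line on IIS 7+ (the site/app names here are placeholders for your own):

        %windir%\system32\inetsrv\appcmd.exe list app
        rem if the app sits in the wrong pool, reassign it, e.g.:
        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/Webservice" /applicationPool:"ASP.NET v4.0"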

  • Windows 2008 DFS Replication Issue

    - by e0594cn
    We have two Windows Server 2008 R2 servers running DFS-R (named dfs01 and dfs02) in a 2008 R2 domain. Today I found that files on server dfs01 are not being replicated to dfs02, so I used the following command to check the backlog:

        dfsrdiag backlog /rgname:<group> /rfname:<folder> /sendingmember:dfs01 /receivingmember:dfs02

    After executing the command, I get the following error:

        Failed to execute GetVersionVector method.
        Err: -2147217406 (0x80041002) Operation failed.

    How can I resolve this?
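    Error 0x80041002 is the generic WMI "object not found" code (WBEM_E_NOT_FOUND), so dfsrdiag's query may be failing at the WMI layer rather than in replication itself. A hedged way to probe the DFSR WMI provider directly on each member, assuming the stock root\microsoftdfs namespace:

        rem query the replicated-folder state straight from the DFSR WMI provider
        wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state

    If that query also errors out, the WMI repository or the DFSR provider is what needs repairing; if it works, a State of 4 normally indicates healthy replication.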

  • ssh tunnel error : channel 3: open failed: connect failed: Connection refused

    - by soroosh.strife
    I'm trying to access and browse the internet through an SSH server, so on my laptop (Ubuntu 12.04) I do this:

        ssh -D 9999 root@server-ip

    Then in the network proxy settings on my laptop I set:

        HTTP proxy 127.0.0.1 port 9999

    But when I try to open a page in my browser it doesn't connect, and in my terminal I get errors like these:

        channel 4: open failed: connect failed: Connection refused
        channel 3: open failed: connect failed: Connection refused
        channel 5: open failed: connect failed: Connection refused
        channel 4: open failed: connect failed: Connection refused
        channel 6: open failed: connect failed: Connection refused

    I'm new to this and found this method on the internet, so I don't know what I'm doing wrong. I'd really appreciate it if anyone could help me make this work.
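    One detail worth noting: ssh -D provides a SOCKS proxy, not an HTTP proxy, so pointing a browser's HTTP-proxy field at it typically fails in exactly this way. A minimal sketch for testing the tunnel as SOCKS (curl's --socks5-hostname flag also sends DNS lookups through the tunnel):

        # start the dynamic forward
        ssh -D 9999 root@server-ip
        # in another terminal, test it as a SOCKS5 proxy
        curl --socks5-hostname 127.0.0.1:9999 http://example.com/

    In the desktop proxy settings, the equivalent would be filling in the SOCKS host field with 127.0.0.1:9999 and leaving the HTTP proxy field empty.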

  • how to build openvpn without libpam?

    - by hugemeow
    Since I have no root privileges to install libpam, ./configure fails. Is there any method by which I can build OpenVPN without libpam?

        checking for OPENSSL_CRYPTO... yes
        checking for OPENSSL_SSL... yes
        checking for EVP_CIPHER_CTX_set_key_length... yes
        checking for ENGINE_load_builtin_engines... yes
        checking for ENGINE_register_all_complete... yes
        checking for ENGINE_cleanup... yes
        checking for ssl_init in -lpolarssl... no
        checking for aes_crypt_cbc in -lpolarssl... no
        checking for lzo1x_1_15_compress in -llzo2... no
        checking for lzo1x_1_15_compress in -llzo... no
        checking for PKCS11_HELPER... no
        checking git checkout... yes
        configure: error: libpam required but missing

    What's more, why can't I disable the libpam option?

        [mirror@innov openvpn]$ ./configure --help | grep libpam
          --enable-pam-dlopen     dlopen libpam [default=no]
                                  C compiler flags for libpam
          LIBPAM_LIBS             linker flags for libpam
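    Since LIBPAM_* variables appear in the help text, the build follows the usual autoconf convention of letting you point it at a non-system libpam, and some OpenVPN versions also have a switch to skip the PAM auth plugin entirely. Both lines below are assumptions to verify against your exact source tree; ./configure --help is authoritative:

        # option 1: skip building the auth-pam plugin (flag name varies by OpenVPN version)
        ./configure --disable-plugin-auth-pam

        # option 2: build libpam under your home directory and point configure at it
        ./configure LIBPAM_CFLAGS="-I$HOME/local/include" LIBPAM_LIBS="-L$HOME/local/lib -lpam"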

  • Sharepoint 2007: author.dll status code?

    - by CrazyNick
    Is there a way to find any info using the /_vti_bin/_vti_aut/author.dll status code?

        <html><head><title>vermeer RPC packet</title></head>
        <body>
        <p>method=
        <p>status=
        <ul>
        <li>status=393226
        <li>osstatus=0
        <li>msg=The form submission cannot be processed because it exceeded the maximum length allowed by the Web administrator. Please resubmit the form with less data.
        <li>osmsg=
        </ul>
        </body>
        </html>

  • Can't access dfs namespace over vpn

    - by cpf
    I've recently configured 2 servers in AD at the same domain level. They are physically separated and permanently connected through a site-to-site VPN for DFS replication. All well, but when users connect to either site through VPN (from home, e.g.) they can't use the domain-based path:

        \\domain.com\data

    Internally this works perfectly, and resolving domain.com when connected through VPN returns the correct IP. I've tried Google to figure things out; more people seem to have this issue, but I found no real solution. Can anyone explain why this is happening? A solution would be especially helpful! Thanks in advance.
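    With domain-based namespaces, the client first asks a domain controller for a referral and then picks a target based on site information, so a VPN client outside any AD site can get a referral it cannot reach. A hedged way to see which referrals the client actually received and which target it selected (dfsutil syntax varies by Windows version, so check dfsutil /? on yours):

        rem older dfsutil builds:
        dfsutil /pktinfo
        rem Vista/7/2008-era builds:
        dfsutil cache referral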

  • apache2 doesn't start with location

    - by Geod24
    I have a small domain, which I use only for personal purposes. I'm the main user, and have at most 3-4 users at the same time. I use apache2 with Passenger to serve Redmine. So I start with an empty apache2:

        root@xxxxx:/home/# service apache2 start
        [ ok ] Starting web server: apache2.
        root@xxxxx:/home/# a2dissite
        Your choices are:
        Which site(s) do you want to disable (wildcards ok)?

    Then I enable my site and restart (not reload) apache2:

        root@xxxxx:/home/# a2ensite 200-redmine
        Enabling site 200-redmine.
        To activate the new configuration, you need to run:
          service apache2 reload
        root@xxxxx:/home/# service apache2 restart
        [FAIL] Restarting web server: apache2 failed!
        [warn] The apache2 instance did not start within 20 seconds. Please read the log files to discover problems ... (warning).
        root@xxxxx:/home/# service apache2 restart
        [FAIL] Restarting web server: apache2 failed!
        [warn] There are processes named 'apache2' running which do not match your pid file which are left untouched in the name of safety, Please review the situation by hand. ... (warning).
        root@xxxxx:/home/# pidof apache2
        20948

    Here's my 200-redmine.conf:

        PerlLoadModule Apache::Redmine
        <VirtualHost *:80>
            ServerName redmine.xxxxx.xxx
            DocumentRoot /var/www/redmine/public/
            ErrorLog ${APACHE_LOG_DIR}/redmine.error.log
            CustomLog ${APACHE_LOG_DIR}/redmine.access.log common
            MaxRequestLen 20971520
            <Directory "/var/www/redmine/public/">
                Options Indexes ExecCGI FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride all
            </Directory>
            SetEnv GIT_PROJECT_ROOT /opt/git/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
            <Location /git>
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler
                PerlAccessHandler Apache::Authn::Redmine::access_handler
                AuthType Basic
                Require valid-user
                AuthName "Redmine Git Repository"
                RedmineDSN "DBI:mysql:database=redmine;host=localhost:3306"
                RedmineDbUser "redmine"
                RedmineDbPass "password"
                RedmineCacheCredsMax 50
            </Location>
        </VirtualHost>

    Now if I comment out the ScriptAlias / Location stuff, it works! In addition, starting the server with 200-redmine disabled and then enabling it works, but apache2 will die randomly. Plus the Location doesn't work.
    The logs show nothing:

        root@xxxxx:/home/# ll /var/log/apache2/
        total 8
        drwxr-xr-x 2 root root 4096 Oct 30 07:52 coredump
        -rw-r--r-- 1 root root    0 Nov  4 02:39 default.access.log
        -rw-r--r-- 1 root root 2356 Nov  4 02:39 default.error.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 other_vhosts_access.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 redmine.access.log
        -rw-r--r-- 1 root root    0 Nov  4 02:39 redmine.error.log
        root@xxxxx:/home/# ll /var/log/apache2/coredump/
        total 0
        root@xxxxx:/home/# cat /var/log/apache2/default.error.log
        [ 2013-11-04 02:39:36.0130 21471/7fcf090f4740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21470', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
        [ 2013-11-04 02:39:36.0255 21474/7f9a99fda740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/request
        [ 2013-11-04 02:39:36.0507 21479/7f8316b0f740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21470/generation-0/logging
        [ 2013-11-04 02:39:36.0511 21471/7fcf090f4740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started!
        [ 2013-11-04 02:39:36.3158 21495/7fba6f686740 agents/Watchdog/Main.cpp:452 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nogroup', 'default_python' => 'python', 'default_ruby' => '/usr/bin/ruby', 'default_user' => 'nobody', 'log_level' => '0', 'max_instances_per_app' => '0', 'max_pool_size' => '6', 'passenger_root' => '/usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_pid' => '21491', 'web_server_type' => 'apache', 'web_server_worker_gid' => '33', 'web_server_worker_uid' => '33' }
        [ 2013-11-04 02:39:36.3304 21498/7f0106d9b740 agents/HelperAgent/Main.cpp:597 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/request
        [ 2013-11-04 02:39:36.3522 21503/7f92ad392740 agents/LoggingAgent/Main.cpp:330 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.21491/generation-0/logging
        [ 2013-11-04 02:39:36.3525 21495/7fba6f686740 agents/Watchdog/Main.cpp:635 ]: All Phusion Passenger agents started!
    And at last:

        root@xxxxx:/home/# apache2ctl -t -D DUMP_VHOSTS
        VirtualHost configuration:
        *:80                   is a NameVirtualHost
                 default server redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
                 port 80 namevhost redmine.xxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
                 port 80 namevhost redmine.xxxxx.xxx (/etc/apache2/sites-enabled/200-redmine.conf:5)
        root@xxxxx:/home/# uname -a
        Linux xxxx.xxx 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux
        root@xxxxx:/home/# dpkg --list | grep apache2
        ii apache2                    2.4.6-3                      amd64  Apache HTTP Server
        ii apache2-bin                2.4.6-3                      amd64  Apache HTTP Server (binary files and modules)
        ii apache2-data               2.4.6-3                      all    Apache HTTP Server (common files)
        ii apache2-utils              2.4.6-3                      amd64  Apache HTTP Server (utility programs for web servers)
        ii libapache2-mod-fcgid       1:2.3.9-1                    amd64  FastCGI interface module for Apache 2
        ii libapache2-mod-passenger   4.0.10-1                     amd64  Rails and Rack support for Apache2
        ii libapache2-mod-perl2       2.0.8+httpd24-r1449661-6+b1  amd64  Integration of perl with the Apache2 web server
        ii libapache2-mod-perl2-dev   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - development files
        ii libapache2-mod-perl2-doc   2.0.8+httpd24-r1449661-6     all    Integration of perl with the Apache2 web server - documentation
        ii libapache2-mod-proxy-html  1:2.4.6-3                    amd64  Transitional package for apache2-bin
        ii libapache2-mod-svn         1.7.13-2                     amd64  Apache Subversion server modules for Apache httpd
        ii libapache2-reload-perl     0.12-2                       all    module for reloading Perl modules when changed on disk
        ii libapache2-svn             1.7.13-2                     all    Apache Subversion server modules for Apache httpd (dummy package)
        root@xxxxx:/home/# a2dismod
        Your choices are: access_compat alias auth_basic authn_core authn_file authz_core authz_host authz_svn authz_user autoindex dav dav_svn deflate dir env fcgid filter mime mpm_event negotiation passenger perl proxy proxy_http rewrite setenvif status
        Which module(s) do you want to disable (wildcards ok)?
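    One thing that stands out in the dpkg output: this is Apache 2.4, while the vhost uses 2.2-style access directives (Order/Allow). With access_compat loaded these still parse, but as a hedged experiment it may be worth switching the Directory block to the 2.4-native form; a sketch, not a confirmed fix for the Location hang:

        <Directory "/var/www/redmine/public/">
            Options Indexes ExecCGI FollowSymLinks
            AllowOverride All
            # Apache 2.4 replacement for "Order allow,deny" + "Allow from all"
            Require all granted
        </Directory>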

  • Problem with "Transfer-Encoding: chunked" in Apache 2.2

    - by Michal Niklas
    One of our web service's clients uses an Axis2 application that sends HTTP 1.1 queries with a Transfer-Encoding: chunked header. Such a query is refused by our Apache 2.2 with the message:

        <title>411 Length Required</title>
        </head><body>
        <h1>Length Required</h1>
        <p>A request of the requested method POST requires a valid Content-length.<br />

    In the Apache logs there is:

        [Mon May 17 09:06:04 2010] [error] [client 127.0.0.1] chunked Transfer-Encoding forbidden: /app/webservices/soap.hdb

    When I send such a message without Transfer-Encoding: chunked and with Content-Length, all works fine. I searched for how to solve this problem, but I only found how to disable Transfer-Encoding: chunked on the client side. Is there any way to do it on the server side?

  • Best way to migrate from IIS6 to IIS6

    - by darko-romanov
    Hi, I need to move all my sites from a server running IIS 6 to another one that has the same OS (Windows Server 2003) and the same IIS version. I'm trying to work out the best way to do it. Searching on Google I've found that there are at least 2 methods: one uses the IIS Migration Tool, the other the Web Deployment Tool. I don't know which method is best; it also seems that both methods export one site at a time, and I have about 100 sites hosted. What would you do?
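    For what it's worth, the Web Deployment Tool can address the IIS 6 metabase as a whole rather than site by site; a hedged sketch (the metaKey provider path and server name are assumptions to verify against the msdeploy documentation):

        rem dry run first: sync the whole W3SVC metabase tree to the target server
        msdeploy -verb:sync -source:metaKey=lm/w3svc -dest:metaKey=lm/w3svc,computerName=TARGETSERVER -whatif
        rem drop -whatif to perform the actual migration

    Content directories and registered COM/ISAPI components still need to exist on the target, so a dry run against one or two sites is a sensible first step.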

  • Cross domain javascript form filling, reverse proxy

    - by Michel van Engelen
    I need a JavaScript form filler that can bypass the 'same origin policy' most modern browsers implement. I made a script that opens the desired website/form in a new browser window. With the handle returned by the window.open method, I want to retrieve the inputs with theWindowHandler.document.getElementById('inputx') and fill them (access denied). Is it possible to solve this problem by using ISAPI_Rewrite (official site) in IIS 6 acting as a reverse proxy? If so, how would I configure the reverse proxy? This is how far I got:

        RewriteEngine on
        RewriteLogLevel 9
        LogLevel debug
        RewriteRule CarChecker https://the.actualcarchecker.com/CheckCar.aspx$1 [NC,P]

    The rewrite works (http://ourcompany.com/ourapplication/CarChecker), as evident in the logging. From within our company site I can run the car checker as if it were in our own domain. Except the 'same origin policy' is still in force. Regards, Michel
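    Two hedged observations on that rule: $1 only has a value if the pattern actually captures something, and the same-origin policy is keyed to the URL the browser loads, so window.open must be given the proxied local path, not the remote site's URL. A sketch along those lines (the pattern and paths are assumptions based on the URL shown above):

        RewriteEngine on
        # capture everything after CarChecker/ and forward it to the remote app
        RewriteRule ^/?ourapplication/CarChecker/(.*)$ https://the.actualcarchecker.com/CheckCar.aspx$1 [NC,P]

    With that in place, opening window.open('/ourapplication/CarChecker/') keeps the opened document on your own origin, which is what lets the form-filling script touch its DOM.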

  • Office365 DirSync Active Directory Integration

    - by dean
    I am preparing to deploy Office 365 for my organization. We have an on-premise Active Directory domain controller (Windows Server 2012 R2). We would like to leverage our Active Directory for automatic user provisioning in Office 365 and for password synchronization, using the DirSync tool. Our Active Directory domain is example.pvt. Email is currently on Rackspace Exchange and email addresses follow the form [email protected]. The Active Directory user logon name follows the form firstinitiallastname. My questions are:

        1. What Active Directory attribute(s) can be used to provision the email address in Office 365?
        2. Is it possible to use the E-mail field in Active Directory to provision the email address in Office 365?
        3. Will the fact that our Active Directory domain has a different extension (.pvt vs. .com) cause a problem with our planned provisioning method?

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
    I am having some real issues setting up a VPN between our office and an AWS VPC. The "tunnels" appear to be up, however I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall - FVS336GV2. If you look at the attached config downloaded from VPC (#3 Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels, do I use the inside tunnel IPs (e.g. 169.254.254.2/30) or do I use my internal network subnet (10.1.1.0/24)? I have tried both: when I tried the local network (10.1.1.x), the tracert stops at the router; when I tried the "inside" IPs, the tracert to the Amazon VPC (10.0.0.x) goes out over the internet. This all leads me to the next question: for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses and where did Amazon generate them from? 169.254.254.x seems odd. With a device like this, is the VPN behind the firewall? I have tweaked the IP addresses below so that they are not "real". I am fully aware this is probably badly worded; if any further info/screenshots would help, let me know.

        Amazon Web Services
        Virtual Private Cloud

        IPSec Tunnel #1
        ================================================================================
        #1: Internet Key Exchange Configuration

        Configure the IKE SA as follows:
        - Authentication Method    : Pre-Shared Key
        - Pre-Shared Key           : ---
        - Authentication Algorithm : sha1
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 28800 seconds
        - Phase 1 Negotiation Mode : main
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        #2: IPSec Configuration

        Configure the IPSec SA as follows:
        - Protocol                 : esp
        - Authentication Algorithm : hmac-sha1-96
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 3600 seconds
        - Mode                     : tunnel
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows:
        - DPD Interval : 10
        - DPD Retries  : 3

        IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway:
        - TCP MSS Adjustment       : 1387 bytes
        - Clear Don't Fragment Bit : enabled
        - Fragmentation            : Before encryption

        #3: Tunnel Interface Configuration

        Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. All traffic transmitted to the tunnel interface is encrypted and transmitted to the Virtual Private Gateway. The Customer Gateway and Virtual Private Gateway each have two addresses that relate to this IPSec tunnel. Each contains an outside address, upon which encrypted traffic is exchanged. Each also contains an inside address associated with the tunnel interface. The Customer Gateway outside IP address was provided when the Customer Gateway was created. Changing the IP address requires the creation of a new Customer Gateway. The Customer Gateway inside IP address should be configured on your tunnel interface.
        Outside IP Addresses:
        - Customer Gateway        : 217.33.22.33
        - Virtual Private Gateway : 87.222.33.42

        Inside IP Addresses:
        - Customer Gateway        : 169.254.254.2/30
        - Virtual Private Gateway : 169.254.254.1/30

        Configure your tunnel to fragment at the optimal size:
        - Tunnel interface MTU : 1436 bytes

        #4: Static Routing Configuration

        To route traffic between your internal network and your VPC, you will need a static route added to your router.
        Static Route Configuration Options:
        - Next hop : 169.254.254.1
        You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels.

        IPSec Tunnel #2
        ================================================================================
        #1: Internet Key Exchange Configuration

        Configure the IKE SA as follows:
        - Authentication Method    : Pre-Shared Key
        - Pre-Shared Key           : ---
        - Authentication Algorithm : sha1
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 28800 seconds
        - Phase 1 Negotiation Mode : main
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        #2: IPSec Configuration

        Configure the IPSec SA as follows:
        - Protocol                 : esp
        - Authentication Algorithm : hmac-sha1-96
        - Encryption Algorithm     : aes-128-cbc
        - Lifetime                 : 3600 seconds
        - Mode                     : tunnel
        - Perfect Forward Secrecy  : Diffie-Hellman Group 2

        IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows:
        - DPD Interval : 10
        - DPD Retries  : 3

        IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway:
        - TCP MSS Adjustment       : 1387 bytes
        - Clear Don't Fragment Bit : enabled
        - Fragmentation            : Before encryption

        #3: Tunnel Interface Configuration

        Outside IP Addresses:
        - Customer Gateway        : 217.33.22.33
        - Virtual Private Gateway : 87.222.33.46

        Inside IP Addresses:
        - Customer Gateway        : 169.254.254.6/30
        - Virtual Private Gateway : 169.254.254.5/30

        Configure your tunnel to fragment at the optimal size:
        - Tunnel interface MTU : 1436 bytes

        #4: Static Routing Configuration

        Static Route Configuration Options:
        - Next hop : 169.254.254.5
        You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels.

    EDIT #1: After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels were indeed my network subnets, which further confuses me about what these "inside" IP addresses are for. The problem is, the results are not consistent whatsoever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I can't ping anything, but Amazon AND the router are telling me tunnels 1/2 are fine. I guess the router/VPN hardware I have just isn't up to the job.

    EDIT #2: Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/RDP again.

    EDIT #3: Screenshot of the route table that the router has built up. Current state: Tunnel 1 is still up and going strong, Tunnel 2 is still down and won't re-connect.
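    On the "inside" addresses: they are link-local /30s that number the two ends of each tunnel interface, which is why the static route's next hop is the VGW's end of that /30 rather than anything in your LAN or the VPC. As an illustration only (a Linux gateway with a route-based tunnel interface, not the FVS336GV2's GUI; the interface name and VPC CIDR are assumptions):

        # number our end of the tunnel with the Customer Gateway inside address
        ip addr add 169.254.254.2/30 dev vti0
        # send VPC-bound traffic to the VGW's inside address across the tunnel
        ip route add 10.0.0.0/16 via 169.254.254.1 dev vti0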

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction: An attempt to combine HBase and HDFS results in the following:

        2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
        2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
        java.io.IOException: Exception in makeDirOnFileSystem
                at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
                at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
                at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
                at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
                at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
                at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
                at java.lang.Thread.run(Thread.java:744)
        Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
                at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
                at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
                at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
                at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
                at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
                at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
                at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
                at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:422)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
                at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
                at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
                at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
                at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
                at
        org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
                at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
                at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
                ... 6 more

    while configuration and system settings are as follows:

        [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
        Found 1 items
        -rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso

    /etc/hadoop/conf/core-site.xml:

        <configuration>
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:8020</value>
          </property>
        </configuration>

    /etc/hbase/conf/hbase-site.xml:

        <configuration>
          <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:8020/hbase</value>
          </property>
          <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
          </property>
        </configuration>

    /etc/hadoop/conf/hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/var/lib/hadoop-hdfs/cache</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/tmp/hellodatanode</value>
          </property>
        </configuration>

    NameNode directory permissions:

        [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
        total 8
        -rwxrwxrwx. 1 hbase hdfs   15 Jun  8 23:43 in_use.lock
        drwxrwxrwx. 2 hbase hdfs 4096 Jun  8 23:43 current

    HMaster is able to start if the fs.defaultFS property has been commented out in core-site.xml. The NameNode is listening:

        [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
        tcp        0      0 0.0.0.0:50070      0.0.0.0:*          LISTEN      off (0.00/0/0)
        tcp        0      0 33.33.33.33:50070  33.33.33.1:57493   ESTABLISHED off (0.00/0/0)

    and is accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp.

    Question: How to solve the makeDirOnFileSystem exception and let HBase connect to HDFS?
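    The root cause is visible in the AccessControlException: the hbase user needs WRITE on / to create /hbase, and / is owned by vagrant with drwxr-xr-x. A hedged sketch of the usual fix: pre-create the HBase root directory as a user allowed to write to / and hand its ownership to hbase (here run as vagrant, the owner of /; adjust if your HDFS superuser differs):

        # create the HBase root dir and give it to the hbase user
        sudo -u vagrant hadoop fs -mkdir /hbase
        sudo -u vagrant hadoop fs -chown hbase /hbase

    After that, HMaster no longer needs to mkdir under / itself and should get past makeDirOnFileSystem.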

  • Installing trunk Mono on Ubuntu

    - by kalvi
    I am quite new to Linux. I have to install Mono on a Linux machine from source code. I know the general method: read the instructions, install dependencies, ./configure, make, make install. However, this approach doesn't fit into the general Ubuntu package management routine: other programs I install from .debs won't notice the installed version of Mono, and I can't remove Mono using the standard Ubuntu package management tools. Is there an easy solution? I have seen that Ubuntu actually has several separate packages for the Mono project. Should I build packages from Mono? How can I follow the same conventions as the Ubuntu packagers? Where should I look for info on packaging? Can you give step-by-step instructions? Thanks!
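    As a lightweight middle ground between a raw make install and full Debian packaging, the checkinstall tool wraps make install in a generated .deb that dpkg can later remove; a minimal sketch (the package name and version are placeholders, not Ubuntu's official naming):

        sudo apt-get install checkinstall
        ./configure --prefix=/usr/local
        make
        # runs "make install" and records the files into a removable .deb
        sudo checkinstall --pkgname=mono-trunk --pkgversion=0.0.$(date +%Y%m%d)
        # remove later with: sudo dpkg -r mono-trunk

    This will not make other .debs depend on your build the way the official mono packages do, but it does keep the install visible to, and removable by, the package manager.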

  • VLC without border/window decoration in Windows

    - by timberwo7ves
    I'm trying to run VLC (2.1, 64-bit) without any chrome on Windows 7. You can achieve this by going to Preferences and, in the Interface tab, unchecking "Integrate video in interface", and also, in the Video tab, unchecking "Window decorations". The problem lies in the fact that without window decorations there is no apparent way to move or resize the video window. In GOM Player, for example, you can move the window by dragging on the video itself; is there an option for this in VLC? Ideally, I would like to move the window by the method described above (by dragging the video), and would like the window decorations to reappear on mouseover to allow resizing. I'm a new VLC user and unsure how far the customisation goes; I'd settle for just moving the window via dragging the video, if this is possible through an advanced setting. There is a similar question here, but it's not exactly the same and offers no solution to this particular question.

  • Exchange 2007 Owa (OnlineVersion) can not authenticate

    - by DingosBarn
    The Exchange authentication DLL is:

        https://red002.mail.emea.microsoftonline.com/owa/auth/owaauth.dll

    The sending style is:

        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

    and I am sending the following message:

        destination=https://red002.mail.emea.microsoftonline.com/owa/[email protected]/?ae=Folder&t=IPF.Appointment&[email protected]&password=xxxx

    I'm getting this error:

        The remote server returned an error: (400) Bad Request.

    If I use the path in a web browser it is accessible, so it is not really a bad request. The server is Exchange Server 2007 and I replaced the path for OWA accordingly. Why can it not authenticate against that path?
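    For comparison, OWA's forms-based auth endpoint normally expects the credentials as separate URL-encoded form fields in the POST body rather than appended to the destination URL. A hedged sketch of an equivalent request with curl; the field names are the usual OWA login-form fields and are an assumption worth checking against the login page's HTML:

        curl -v -c cookies.txt \
          -d 'destination=https://red002.mail.emea.microsoftonline.com/owa/' \
          -d 'username=user@domain' \
          -d 'password=xxxx' \
          -d 'flags=0' -d 'trusted=0' \
          https://red002.mail.emea.microsoftonline.com/owa/auth/owaauth.dll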

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like:

        Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480

    A few more are dropped with an invalid LogFormat:

        Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04:28:22 root root localhost 127.0.0.1 SMTP - 14755

    My conf:

        LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    I believe this matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too:

        Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152

    I added SkipHosts="" to the .conf file, but to no avail. I feel like awstats really has some personal quarrel with me today.
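    On the first class of drops: awstats processes records strictly in chronological order and discards anything older than the last record it has already seen, so an out-of-order mail log (or a re-run over already-processed dates) produces exactly these "date lower than" messages. One hedged approach is to feed it a merged, chronologically sorted log via the logresolvemerge tool that ships with awstats; the path below is the Debian layout, an assumption to adjust for your install:

        # merge/sort the mail logs chronologically before awstats reads them
        perl /usr/share/awstats/tools/logresolvemerge.pl /var/log/mail.log* > /tmp/mail-sorted.log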

  • PHP, ANT and virtualhosts

    - by dbasch
    Hi all, I use the following standard folder structure with my projects:

        workspace
          myproject
            conf
              development.properties
              production.properties
            src
            build.xml
            build.properties
        build
          myproject

    Unfortunately, working with scripted languages nullifies the concept of separating the "workspace" from the "build". In my development environment, I use a virtual host for each project. The virtual host for a project is configured during the "deploytodevelopment" ANT task. Which method would you recommend for integrating PHP into my build process?

        1. Change the virtual-hosts setup to point to the workspace/myproject/src folder.
           Edit the PHP in the workspace/myproject/src folder.

    or

        2. Check out another working copy of the myproject/src folder to the build/myproject folder.
           Change the virtual-hosts setup to point to the build/myproject folder.
           Edit the PHP in the build/myproject folder.

  • What's the most efficient way to reclaim disk space after deleting lots of data from a database on Sybase ASE 15?

    - by Ernie Longmire
    As I understand it, based on some research but zero real-world experience with Sybase ASE, the only way to reclaim disk space once it has been allocated to a database is to export that database, create a new DB with the same schema, and reload all the exported data into the new database. Is this correct, or is there some other method? Then, assuming the above is correct and a full export-recreate-reload is required: what's the most efficient way to do that? Are there tools that will automate all or part of the process? I'm being told we would have to write separate bcp export and import commands for each and every object in the database, which, if true, sounds easily scriptable by someone who knows Sybase ASE well enough. (I don't.) This seems to me like a really basic housekeeping task, and it feels like I'm missing something obvious.
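    On the "easily scriptable" point, a hedged sketch of the idea: pull the user table names out of sysobjects and emit one bcp out per table. The login, server name and database are placeholders, isql's output usually needs a little cleanup, and the matching loop with "in" covers the reload into the rebuilt database:

        # list user tables, then bcp each one out in character mode
        isql -Usa -SMYSERVER -b -o tables.txt <<'EOF'
        use mydb
        go
        select name from sysobjects where type = 'U'
        go
        EOF
        while read t; do
            bcp mydb.."$t" out "$t.bcp" -Usa -SMYSERVER -c
        done < tables.txt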

  • Bulk convert PNG-24 to PNG-8 files with best quality

    - by Gavin
    Hi, can anybody recommend a good method of bulk converting a large number of PNG-24 files to PNG-8 with as little loss of quality as possible while maintaining transparency? I've tried ImageMagick, but the resulting images weren't quite as crisp as I'd like. Using Paint.NET I was able to achieve far better results, but I can't batch process with this tool as far as I know. The settings I used with ImageMagick, in case there are better options:

        convert file.png -depth 4 file-output.png

    I've also been playing with OptiPNG, but I haven't discovered a way of making sure the output images are PNG-8. Cheers, Gavin
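    Two hedged things worth trying: -depth 4 squeezes the image down to roughly 16 colours, which alone can explain the loss of crispness; ImageMagick's PNG8: output prefix with an explicit 256-colour palette is usually gentler, and pngquant is a dedicated RGBA-to-palette quantiser that tends to preserve alpha well:

        # ImageMagick: force PNG-8 output with a full 256-colour palette
        convert file.png -colors 256 PNG8:file-output.png
        # pngquant: writes file-fs8.png alongside the original
        pngquant 256 file.png

    Both are easy to drop into a shell loop over *.png for bulk conversion.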

  • Imaging Dell OEM Windows 7 install onto other Dell laptops purchased

    - by lolkjmz
    We have ordered 70 Dell laptops directly from Dell. We are wondering if we can take the existing OEM install of Windows on one of the laptops, add some files, and then deploy that image to the rest of the laptops; we are trying to avoid building an image from the ground up. I would imagine that since we purchased 70 identical laptops the same install would work on all 70, but I am not sure. I have imaged three laptops with this image and attached them all to the domain with no problems. They are also all activated, and updates ran fine as well. We are using this device to duplicate the drives: http://www.startech.com/HDD/Duplicators/4-Bay-USB-3-0-eSATA-to-SATA-Standalone-1-3-HDD-Hard-Drive-Duplicator-Dock~SATDOCK4U3RE Will this method work, or is there potential for problems in the future?
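    One step that is commonly recommended before cloning (hedged; also check your OEM licensing terms for reimaging): generalize the reference install with Sysprep so each clone gets a fresh machine identity and re-runs hardware detection on first boot, rather than shipping 70 byte-identical Windows installs:

        rem run on the reference laptop immediately before capturing/duplicating the drive
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown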

  • Getting a list of patches in an HPSA patch policy

    - by asm
    I'm trying to get a list of patches contained in a Patch Policy in HPSA. I can get what I need via the Twister web interface (under PatchPolicy.getPatches(), give it an ID and it happily returns the list of patches contained), but I'm having a hard time getting this to work via the Pytwist interface. I haven't used Pytwist for much besides some very basic Device manipulation, and Python is not my forte. I create the TwistServer object, then a PatchPolicy object from that (which I think is working), but I can't figure out how or where to call the getPatches() method from in Python-land. If there's a way to dig this out of the database itself, that would work too, but I can't seem to find much in there along these lines besides the vendor-recommended patching stuff, and we use custom policies.

  • Error "403 Forbidden" on Sharepoint Search Settings Page

    - by user21924
    Hello. I thought I had solved this nightmare by re-entering the values in my SSP properties setup; however, on accessing the Search Settings page the error has reared its ugly head again. All solutions point to the method listed here:

        http://www.routtlogics.com/blog/Lists/Posts/Post.aspx?ID=6
        http://social.technet.microsoft.com/Forums/en-US/sharepointadmin/thread/f00651cd-e452-45b9-b19e-90e89c3c3ad4
        http://blogs.technet.com/sushrao/archive/2009/03/26/microsoft-office-sharepoint-server-2007-moss-403-forbidden-error-when-clicked-on-search-settings-page.aspx

    The workaround(s) above basically state that granting the local group WSS_WPG read and write permission to the Tasks folder in the Windows directory will solve the problem. However, whenever I try to change the permissions on this folder I get an access denied message, even when logged in as a domain administrator, an enterprise administrator, and even the SharePoint farm administrator. Please, guys, how do I get around this access denied issue? Thanks
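    When even administrators get access denied on a filesystem object, taking ownership first usually unblocks the ACL change; a hedged sketch from an elevated prompt (icacls is available on Server 2003 SP2 and later, and the rights mirror the read/write recommendation in the linked posts):

        rem take ownership of the Tasks folder, then grant WSS_WPG modify rights
        takeown /f %windir%\Tasks
        icacls %windir%\Tasks /grant WSS_WPG:(OI)(CI)M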

  • Word 2010 & 2007 Blue Background on screen as default

    - by poor1
    Default blue background and white text in Microsoft Word: I have just moved to Word 2010 (the Student version, now released) and although it is possible to create individual documents with a blue background, it is not possible to set a blue background as the program default. I understand this option was discontinued with Office 2007. The only way I can open a document with a blue background is to create a template with a blue background and use that for each document I wish to create. I'm sure there must be a method of hacking the registry to accomplish this. Can you assist? There must be countless people who wish they knew how.
