Search Results

Search found 5868 results on 235 pages for 'reverse proxy'.


  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (Courier IMAP, Exim, SquirrelMail, etc.) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront.

    We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth-authenticated Google Apps for Education.)

    With SquirrelMail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary SquirrelMail login details, with JavaScript onLoad code to immediately submit the form.

    Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an add-on for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money, so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers.

    So, the real question: has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably expect to make authenticate with one of those (such as anything that will accept Apache's authentication)? With Forefront?

    Failing that, does anybody have tips on convincing OWA Forms Based Authentication (FBA) to let us somehow "pre-login" the user - either logging in as them and passing cookies back to the user, or giving the user a pre-filled form that autosubmits like we do with SquirrelMail? This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible.

    I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering, slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me too, and involves setting up a complex proxy config, which seems more likely to break. And, again, it doesn't look like it would work with Forefront.

    Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure it to do user authentication and not just free/busy access authorization?
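    For reference, the SquirrelMail-style fallback translated to OWA would look roughly like the sketch below. This is a hypothetical illustration only: the endpoint path and field names are assumptions based on the stock Exchange 2010 FBA login page and should be verified against the actual /owa/auth/logon.aspx source of a given deployment, and the same objections about handling plaintext passwords apply.

        <!-- Hypothetical auto-submitting pre-login form for OWA FBA.
             Endpoint and field names are assumed from the stock login page;
             verify both against your own deployment before relying on this. -->
        <form id="owa" method="POST"
              action="https://mail.example.com/owa/auth/owaauth.dll">
          <input type="hidden" name="destination"
                 value="https://mail.example.com/owa/">
          <input type="hidden" name="flags" value="4">
          <input type="hidden" name="username" value="DOMAIN\jdoe">
          <input type="hidden" name="password" value="(from the CAS hack)">
        </form>
        <script>document.getElementById('owa').submit();</script>

    Whether Forefront's pre-authentication tolerates a form POSTed from another origin is exactly the open question raised above.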


  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to set up nginx to redirect HTTP requests to HTTPS. Many are outdated, or just flat out wrong.

        # MANAGED BY PUPPET
        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        # setup server with or without https depending on gitlab::gitlab_ssl variable
        server {
          listen *:80;
          server_name gitlab.localdomain;
          server_tokens off;
          root /nowhere;
          rewrite ^ https://$server_name$request_uri permanent;
        }

        server {
          listen *:443 ssl default_server;
          server_name gitlab.localdomain;
          server_tokens off;
          root /home/git/gitlab/public;
          ssl on;
          ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
          ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
          ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers AES:HIGH:!ADH:!MDF;
          ssl_prefer_server_ciphers on;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

          # if a file which is not found in the root folder is requested,
          # then the proxy passes the request to the upstream (gitlab puma)
          location @gitlab {
            proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Ssl on;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitlab;
          }
        }

    I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10, whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect HTTP requests to HTTPS?

    I've also tried the following configurations:

        listen *:80;
        server_name <%= @fqdn %>;
        #root /nowhere;
        #rewrite ^ https://$server_name$request_uri? permanent;
        #rewrite ^ https://$server_name$request_uri permanent;
        #return 301 https://$server_name$request_uri;
        #return 301 http://$server_name$request_uri;
        #return 301 http://192.168.33.10$request_uri;
        return 301 http://$host$request_uri;

    The logs:

        tailf /var/log/nginx/access.log
        192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"
        192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0"

        tailf /var/log/nginx/gitlab_error.log
        2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10"

    Resources:

        http://wiki.nginx.org/Pitfalls
        How to make nginx redirect
        How to force or redirect to SSL in nginx?
        nginx ssl redirect
        Nginx & Https Redirection
        https://www.tinywp.in/301-redirect-wordpress/
        How to force or redirect to SSL in nginx?
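    One detail worth checking (an inference from the symptom, not something visible in the posted config): the site is being visited by bare IP, but the port-80 block only answers to server_name gitlab.localdomain and is not marked default_server, so a request for http://192.168.33.10 falls through to whichever server block is the default on port 80 - typically the packaged "Welcome to nginx" site. A minimal sketch of a redirect block that catches requests regardless of the Host header:

        # Hypothetical fix: make the redirect block the default on port 80
        # so requests by bare IP also hit it.
        server {
            listen 80 default_server;
            server_name gitlab.localdomain;
            return 301 https://$host$request_uri;
        }

    Disabling the stock default site (on Debian-style layouts, the symlink in /etc/nginx/sites-enabled/) and re-testing after nginx -t and a reload would confirm or rule this out.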


  • Routing not working correctly using the Laravel framework

    - by samayres1992
    I'm using the book written by one of the guys that created Laravel, so I'd like to think for the most part this code isn't horribly wrong. My server is set up with nginx serving all static files and Apache serving PHP. My config for each is the following.

    apache2:

        <VirtualHost *>
          # Host that will serve this project.
          ServerName litl.it
          # The location of our projects public directory.
          DocumentRoot /var/www/litl.it/laravel/public
          # Useful logs for debug.
          CustomLog /var/log/apache.access.log common
          ErrorLog /var/log/apache.error.log
          # Rewrites for pretty URLs, better not to rely on .htaccess.
          <Directory /var/www/litl.it/laravel/public>
            <IfModule mod_rewrite.c>
              Options -MultiViews
              RewriteEngine On
              RewriteCond %{REQUEST_FILENAME} !-f
              RewriteRule ^ index.php [L]
            </IfModule>
          </Directory>
        </VirtualHost>

    nginx:

        server {
          # Port that the web server will listen on.
          listen 80;
          # Host that will serve this project.
          server_name litl.it *.litl.it;
          # Useful logs for debug.
          access_log /var/log/nginx.access.log;
          error_log /var/log/nginx.error.log;
          rewrite_log on;
          # The location of our projects public directory.
          root /var/www/litl.it/laravel/public;
          # Point index to the Laravel front controller.
          index index.php;

          location / {
            # URLs to attempt, including pretty ones.
            try_files $uri $uri/ /index.php?$query_string;
          }

          # Remove trailing slash to please routing system.
          if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
          }

          # PHP FPM configuration.
          location ~* \.php$ {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy_params;
            try_files index index.php $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root/php/$fastcgi_script_name;
          }

          # We don't need .ht files with nginx.
          location ~ /\.ht {
            deny all;
          }

          location @proxy {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy_params;
          }

          error_page 403 /error/403.html;
          error_page 404 /error/404.html;
          error_page 405 /error/405.html;
          error_page 500 501 502 503 504 /error/5xx.html;
          location ^~ /error/ {
            internal;
            root /var/www/litl.it/lavarel/public/error;
          }
        }

    I'm including these server configs, as I feel this may be the issue. Here is my incredibly basic routing file that should return "routing is working" on domain.com/test, but instead it just returns the homepage:

        <?php
        Route::get('/', function() {
          return View::make('hello');
        });
        Route::get('/test', function() {
          return "routing is working";
        });

    Any ideas where I'm going wrong? I'm following this tutorial very closely and I'm confused about why there is an issue. Thanks!
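    One possible reading of the symptom (a sketch of a diagnosis, not verified against this exact setup): for /test, the location / block internally rewrites the request to /index.php?$query_string, so by the time it is proxied to Apache the URI is already /index.php and Laravel's router sees the / route - hence the homepage. The PHP location also mixes fastcgi_* directives (which do nothing when proxying to Apache) with proxy_pass. A simpler arrangement that hands Apache the original URI and lets mod_rewrite do the front-controller step:

        # Hypothetical simplification: fall back to Apache with the
        # original request URI intact, so Laravel sees /test.
        location / {
            try_files $uri $uri/ @proxy;
        }
        location @proxy {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy_params;
        }

    The fastcgi_* lines could then be dropped entirely, since PHP-FPM is not in the picture in this setup.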


  • Unable to find valid certification path to requested target while CAS authentication

    - by Dmitriy Sukharev
    I'm trying to configure CAS authentication. It requires both CAS and the client application to use the HTTPS protocol. Unfortunately we have to use a self-signed certificate (with a CN that doesn't have anything in common with our server). Also, the server is behind a firewall and we have only two ports (ssh and https) visible.

    Since there are several applications that should be visible externally, we use Apache for AJP reverse-proxying requests to these applications. Secure connections are managed by Apache, and none of the Tomcats are configured to work with SSL. But I obtained an exception during authentication, and therefore decided to set a keystore in CATALINA_OPTS:

        export CATALINA_OPTS="-Djavax.net.ssl.keyStore=/path/to/tomcat/ssl/cert.pfx -Djavax.net.ssl.keyStoreType=PKCS12 -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.keyAlias=alias -Djavax.net.debug=ssl"

    cert.pfx was obtained from the certificate and key that are used by Apache HTTP Server:

        $ openssl pkcs12 -export -out /path/to/tomcat/ssl/cert.pfx -inkey /path/to/apache2/ssl/server-key.pem -in /path/to/apache2/ssl/server-cert.pem

    When I try to authenticate a user I obtain the following exception:

        Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
            at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:174) ~[na:1.6.0_32]
            at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238) ~[na:1.6.0_32]
            at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:318) ~[na:1.6.0_32]

    Meanwhile I can see in catalina.out that Tomcat sees the certificate in cert.pfx, and it's the same one that is presented during authentication:

        09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Constructing validation url: https://external-ip/cas/proxyValidate?pgtUrl=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_proxyreceptor&ticket=ST-17-PN26WtdsZqNmpUBS59RC-cas&service=https%3A%2F%2Fexternal-ip%2Fclient%2Fj_spring_cas_security_check
        09:11:38.886 [http-bio-8080-exec-2] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
        keyStore is : /path/to/tomcat/ssl/cert.pfx
        keyStore type is : PKCS12
        keyStore provider is :
        init keystore
        init keymanager of type SunX509
        ***
        found key for : 1
        chain [0] = [
        [
          Version: V1
          Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
          Key: Sun RSA public key, 1024 bits
          modulus: 13??a lot of digits here??19
          public exponent: ????7
          Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                       To: Wed Apr 24 16:32:18 CEST 2013]
          Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          SerialNumber: [    d??????? ????????]
        ]
          Algorithm: [SHA1withRSA]
          Signature:
          0000: 65 Signature is here
          0070: 96 .
        ]
        ***
        trustStore is: /jdk-home-folder/jre/lib/security/cacerts
        Here is a lot of trusted CAs. Here is nothing related to our certificate or our (not trusted) CA.
        ...
        09:11:39.731 [http-bio-8080-exec-4] DEBUG o.j.c.c.v.Cas20ProxyTicketValidator - Retrieving response from server.
        Allow unsafe renegotiation: false
        Allow legacy hello messages: true
        Is initial handshake: true
        Is secure renegotiation: false
        %% No cached client session
        *** ClientHello, TLSv1
        RandomCookie: GMT: 1347433643 bytes = { 63, 239, 180, 32, 103, 140, 83, 7, 109, 149, 177, 80, 223, 79, 243, 244, 60, 191, 124, 139, 108, 5, 122, 238, 146, 1, 54, 218 }
        Session ID: {}
        Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
        Compression Methods: { 0 }
        ***
        http-bio-8080-exec-4, WRITE: TLSv1 Handshake, length = 75
        http-bio-8080-exec-4, WRITE: SSLv2 client hello message, length = 101
        http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 81
        *** ServerHello, TLSv1
        RandomCookie: GMT: 1347433643 bytes = { 145, 237, 232, 63, 240, 104, 234, 201, 148, 235, 12, 222, 60, 75, 174, 0, 103, 38, 196, 181, 27, 226, 243, 61, 34, 7, 107, 72 }
        Session ID: {79, 202, 117, 79, 130, 216, 168, 38, 68, 29, 182, 82, 16, 25, 251, 66, 93, 108, 49, 133, 92, 108, 198, 23, 120, 120, 135, 151, 15, 13, 199, 87}
        Cipher Suite: SSL_RSA_WITH_RC4_128_SHA
        Compression Method: 0
        Extension renegotiation_info, renegotiated_connection: <empty>
        ***
        %% Created: [Session-2, SSL_RSA_WITH_RC4_128_SHA]
        ** SSL_RSA_WITH_RC4_128_SHA
        http-bio-8080-exec-4, READ: TLSv1 Handshake, length = 609
        *** Certificate chain
        chain [0] = [
        [
          Version: V1
          Subject: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
          Key: Sun RSA public key, 1024 bits
          modulus: 13??a lot of digits here??19
          public exponent: ????7
          Validity: [From: Tue Apr 24 16:32:18 CEST 2012,
                       To: Wed Apr 24 16:32:18 CEST 2013]
          Issuer: CN=wrong.domain.name, O=Our organization, L=Location, ST=State, C=Country
          SerialNumber: [    d??????? ????????]
        ]
          Algorithm: [SHA1withRSA]
          Signature:
          0000: 65 Signature is here
          0070: 96 .
        ]
        ***
        http-bio-8080-exec-4, SEND TLSv1 ALERT: fatal, description = certificate_unknown
        http-bio-8080-exec-4, WRITE: TLSv1 Alert, length = 2
        http-bio-8080-exec-4, called closeSocket()
        http-bio-8080-exec-4, handling exception: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

    I tried to convert our PEM certificate to DER format and imported it into the trusted key store (cacerts) (without the private key), but it didn't change anything. But I'm not confident that I did it right. Also, I must inform you that I don't know the passphrase for our server-key.pem file, and it probably differs from the password for the keystore created by me.

    OS: CentOS 6.2
    Architecture: x64
    Tomcat version: 7
    Apache HTTP Server version: 2.4

    Is there any way to make Tomcat accept our certificate?
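    A couple of points implied by the debug output, for anyone hitting the same wall: the javax.net.ssl.keyStore options only control which certificate Tomcat presents, while the failing check is against the truststore (cacerts), which the dump explicitly shows contains nothing related to the self-signed cert. A minimal sketch of the import - paths and alias are placeholders; keytool accepts the PEM file directly, so no DER conversion is needed:

        # Hypothetical import of the self-signed cert into the JVM truststore;
        # 'changeit' is the stock cacerts password.
        keytool -import -trustcacerts -alias our-selfsigned \
            -file /path/to/apache2/ssl/server-cert.pem \
            -keystore /jdk-home-folder/jre/lib/security/cacerts \
            -storepass changeit

        # Verify it landed:
        keytool -list -keystore /jdk-home-folder/jre/lib/security/cacerts \
            -storepass changeit | grep -i our-selfsigned

    An alternative that avoids touching the system cacerts is pointing Tomcat at a dedicated truststore via -Djavax.net.ssl.trustStore=... in CATALINA_OPTS. Note that the private key (and its unknown passphrase) is not needed for either route; establishing trust only requires the public certificate.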


  • Why are my httpd mpm_prefork processes being reaped so quickly?

    - by Dan Pritts
    We've got a system running RHEL6, x64. We are using a local installation of Apache 2.2.22 from source. We serve primarily:

      - mod_perl applications (with a local installation of Perl 5.16.0)
      - Tomcat applications proxied with mod_jk

    Here is some context; the main question is below.

    All of this talks to an Oracle backend. We are having issues with Oracle becoming unresponsive. We think this is because we're hitting the maximum process limit in Oracle. We've upped the process limit, but now we are hitting memory pressure on the Oracle server. We have tons of Oracle sessions sitting idle. I can trace a bunch of them back to the httpd processes. We have mod_perl's Apache::DBI start up a new connection to the database with each httpd child that's spawned. We are concerned that these are not always getting closed out properly when the httpd's exit... and the httpd's are exiting very frequently. I know that it would be good to modify the mod_perl applications to use some better form of DB connection pooling; we plan to pursue that, but would like to solve our immediate problem sooner.

    So here's the main question. We are using the prefork MPM. The Apache child processes are lasting at most a few minutes. Log analysis shows that each one is serving fewer than 50 clients before exiting; the last request each child serves is OPTIONS * HTTP/1.0 on some sort of internal connection; I'm under the impression that this is a "ping" from the master process.

    I've adjusted the MPM config as follows. I didn't want to raise MinSpareServers too high because, after all, I'm trying to minimize the number of sessions to Oracle.

        MinSpareServers 5
        MaxSpareServers 30
        MaxClients 150
        MaxRequestsPerChild 10000

    Right now we're serving 250-300 requests per minute. We've got 21 httpd's running, the eldest (other than the master, owned by root) being 3 minutes old. This rate of reaping of the Apache children really seems excessive. What could be causing it?

    Apache was built with:

        $ ./configure --prefix=/opt/apache --with-ssl=/usr/lib --enable-expires --enable-ext-filter --enable-info --enable-mime-magic --enable-rewrite --enable-so --enable-speling --enable-ssl --enable-usertrack --enable-proxy --enable-headers --enable-log-forensic

    Apache config info:

        % /opt/apache/bin/httpd -V
        Server version: Apache/2.2.22 (Unix)
        Server built:   Jul 23 2012 22:30:13
        Server's Module Magic Number: 20051115:30
        Server loaded:  APR 1.4.5, APR-Util 1.4.1
        Compiled using: APR 1.4.5, APR-Util 1.4.1
        Architecture:   64-bit
        Server MPM:     Prefork
          threaded:     no
          forked:       yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/opt/apache"
         -D SUEXEC_BIN="/opt/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Modules are compiled into Apache rather than shared libs:

        % /opt/apache/bin/httpd -l
        Compiled in modules:
          core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c
          mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c
          mod_auth_basic.c mod_ext_filter.c mod_include.c mod_filter.c
          mod_log_config.c mod_log_forensic.c mod_env.c mod_mime_magic.c
          mod_expires.c mod_headers.c mod_usertrack.c mod_setenvif.c
          mod_version.c mod_proxy.c mod_proxy_connect.c mod_proxy_ftp.c
          mod_proxy_http.c mod_proxy_scgi.c mod_proxy_ajp.c
          mod_proxy_balancer.c mod_ssl.c prefork.c http_core.c mod_mime.c
          mod_status.c mod_autoindex.c mod_asis.c mod_info.c mod_cgi.c
          mod_negotiation.c mod_dir.c mod_actions.c mod_speling.c
          mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c

    One final note - the Red Hat httpd, apr, and perl packages are all installed, but ldd shows that none of those libraries are linked with the running httpd.
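    One reading of those symptoms (a judgment call, not something the logs prove): the OPTIONS * HTTP/1.0 "internal dummy connection" is how the prefork parent wakes an idle child that it has told to exit, which points at MaxSpareServers-driven culling during brief dips in traffic rather than MaxRequestsPerChild (10000 requests would take a child hours at this rate). If that holds, widening the spare window keeps children - and their Apache::DBI connections - alive longer, at the cost of a few more idle Oracle sessions:

        # Hypothetical tuning sketch: a wider spare pool so the parent
        # stops reaping children after every momentary lull in traffic.
        MinSpareServers     15
        MaxSpareServers     60
        MaxClients          150
        MaxRequestsPerChild 10000

    Watching the scoreboard for a few minutes (mod_status is compiled in) would confirm whether child counts keep oscillating around the 30-spare ceiling.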


  • Troubleshooting Website problems within the local network

    - by HaydnWVN
    We have an external website which opens fine on some PCs, yet seems to time out (or shows symptoms of timing out, but never actually does) on others. It seems to only affect (some) of our newer HP Pro 3305 MT workstations, all of which are running Win7 32-bit SP1 with all updates. Older PCs (Win7 32-bit SP1 & WinXP) are unaffected. Using Google Chrome & Firefox makes no difference, and opening the website in IE9 Compatibility Mode has exactly the same symptoms.

    All PCs are on the same local network (workgroup) using the same DNS server & gateway (in-house) on the same internet connection, on the same subnet. There is no proxy server, no content filtering, no load balancing, etc. The only group policy in effect (locally) is for update scheduling. Local firewalls are all the same (Kaspersky WP4) and our external-facing firewall has no IP-specific settings. I have no control over the external website; traceroute shows the same destination on all PCs. It is a fairly popular website in our industry (horticulture) and I'm not aware of any other people (even other sites within our sister companies) with the same problem.

    Update: Used Fiddler2 to monitor the HTTP request; it seems it's not getting fulfilled for some reason?!

    Request sent:

        GET http://www.rhs.org.uk/ HTTP/1.1
        Host: www.rhs.org.uk
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.47 Safari/536.11
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Log from Fiddler2 of the request:

        This session is not yet complete. Press F5 to refresh when session is complete for updated statistics.
        Request Count:   1
        Bytes Sent:      567    (headers:567; body:0)
        Bytes Received:  0      (headers:0; body:0)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected:     17:02:33.720
        ClientBeginRequest:  17:02:39.118
        GotRequestHeaders:   17:02:39.118
        ClientDoneRequest:   17:02:39.118
        Determine Gateway:   0ms
        DNS Lookup:          0ms
        TCP/IP Connect:      46ms
        HTTPS Handshake:     0ms
        ServerConnected:     17:02:39.165
        FiddlerBeginRequest: 17:02:39.165
        ServerGotRequest:    17:02:39.165
        ServerBeginResponse: 00:00:00.000
        GotResponseHeaders:  00:00:00.000
        ServerDoneResponse:  00:00:00.000
        ClientBeginResponse: 00:00:00.000
        ClientDoneResponse:  00:00:00.000

        RESPONSE BYTES (by Content-Type)
        --------------
        ~headers~: 0

    Log of a successful request from a working PC (done this morning, excuse the timestamps being different from above):

        Request Count:   1
        Bytes Sent:      493    (headers:493; body:0)
        Bytes Received:  20,413 (headers:525; body:19,888)

        ACTUAL PERFORMANCE
        --------------
        ClientConnected:     08:22:47.766
        ClientBeginRequest:  08:22:47.766
        GotRequestHeaders:   08:22:47.766
        ClientDoneRequest:   08:22:47.766
        Determine Gateway:   0ms
        DNS Lookup:          26ms
        TCP/IP Connect:      30ms
        HTTPS Handshake:     0ms
        ServerConnected:     08:22:47.828
        FiddlerBeginRequest: 08:22:47.828
        ServerGotRequest:    08:22:47.828
        ServerBeginResponse: 08:22:48.905
        GotResponseHeaders:  08:22:48.905
        ServerDoneResponse:  08:22:48.905
        ClientBeginResponse: 08:22:48.905
        ClientDoneResponse:  08:22:48.905
        Overall Elapsed:     00:00:01.1388020

        RESPONSE BYTES (by Content-Type)
        --------------
        text/html: 19,888
        ~headers~: 525

    So my question has evolved into: what is the difference between the two requests, and how do I determine why one PC is not getting a reply to its GET request?
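    One thing the failing trace does establish: the TCP connect succeeds (46ms) and the request is sent, but zero response bytes ever arrive, so the difference is below the HTTP layer. On newer hardware that pattern is sometimes down to TCP features the older PCs don't negotiate (receive-window autotuning/window scaling, or NIC offloads). A hypothetical elimination test on an affected PC, run as Administrator - these are real netsh knobs, but their relevance to this particular site is only a guess:

        :: Compare TCP globals between a working and a failing PC first.
        netsh interface tcp show global

        :: Then try disabling receive-window autotuning and retest the site.
        netsh interface tcp set global autotuninglevel=disabled

    If that changes the behaviour, something in the path (often an older firewall or router) is mishandling window scaling for the newer TCP stack's defaults; re-enable with autotuninglevel=normal after testing.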


  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices, but it doesn't seem that DNS is resolving their hostnames. ipconfig /all shows that I am pointing to the correct DNS server. I can ping "dnsname" and it will resolve, but it won't resolve any other names. Split tunnel is set up, so outside DNS is resolving fine.

    So one issue might be DNS, but I have the IP address of the server share, so I figured I could just get to it that way - for example: \\10.0.0.1\. Well, I can't get to it that way either, and I get "the specified network name is no longer available". I can ping it but I can't open the share.

    Below is the ASA config:

        ASA Version 8.2(1)
        !
        hostname KG-ASA
        domain-name example.com
        names
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 10.0.0.253 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address dhcp setroute
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Ethernet0/2
        !
        interface Ethernet0/3
        !
        interface Ethernet0/4
        !
        interface Ethernet0/5
        !
        interface Ethernet0/6
        !
        interface Ethernet0/7
        !
        ftp mode passive
        clock timezone EST -5
        clock summer-time EDT recurring
        dns domain-lookup outside
        dns server-group DefaultDNS
         name-server 10.0.0.101
         domain-name blah.com
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333
        access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902
        access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0
        access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        asdm image disk0:/asdm-621.bin
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list NONAT
        nat (inside) 1 0.0.0.0 0.0.0.0
        static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255
        static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255
        static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        dynamic-access-policy-record DfltAccessPolicy
        aaa authentication enable console LOCAL
        aaa authentication http console LOCAL
        aaa authentication serial console LOCAL
        aaa authentication ssh console LOCAL
        aaa authentication telnet console LOCAL
        http server enable
        http 10.0.0.0 255.255.0.0 inside
        http 0.0.0.0 0.0.0.0 outside
        no snmp-server location
        no snmp-server contact
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set myset esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map dynmap 1 set transform-set myset
        crypto dynamic-map dynmap 1 set reverse-route
        crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap
        crypto map IPSEC-MAP interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        crypto isakmp policy 65535
         authentication pre-share
         encryption aes
         hash sha
         group 2
         lifetime 86400
        telnet 0.0.0.0 0.0.0.0 inside
        telnet timeout 5
        ssh 0.0.0.0 0.0.0.0 inside
        ssh 70.60.228.0 255.255.255.0 outside
        ssh 74.102.150.0 255.255.254.0 outside
        ssh 74.122.164.0 255.255.252.0 outside
        ssh timeout 5
        console timeout 0
        dhcpd dns 10.0.0.101
        dhcpd lease 7200
        dhcpd domain blah.com
        !
        dhcpd address 10.0.0.110-10.0.0.170 inside
        dhcpd enable inside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        ntp server 63.111.165.21
        webvpn
         enable outside
         svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1
         svc enable
        group-policy EASYVPN internal
        group-policy EASYVPN attributes
         dns-server value 10.0.0.101
         vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value SPLIT-TUNNEL-VPN
        !
        tunnel-group client type remote-access
        tunnel-group client general-attributes
         address-pool (inside) IPSECVPN-POOL
         address-pool IPSECVPN-POOL
         default-group-policy EASYVPN
         dhcp-server 10.0.0.253
        tunnel-group client ipsec-attributes
         pre-shared-key *
        tunnel-group CLIENTVPN type ipsec-l2l
        tunnel-group CLIENTVPN ipsec-attributes
         pre-shared-key *
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map global_policy
         class inspection_default
          inspect icmp
        !
        service-policy global_policy global
        prompt hostname context

    I'm not sure where I should go next with troubleshooting. nslookup result:

        Default Server: blahname.blah.lan
        Address: 10.0.0.101
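    A couple of hedged observations on the posted config rather than a definitive answer: the EASYVPN group policy pushes a DNS server but no domain suffix, so single-label names from VPN clients never get blah.com appended - which would match "only the one dnsname typed with its suffix resolves". The usual additions look like this (both are standard group-policy attributes; values here assume the blah.com domain from the config):

        ! Hypothetical group-policy additions: give clients the internal
        ! suffix so short hostnames resolve across the tunnel.
        group-policy EASYVPN attributes
         default-domain value blah.com
         split-dns value blah.com

    The SMB failure by raw IP is a separate clue; checking whether TCP 445 actually traverses the tunnel (e.g. telnet 10.0.0.1 445 from a connected client) would distinguish a name-resolution problem from a blocked or un-NAT-exempted port.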


  • pptpd not working externally on Ubuntu Server 11.10

    - by Brendan
    I am trying to set up a pptpd VPN on our newly installed Ubuntu 11.10 64-bit server, but am not having success having a client connect via an iPhone to the VPN. Note that no clients have been able to connect to this VPN from outside of the network. The system is up to date with patches.

    Here is the output of /var/log/syslog. Please note that 222.153.x.y is my remote IP address.

        Mar 30 22:07:47 server pptpd[9546]: CTRL: Client 222.153.x.y control connection started
        Mar 30 22:07:47 server pptpd[9546]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:07:47 server pppd[9555]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:07:47 server pppd[9555]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:07:47 server pppd[9555]: Using interface ppp0
        Mar 30 22:07:47 server pppd[9555]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:07:47 server pptpd[9546]: GRE: Bad checksum from pppd.
        Mar 30 22:08:17 server pppd[9555]: LCP: timeout sending Config-Requests
        Mar 30 22:08:17 server pppd[9555]: Connection terminated.
        Mar 30 22:08:17 server pppd[9555]: Modem hangup
        Mar 30 22:08:17 server pppd[9555]: Exit.
        Mar 30 22:08:17 server pptpd[9546]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Mar 30 22:08:17 server pptpd[9546]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Reaping child PPP[9555]
        Mar 30 22:08:17 server pptpd[9546]: CTRL: Client 222.153.x.y control connection finished

    As you can see, the problem seems to be the connection timing out after 30 seconds ("LCP: timeout sending Config-Requests"). Over WiFi, however (inside the local network), there are no issues:

        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Client 192.168.0.100 control connection started
        Mar 30 22:12:33 unreal-server pptpd[12406]: CTRL: Starting call (launching pppd, opening GRE)
        Mar 30 22:12:33 unreal-server pppd[12407]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded.
        Mar 30 22:12:33 unreal-server pppd[12407]: pppd 2.4.5 started by root, uid 0
        Mar 30 22:12:33 unreal-server pppd[12407]: Using interface ppp0
        Mar 30 22:12:33 unreal-server pppd[12407]: Connect: ppp0 <--> /dev/pts/3
        Mar 30 22:12:33 unreal-server pptpd[12406]: GRE: Bad checksum from pppd.
        Mar 30 22:12:36 unreal-server pppd[12407]: peer from calling number 192.168.0.100 authorized
        Mar 30 22:12:36 unreal-server pppd[12407]: MPPE 128-bit stateless compression enabled
        Mar 30 22:12:36 unreal-server pppd[12407]: Cannot determine ethernet address for proxy ARP
        Mar 30 22:12:36 unreal-server pppd[12407]: local IP address 192.168.0.10
        Mar 30 22:12:36 unreal-server pppd[12407]: remote IP address 192.168.1.1

    I have set up an iptables config for the server; to check this isn't the problem I allowed all traffic temporarily, but this does NOT change the symptoms in the first example. Here is the output from /etc/iptables.rules.save:

        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT

    Even with these rules applied, the output from /var/log/syslog is LINE FOR LINE what I saw in the first block of code. Please note that before running this Ubuntu server, an old SME Server box was running in its place with a pptpd server on it just like we are using, and we experienced no issues.
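    For what it's worth, this failure signature - the TCP 1723 control connection succeeds, then LCP Config-Requests time out and pptpd reports GRE errors - is the classic symptom of GRE (IP protocol 47) not making the round trip. Since LAN clients work, suspicion falls on whatever NATs or filters traffic in front of the server (PPTP passthrough on the router, or the upstream firewall) rather than the server itself. On the server side, a cheap thing to rule out - a sketch, assuming a stock Ubuntu 11.10 kernel - is making sure GRE is explicitly allowed and the PPTP connection-tracking helpers are loaded:

        # Hypothetical checks: permit the control channel and GRE,
        # and load the PPTP conntrack/NAT helper modules.
        sudo iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        sudo iptables -A INPUT -p gre -j ACCEPT
        sudo modprobe nf_conntrack_pptp
        sudo modprobe nf_nat_pptp

    Running tcpdump -ni eth0 ip proto 47 during an external connection attempt would show directly whether the client's GRE packets ever arrive.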


  • Bugzilla: No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on Windows 7, installed with the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account, but I get the following error:

        Software error:
        No SASL mechanism found
         at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77
         at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143

    I ran checksetup.pl and found that Authen::SASL and SMTP are both available on my machine. The output of checksetup.pl is as follows:

        * This is Bugzilla 3.6.3 on perl 5.10.1
        * Running on Win7 Build 7600

        Checking perl modules...
        Checking for               CGI.pm (v3.33)      ok: found v3.49
        Checking for           Digest-SHA (any)        ok: found v5.48
        Checking for             TimeDate (v2.21)      ok: found v2.24
        Checking for             DateTime (v0.28)      ok: found v0.53
        Checking for    DateTime-TimeZone (v0.79)      ok: found v1.10
        Checking for                  DBI (v1.41)      ok: found v1.609
        Checking for     Template-Toolkit (v2.22)      ok: found v2.22
        Checking for           Email-Send (v2.16)      ok: found v2.198
        Checking for           Email-MIME (v1.861)     ok: found v1.903
        Checking for Email-MIME-Encodings (v1.313)     ok: found v1.313
        Checking for  Email-MIME-Modifier (v1.442)     ok: found v1.903
        Checking for                  URI (any)        ok: found v1.52

        Checking available perl DBD modules...
        Checking for               DBD-Pg (v1.45)      ok: found v2.16.1
        Checking for            DBD-mysql (v4.00)      ok: found v4.012
        Checking for           DBD-Oracle (v1.19)      not found

        The following Perl modules are optional:
        Checking for                   GD (v1.20)      ok: found v2.44
        Checking for                Chart (v2.1)       ok: found v2.4.1
        Checking for          Template-GD (any)        ok: found v1.56
        Checking for           GDTextUtil (any)        ok: found v0.86
        Checking for              GDGraph (any)        ok: found v1.44
        Checking for             XML-Twig (any)        ok: found v3.34
        Checking for           MIME-tools (v5.406)     ok: found v5.427
        Checking for          libwww-perl (any)        ok: found v5.834
        Checking for          PatchReader (v0.9.4)     ok: found v0.9.5
        Checking for            perl-ldap (any)        ok: found v0.39
        Checking for          Authen-SASL (any)        ok: found v2.15
        Checking for           RadiusPerl (any)        ok: found v0.17
        Checking for            SOAP-Lite (v0.710.06)  ok: found v0.710.10
        Checking for             JSON-RPC (any)        ok: found v0.95
        Checking for           Test-Taint (any)        ok: found v1.04
        Checking for          HTML-Parser (v3.40)      ok: found v3.64
        Checking for        HTML-Scrubber (any)        ok: found v0.08
        Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316
        Checking for          Email-Reply (any)        ok: found v1.202
        Checking for          TheSchwartz (any)        not found
        Checking for       Daemon-Generic (any)        not found
        Checking for             mod_perl (v1.999022)  not found

        *** OPTIONAL MODULES ***
        Certain Perl modules are not required by Bugzilla, but by installing
        the latest version you gain access to additional features.

        The optional modules you do not have installed are listed below,
        with the name of the feature they enable. Below that table are the
        commands to install each module.

        MODULE NAME       ENABLES FEATURE(S)
        TheSchwartz       Mail Queueing
        Daemon-Generic    Mail Queueing
        mod_perl          mod_perl

        *** Note For Windows Users ***
        In order to install the modules listed below, you first have to run
        the following command as an Administrator:

          ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/

        Then you have to do (also as an Administrator):

          ppm repo up theory58S

        Do that last command over and over until you see "theory58S" at the
        top of the displayed list.

        COMMANDS TO INSTALL OPTIONAL MODULES:
          TheSchwartz:    ppm install TheSchwartz
          Daemon-Generic: ppm install Daemon-Generic
          mod_perl:       ppm install mod_perl

        Reading ./localconfig...
        Checking for            DBD-mysql (v4.00)      ok: found v4.012
        Checking for                MySQL (v4.1.2)     ok: found v5.1.44-community-log
        Removing existing compiled templates...
        Precompiling templates...done.
        Now that you have installed Bugzilla, you should visit the 'Parameters'
        page (linked in the footer of the Administrator account) to ensure it
        is set up as you wish - this includes setting the 'urlbase' option to
        the correct URL.
        Press any key to continue . . .

    Please tell me what I should do. Please note: I am running behind a corporate proxy, and SSL/TLS is not used internally, but I am supplying the smtpUser and smtpPass as well.
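    A hedged note on what this error usually means, since checksetup.pl looks clean: "No SASL mechanism found" is raised by Authen::SASL when none of the AUTH mechanisms the SMTP server advertises can be matched to an installed backend - it is an authentication-negotiation failure, not a missing module. Two things follow from that. First, if the internal relay does not actually require authentication, blanking smtp_username in Bugzilla's Parameters page sidesteps the SASL path entirely. Second, it is worth seeing what the relay actually offers (server name below is a placeholder):

        rem Hypothetical check from the Bugzilla host: list the AUTH
        rem mechanisms the mail relay advertises in its EHLO response.
        telnet smtp.yourcompany.example 25
        EHLO bugzilla-host

    If the AUTH line lists only mechanisms the pure-Perl SASL backend cannot do (NTLM, for example), that would produce exactly this error despite Authen::SASL being "found" by checksetup.pl.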


  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi, I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to set up L2TP access, and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to my internet or browse web pages.

    I would like to be able to access the VPN and also browse the internet at the same time - I understand this is called split tunneling (I have ticked the setting in the wizard, but to no effect) - and if so, how do I do this? Alternatively, if split tunneling is a pain to set up, then making the connected VPN client have internet access from the ASA WAN IP would be OK.

    Thanks, Chris

        names
        !
        interface Ethernet0/0
         switchport access vlan 2
        !
        interface Ethernet0/1
        !
        interface Vlan1
         nameif inside
         security-level 100
         ip address 192.168.30.1 255.255.255.0
        !
        interface Vlan2
         nameif outside
         security-level 0
         ip address 208.74.158.58 255.255.255.252
        !
        ftp mode passive
        access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128
        access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192
        access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0
        access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0
        pager lines 24
        logging asdm informational
        mtu inside 1500
        mtu outside 1500
        ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0
        icmp unreachable rate-limit 1 burst-size 1
        no asdm history enable
        arp timeout 14400
        global (outside) 1 interface
        nat (inside) 0 access-list inside_nat0_outbound
        nat (inside) 1 192.168.30.0 255.255.255.0
        route outside 0.0.0.0 0.0.0.0 208.74.158.57 1
        timeout xlate 3:00:00
        timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
        timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
        timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
        timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
        timeout tcp-proxy-reassembly 0:01:00
        timeout floating-conn 0:00:00
        dynamic-access-policy-record DfltAccessPolicy
        http server enable
        http 192.168.30.0 255.255.255.0 inside
        snmp-server enable traps snmp authentication linkup linkdown coldstart
        crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac
        crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac
        crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac
        crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac
        crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac
        crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac
        crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac
        crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA
        crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP
        crypto map outside_map interface outside
        crypto isakmp enable outside
        crypto isakmp policy 10
         authentication pre-share
         encryption 3des
         hash sha
         group 2
         lifetime 86400
        telnet timeout 5
        ssh timeout 5
        console timeout 0
        dhcpd auto_config outside
        !
        threat-detection basic-threat
        threat-detection statistics access-list
        no threat-detection statistics tcp-intercept
        webvpn
        group-policy DefaultRAGroup internal
        group-policy DefaultRAGroup attributes
         dns-server value 192.168.30.3
         vpn-tunnel-protocol l2tp-ipsec
         split-tunnel-policy tunnelspecified
         split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1
        username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0
        tunnel-group DefaultRAGroup general-attributes
         address-pool LANVPNPOOL
         default-group-policy DefaultRAGroup
        tunnel-group DefaultRAGroup ipsec-attributes
         pre-shared-key *****
        tunnel-group DefaultRAGroup ppp-attributes
         no authentication chap
         authentication ms-chap-v2
        !
        class-map inspection_default
         match default-inspection-traffic
        !
        !
        policy-map type inspect dns preset_dns_map
         parameters
          message-length maximum client auto
          message-length maximum 512
        policy-map global_policy
         class inspection_default
          inspect dns preset_dns_map
          inspect ftp
          inspect h323 h225
          inspect h323 ras
          inspect rsh
          inspect rtsp
          inspect esmtp
          inspect sqlnet
          inspect skinny
          inspect sunrpc
          inspect xdmcp
          inspect sip
          inspect netbios
          inspect tftp
          inspect ip-options
        !
        service-policy global_policy global
        prompt hostname context
        : end
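    A hedged observation for anyone with the same symptom: the split-tunnel ACL here looks right, but the native Windows L2TP/IPsec client generally does not honour the split-tunnel policy an ASA pushes - out of the box it sends all traffic down the tunnel ("Use default gateway on remote network"). Two routes out, both assumptions to verify rather than a confirmed fix: untick that option in the Windows adapter's IPv4 advanced settings, or let tunneled clients hairpin back out the ASA's outside interface:

        ! Hypothetical hairpin config (8.2-era syntax from memory; verify):
        ! permit traffic in and out of the same interface, and NAT the
        ! VPN pool's subnet on the outside interface.
        same-security-traffic permit intra-interface
        nat (outside) 1 192.168.30.192 255.255.255.192 outside

    The second line reuses the existing global (outside) 1 interface statement, so returning internet traffic would be PATed to the WAN IP just like traffic from inside hosts.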


  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I have switched from Apache to nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues trying to figure out how certain things work in nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online.

    My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to be able to get the sub-domain part of the URL in order to do a database look-up of that sub-domain. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain I need. So, for example, I would like test.server.com to redirect to server.com/resolve.php, but still remain as test.server.com. If this is not possible, the thing that I would need at the very least would be something such as going to test.server.com taking you to server.com/resolve.php?=test. One of these options must be possible in nginx.

    My config as it stands right now looks something like this:

        server {
          listen 80; ## listen for ipv4; this line is default and implied
          listen [::]:80 default ipv6only=on; ## listen for ipv6

          root /usr/share/nginx/www;
          index index.php index.html index.htm;

          # Make site accessible from http://localhost/
          server_name www.server.com server.com;

          location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            try_files $uri $uri/ /index.html;
          }

          location /doc {
            root /usr/share;
            autoindex on;
            allow 127.0.0.1;
          }

          location /images {
            root /usr/share;
            autoindex off;
          }

          #error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          #
          #error_page 500 502 503 504 /50x.html;
          #location = /50x.html {
          #  root /usr/share/nginx/www;
          #}

          # proxy the PHP scripts to Apache listening on 127.0.0.1:80
          #
          #location ~ \.php$ {
          #  proxy_pass http://127.0.0.1;
          #}

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
          #
          location ~ \.php$ {
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
          }

          # deny access to .htaccess files, if Apache's document root
          # concurs with nginx's one
          #
          #location ~ /\.ht {
          #  deny all;
          #}
        }

        server {
          listen 80 default;
          server_name *.server.com;
          rewrite ^ http://www.server.com/resolve.php;
        }

    As I said before, I am very new to nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with just redirects without rewrites, or with rewriting with the sub-domain section included. Any help on what to do would be most appreciated, and if anyone has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.
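    Both asks have a standard nginx shape. Keeping the browser on test.server.com while resolve.php handles the request is an internal rewrite (the Host header survives, so PHP can read the sub-domain from $_SERVER['HTTP_HOST']); passing the sub-domain explicitly instead uses a named capture in the server_name regex. A sketch of each alternative, assuming the same PHP-FPM socket as the main vhost:

        # Option 1 (hypothetical): URL stays test.server.com; resolve.php
        # reads the sub-domain out of the Host header.
        server {
            listen 80 default;
            server_name *.server.com;
            root /usr/share/nginx/www;
            rewrite ^ /resolve.php last;
            location ~ \.php$ {
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

        # Option 2 (hypothetical): explicit redirect carrying the sub-domain
        # as a query parameter.
        server {
            listen 80 default;
            server_name ~^(?<sub>.+)\.server\.com$;
            return 301 http://www.server.com/resolve.php?sub=$sub;
        }

    Named captures in server_name need a reasonably recent nginx (0.8.25+), which Ubuntu 11.10's package should satisfy.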


  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    I have an issue where browser response is very slow (especially on IE): requests sometimes time out, and sometimes hang for up to 20 seconds on a single 301 redirect when tested with IE's F12 developer tools, which report a very long wait/start time. Once the connection is established, the page elements download and display quickly (tested at xaluan.com). It mostly happens when there are more than about 2100 active users on the site (per Google Analytics real-time). The server runs CentOS 5 with nginx in front of Apache, on 32 CPU cores, 96 GB RAM, and RAID 10 SAS disks.

    My nginx config follows:

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 28;   # old 32; good at 24
        error_log /var/log/nginx/error.log;   # old: "info" added at the end
        worker_rlimit_nofile 22528;

        events {
          worker_connections 22528;
          use epoll;   # you should use epoll here for Linux kernels 2.6.x
        }

        http {
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks off;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 25;   # old 5
          gzip on;   # old on
          gzip_vary on;
          gzip_disable "MSIE [1-6]\.";
          gzip_proxied any;
          gzip_http_version 1.1;
          gzip_min_length 1000;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          ignore_invalid_headers on;
          client_header_timeout 1m;   # 3m
          client_body_timeout 1m;     # 3m
          send_timeout 1m;            # 3m
          reset_timedout_connection on;
          connection_pool_size 256;
          client_header_buffer_size 256k;
          large_client_header_buffers 4 256k;
          client_max_body_size 100M;
          client_body_buffer_size 256k;
          request_pool_size 32k;
          output_buffers 4 32k;
          postpone_output 1460;
          proxy_temp_path /tmp/nginx_proxy/;
          client_body_in_file_only on;
          log_format bytes_log "$msec $bytes_sent .";
          limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
          limit_conn limit_per_ip 20;
          limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
          limit_req zone=allips burst=200 nodelay;
          include "/etc/nginx/vhosts/*";
        }

    I have played around with the worker config:

    1. Tried increasing, as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2. Tried setting it low: worker_processes = 28 with the other worker settings at 22582, and other variations too. None of them worked, because some of them made the server load spike very quickly.
    3. Tried commenting out worker_rlimit_nofile, so it would be unlimited. That seemed to help the response-time issue a bit, but it also drove server load up quickly at peak time.

    Please help. Thanks.

    PS: my Apache config, in case you can spot something there:

        Listen 0.0.0.0:8081
        User nobody
        Group nobody
        ExtendedStatus On
        ServerAdmin [email protected]
        ServerName server.xaluan.com
        LogLevel warn
        # These can be set in WHM under 'Apache Global Configuration'
        Timeout 100
        TraceEnable Off
        ServerSignature Off
        ServerTokens ProductOnly
        FileETag None
        StartServers 15
        <IfModule prefork.c>
          MinSpareServers 20
          MaxSpareServers 50
          #MaxSpareServers 40
        </IfModule>
        ServerLimit 1572
        MaxClients 1572
        MaxRequestsPerChild 4000
        # MaxRequestsPerChild 3000
        KeepAlive On
        KeepAliveTimeout 3
        MaxKeepAliveRequests 300
        #MaxKeepAliveRequests 130
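    For the sizing question in the title, a commonly used rule of thumb (a guideline, not something from the nginx docs verbatim): worker_rlimit_nofile must be at least worker_connections, and when proxying each client connection can consume two descriptors (client side plus upstream side), so roughly 2x headroom is safer; overall concurrency is about worker_processes x worker_connections. A hypothetical starting point for a box like this:

        # Hypothetical sizing sketch for a 32-core proxy:
        worker_processes 32;            # one per core
        worker_rlimit_nofile 65536;     # >= 2x worker_connections when proxying
        events {
            worker_connections 16384;   # capacity ~= workers x connections
            use epoll;
        }

    The per-worker descriptor limit also has to fit inside the system-wide and per-user limits (fs.file-max, ulimit -n), which is worth checking before raising these numbers.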


  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 *

    After monkeying with additional debug logging, I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages I think I'll be good to go ^^^

    * EDIT 2 *

    Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and what I have seen before... I don't know why, but that did the trick. I also re-checked perms on the dir the files live in. As certain as I was, they were correct, with named having rw.

    CentOS 6 (final)
    dhcpd 4.1.1-P1
    named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6

    Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created.

    chroot: /var/named/chroot

    I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic) - no matter which dir, with named owning it and wide-open perms, I now get nowhere. Along the way I, at one point, got a permission denied when named tried to create the journal. I resolved the issue with:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... It didn't matter, since this is a new setup, so I deleted it, and now it is not recreated.

    Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or the chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled...

        [root@server temp]# sestatus
        SELinux status:                 disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;
        subnet 192.168.111.0 netmask 255.255.255.224 {
          ...
          key dhcpudpate {
            algorithm hmac-md5;
            secret LDJMdPdEZED+/nN/AGO9ZA==;
          }
          zone example.lan. {
            primary 192.168.111.2;
            key dhcpudpate;
          }
        }

    named.conf:

        key dhcpudpate {
          algorithm hmac-md5;
          secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };

        zone "example.lan" {
          type master;
          file "/var/named/dynamic/example.lan.db";
          allow-transfer { none; };
          allow-update { key dhcpudpate; };
          notify false;
          check-names ignore;
        };

    The following shows /var/log/named.log output of named starting up - no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation.  Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.
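    Since touching the journals is what ultimately fixed this, a condensed recap of the workaround as shell commands. The forward-zone file name comes from the posted named.conf; the reverse-zone file name is a guess based on the zone list in the startup log:

        # Hypothetical recap of the workaround: pre-create the journal files
        # named rolls dynamic updates into, owned so named can write them.
        touch /var/named/chroot/var/named/dynamic/example.lan.db.jnl
        touch /var/named/chroot/var/named/dynamic/111.168.192.in-addr.arpa.db.jnl  # guessed name
        chown named:named /var/named/chroot/var/named/dynamic/*.jnl
        chmod 660 /var/named/chroot/var/named/dynamic/*.jnl
        service named restart

    The "journal rollforward failed" error from EDIT 1 also explains why the zone refused to load earlier: a stale or truncated .jnl left over from the permissions experiments; deleting the bad journal (or, as here, replacing it) clears that condition.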


  • phpMyAdmin setup issues

    - by EquinoX
    I am trying to follow the tutorial here to set up the user and pass. It says there that "this section is only applicable if your MySQL server is running with --skip-show-database". The first question is: how do I check if the MySQL server is running with --skip-show-database? Is there any way I can access phpMyAdmin's SQL query window without logging in? Otherwise I'd have to execute this SQL from the command line.

    I am also getting this:

        Cannot load mcrypt extension. Please check your PHP configuration.

    I have added mcrypt.so to php.ini, and the following commands prove that I have it:

        [root@DT html]# rpm -qa | grep mcrypt
        mcrypt-2.6.8-1.el5
        php-mcrypt-5.3.5-1.1.w5
        libmcrypt-2.5.8-4.el5.centos

        [root@DT html]# php -v
        PHP 5.3.5 (cli) (built: Feb 19 2011 13:10:09)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    Now when I go to phpinfo() and search for mcrypt, I can find it inside the Configure Command row ('--with-mcrypt=shared,/usr'). So, what to do next?

    UPDATE: I didn't put extension=mcrypt.so in php.ini, as it will complain with the following:

        PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0

    Here's my nginx.conf:

        #user nobody;
        worker_processes 2;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;
        #pid logs/nginx.pid;

        events {
          worker_connections 1024;
        }

        http {
          include mime.types;
          default_type application/octet-stream;

          #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
          #                '$status $body_bytes_sent "$http_referer" '
          #                '"$http_user_agent" "$http_x_forwarded_for"';
          #access_log logs/access.log main;

          sendfile on;
          #tcp_nopush on;

          #keepalive_timeout 0;
          keepalive_timeout 65;

          gzip on;

          server {
            listen 80;
            root /usr/share/nginx/html;
            server_name localhost;

            #charset koi8-r;
            #access_log logs/host.access.log main;

            location / {
              #root html;
              index index.html index.htm;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
              #root html;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #
            #location ~ \.php$ {
            #  proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            location ~ \.php$ {
              #root /usr/local/nginx/html;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
              include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
              deny all;
            }
          }

          # another virtual host using mix of IP-, name-, and port-based configuration
          #
          #server {
          #  listen 8000;
          #  listen somename:8080;
          #  server_name somename alias another.alias;
          #  location / {
          #    root html;
          #    index index.html index.htm;
          #  }
          #}

          # HTTPS server
          #
          #server {
          #  listen 443;
          #  server_name localhost;
          #  ssl on;
          #  ssl_certificate cert.pem;
          #  ssl_certificate_key cert.key;
          #  ssl_session_timeout 5m;
          #  ssl_protocols SSLv2 SSLv3 TLSv1;
          #  ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
          #  ssl_prefer_server_ciphers on;
          #  location / {
          #    root html;
          #    index index.html index.htm;
          #  }
          #}
        }
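    The first question has a direct answer from any MySQL session, so it can be settled from the mysql CLI without phpMyAdmin at all:

        -- Shows OFF or ON depending on whether the server was started
        -- with --skip-show-database.
        SHOW VARIABLES LIKE 'skip_show_database';

    On the mcrypt side, a hedged observation: php -v and the "already loaded" warning both describe the CLI SAPI, while the "Cannot load mcrypt" message comes from the FastCGI process on port 9000 that actually serves phpMyAdmin, and that process may read a different php.ini or conf.d directory. Comparing php -i | grep -i mcrypt against the phpinfo() page served through nginx - and restarting the FastCGI/php-fpm processes after any change - would show whether the web SAPI ever loads the extension.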

    Read the article

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by Pawz
    I have the following VPN configuration:

        +------------+                +------------+                +------------+
        |  outpost   |----------------|    kino    |----------------|  guchuko   |
        +------------+                +------------+                +------------+
        OS: FreeBSD 6.2               OS: Gentoo 2.6.32             OS: Gentoo 2.6.33.3
        Keyname: client3              Keyname: server               Keyname: client1
        eth0: 10.0.1.254              eth0: 203.x.x.x               eth0: 192.168.0.6
        tun0: 192.168.150.18          tun0: 192.168.150.1           tun0: 192.168.150.10
        P-t-P: 192.168.150.17         P-t-P: 192.168.150.2          P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. I am using "fragment 1400" and "mssfix" on all three machines, and an mtu-test on both connections is successful. All three machines have IP forwarding enabled, via this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    and this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    In the server's "ccd" directory are the following files:

        client1: iroute 192.168.0.0 255.255.255.0
        client3: iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0 tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0  17  tun0
        192.168.0       192.168.150.17  UGS  0   2  tun0
        192.168.150.17  192.168.150.18  UH   3   0  tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0 tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0 tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse: pings from Outpost to Guchuko's LAN IP. Pings from Outpost to a machine on Guchuko's LAN also work fine:

        .(( root@outpost )). (( 06:39 PM ))
        :: ~ :: # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM ))
        :: ~ :: # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP requests destined for the machine on its subnet, but appears not to be forwarding them. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting? FWIW, pinging 10.0.1.253 from Kino has the same result: the traffic does not get forwarded.
    UPDATE: I've found that I can only ping certain IPs on Guchuko's LAN from Outpost. From Outpost I can ping 192.168.0.3 and 192.168.0.2, but 192.168.0.99 and 192.168.0.4 are unreachable, with the same tcpdump behavior as above. I think this means the problem can't be due to IP forwarding or routing, because Outpost can reach SOME hosts on Guchuko's LAN but not others, and likewise Guchuko can reach two hosts on Outpost's LAN but not others. This baffles me.
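    Given that the echo requests arrive on Outpost's tun0 but no replies ever appear, a capture on the LAN side would show whether the kernel forwards them out and whether the target answers. A rough sketch (the LAN interface name is a placeholder; it is whichever interface holds 10.0.1.254):

        # on Outpost, while pinging 10.0.1.253 from Guchuko
        tcpdump -ni <lan-if> icmp
        sysctl net.inet.ip.forwarding   # should print 1

    If the requests do appear on the LAN but nothing comes back, the unreachable hosts most likely lack a return route: their default gateway needs to be 10.0.1.254, or they need explicit routes for 192.168.150.0/24 and 192.168.0.0/24. Per-host firewalls or missing routes would also explain why only some hosts on each LAN answer, which a forwarding or routing fault on the router itself would not.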

    Read the article

  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows.

    Specific configuration for (currently two) domains:

        server {
            listen 443 ssl;
            server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan;

            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                proxy_pass http://127.0.0.1:8180;
                proxy_set_header Host $http_host:8180;
            }
        }

    Default configuration for unmatched ssl connections:

        server {
            listen 443 default ssl;
            ssl_certificate     /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt;
            ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key;

            location / {
                return 403;
            }
        }

    http configuration:

        server {
            listen 80;
            rewrite ^ https://$host$request_uri? permanent;
        }

    The intention is clear:

        - Redirect http traffic to https.
        - Proxy each https:// call for phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a Host header, which is detected.
        - Each unmatched ssl connection must return a 403 in nginx and not even reach apache2.

    On the apache2 side of life, I have a default site and a non-default site which should match studiotv.service.tebusco.lan.

    000-default.conf file (available and enabled):

        <VirtualHost 127.0.0.1:8180>
            # The ServerName directive sets the request scheme, hostname and port that
            # the server uses to identify itself. This is used when creating
            # redirection URLs. In the context of virtual hosts, the ServerName
            # specifies what hostname must appear in the request's Host: header to
            # match this virtual host. For the default virtual host (this file) this
            # value is not decisive as it is used as a last resort host regardless.
            # However, you must set it for any further virtual host explicitly.
            ServerName localhost

            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html

            <Directory /var/www/html>
                Order deny,allow
                Require all granted
            </Directory>
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    studiotv.conf file (available and enabled):

        <VirtualHost *:8180>
            ServerName studiotv.service.tebusco.lan
            ServerAdmin [email protected]
            DocumentRoot /var/www/studiotv

            <Directory /var/www/studiotv/>
                Options -Indexes +FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from all
                Require all granted
            </Directory>

            # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
            # error, crit, alert, emerg.
            # It is also possible to configure the loglevel for particular
            # modules, e.g.
            #LogLevel info ssl:warn

            # We do not use ${APACHE_LOG_DIR}; we use /var/log/<host> instead
            ErrorLog  /var/log/apache2/studiotv/error.log
            CustomLog /var/log/apache2/studiotv/access.log combined
        </VirtualHost>

        # vim: syntax=apache ts=4 sw=4 sts=4 sr noet

    However, when I hit the browser with http://studiotv.service.tebusco.lan, the default PHP page is shown instead. Question: what am I missing? (apache 2.4.7, nginx 1.6.0, ubuntu server 14.04)
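    A likely culprit is the mix of <VirtualHost 127.0.0.1:8180> and <VirtualHost *:8180>: Apache selects the vhost set by best-matching address first, and the exact 127.0.0.1:8180 match beats the wildcard, so name-based matching never gets a chance to reach the studiotv vhost for proxied requests arriving on 127.0.0.1. Two quick checks worth running (output details will vary by setup):

        apache2ctl -S    # dumps the parsed vhost map: which ServerName is bound to which address set
        curl -H 'Host: studiotv.service.tebusco.lan:8180' http://127.0.0.1:8180/

    Apache strips the port from the Host header when matching ServerName, so the header nginx sets should otherwise be fine; putting both vhosts on the same address form (e.g. *:8180 for both) would merge them into one name-based set. That is a hypothetical fix, untested against this exact setup.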

    Read the article

  • suddenly can't connect to router

    - by Khoi
    I was just downloading some stuff in Ubuntu and, snap, the connection cut out, and now I can't even connect to my router. The router itself still works fine; my laptop can connect to it wirelessly as usual. But my main computer (which connects to it directly through a cable) can't even ping it. Here is my ipconfig:

        Windows IP Configuration

            Host Name . . . . . . . . . . . . : vento
            Primary Dns Suffix  . . . . . . . :
            Node Type . . . . . . . . . . . . : Unknown
            IP Routing Enabled. . . . . . . . : No
            WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Local Area Connection:

            Media State . . . . . . . . . . . : Media disconnected
            Description . . . . . . . . . . . : Realtek RTL8169/8110 Family Gigabit Ethernet NIC
            Physical Address. . . . . . . . . : 00-19-DB-4E-6C-56

        Ethernet adapter {15B1F740-2F35-4FE4-9FEE-4052AFBAD096}:

            Media State . . . . . . . . . . . : Media disconnected
            Description . . . . . . . . . . . : Anchorfree HSS Adapter - Packet Scheduler Miniport
            Physical Address. . . . . . . . . : 00-FF-15-B1-F7-40

    Read the article

  • EXTJS 3.2.1 EditorGridPanel - ComboBox with jsonstore

    - by Yoong Kim
    Hi, I am using ExtJS with an EditorGridPanel, and I am trying to insert a combobox populated from a JsonStore. Here is a snapshot of my code.

    The store:

        kmxgz.ordercmpappro.prototype.getCmpapproStore = function(my_url) {
            var myStore = new Ext.data.Store({
                proxy: new Ext.data.HttpProxy({
                    url: my_url,
                    method: 'POST'
                }),
                reader: new Ext.data.JsonReader({
                    root: 'rows',
                    totalProperty: 'total',
                    id: 'list_cmpappro_id',
                    fields: [
                        {name: 'list_cmpappro_id', mapping: 'list_cmpappro_id'},
                        {name: 'list_cmpappro_name', mapping: 'list_cmpappro_name'}
                    ]
                }),
                autoLoad: true,
                id: 'cmpapproStore',
                listeners: {
                    load: function(store, records, options){
                        // store is loaded, now you can work with its records, etc.
                        console.info('store load, arguments:', arguments);
                        console.info('Store count = ', store.getCount());
                    }
                }
            });
            return myStore;
        };

    The combo:

        kmxgz.ordercmpappro.prototype.getCmpapproCombo = function(my_store) {
            var myCombo = new Ext.form.ComboBox({
                typeAhead: true,
                lazyRender: false,
                forceSelection: true,
                allowBlank: true,
                editable: true,
                selectOnFocus: true,
                id: 'cmpapproCombo',
                triggerAction: 'all',
                fieldLabel: 'CMP Appro',
                valueField: 'list_cmpappro_id',
                displayField: 'list_cmpappro_name',
                hiddenName: 'cmpappro_id',
                valueNotFoundText: 'Value not found.',
                mode: 'local',
                store: my_store,
                emptyText: 'Select a CMP Appro',
                loadingText: 'Veuillez patienter ...',
                listeners: {
                    // 'change' will be fired when the value has changed and the user exits
                    // the ComboBox via tab, click, etc. The 'newValue' and 'oldValue' params
                    // will be from the field specified in the 'valueField' config above.
                    change: function(combo, newValue, oldValue){
                        console.log("Old Value: " + oldValue);
                        console.log("New Value: " + newValue);
                    },
                    // 'select' will be fired as soon as an item in the ComboBox is
                    // selected with mouse or keyboard.
                    select: function(combo, record, index){
                        console.log(record.data.name);
                        console.log(index);
                    }
                }
            });
            return myCombo;
        };

    The combobox is inserted in an EditorGridPanel. There's a renderer like this:

        Ext.util.Format.comboRenderer = function(combo){
            return function(value, metadata, record){
                alert(combo.store.getCount()); // <== always 0!!
                var record = combo.findRecord(combo.valueField || combo.displayField, value);
                return record ? record.get(combo.displayField) : combo.valueNotFoundText;
            }
        };

    When the grid is displayed the first time, I get "Value not found." instead of the displayField, and the renderer's alert (combo.store.getCount()) shows 0. Yet I can see in the console that the data has been loaded correctly! Even if I reload the store from the renderer (combo.store.load();), I still get the alert (0). But when I open the combo to change the value, I can see the data, and after I change the value I can see the displayField. I don't understand what the problem is. For several days now I have tried all the solutions I found, but still nothing. Any advice is welcome! Yoong
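    The symptoms fit an asynchronous-load race: the grid renders before the HttpProxy round trip finishes, and nothing tells the view to redraw once the records arrive, so the renderer sees an empty store on first paint. A small sketch of the usual Ext 3 workaround, where grid stands in for the EditorGridPanel instance (a name assumed here, not taken from the post):

        // once the combo's store has data, repaint the grid so the
        // renderer can resolve ids into display names
        my_store.on('load', function () {
            grid.getView().refresh();
        }, this, {single: true});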

    Read the article

  • How to convert a DataSet object into an ObjectContext (Entity Framework) object on the fly?

    - by Marcel
    Hi all, I have an existing SQL Server database where I store data from large, specific log files (often 100 MB and more), one per database. After some analysis the database is deleted again. From the database I have created both an Entity Framework model and a DataSet model via the Visual Studio designers. The DataSet is only for bulk importing data with SqlBulkCopy, after a quite complicated parsing process. All queries are then done using the Entity Framework model, whose CreateQuery method is exposed via an interface like this:

        public IQueryable<TTarget> GetResults<TTarget>() where TTarget : EntityObject, new()
        {
            return this.Context.CreateQuery<TTarget>(typeof(TTarget).Name);
        }

    Now, sometimes my files are very small, and in such a case I would like to skip the import into the database and just have an in-memory representation of the data, accessible as entities. The idea is to create the DataSet, but instead of bulk importing, to transfer it directly into an ObjectContext which is accessible via the interface. Does this make sense?

    Here's what I have done for this conversion so far: I traverse all tables in the DataSet, convert the single rows into entities of the corresponding type, and add them to an instantiated object of my typed entity context class, like so:

        MyEntities context = new MyEntities(); // create new in-memory context
        // ....
        // get the item in the navigations table
        // (here, a foreach would be necessary in a real-world scenario)
        MyDataSet.NavigationResultRow dataRow = ds.NavigationResult.First();
        // convert to an entity
        NavigationResult entity = new NavigationResult
        {
            Direction = dataRow.Direction,
            // ...
            NavigationResultID = dataRow.NavigationResultID
        };
        // add to entities
        context.AddToNavigationResult(entity);
        // ....

    Very tedious work, as I would need to create a converter for each of my entity types and iterate over each table in the DataSet I have. Beware, if I ever change my database model... Also, I have found that I can only instantiate MyEntities if I provide a valid connection string to a SQL Server database. Since I do not want to actually write to my fully fledged database each time, this hinders my intentions. I intend to have only some in-memory proxy database.

    Can I do this more simply? Is there some automated way of doing such a conversion, like generating an ObjectContext out of a DataSet object?

    P.S.: I have seen a few questions about unit testing that seem somewhat related, but not quite exact.
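    One way to cut down the per-entity boilerplate is a reflection-based row-to-entity copier. A minimal sketch, assuming entity property names match the DataSet column names (which should hold when both models were generated from the same database):

        using System;
        using System.Data;

        static class DataRowConverter
        {
            // Copies each matching, writable property from the row into a fresh entity.
            public static T ToEntity<T>(DataRow row) where T : new()
            {
                var entity = new T();
                foreach (DataColumn col in row.Table.Columns)
                {
                    var prop = typeof(T).GetProperty(col.ColumnName);
                    if (prop != null && prop.CanWrite && row[col] != DBNull.Value)
                        prop.SetValue(entity, row[col], null);
                }
                return entity;
            }
        }

        // usage, per table:
        // foreach (MyDataSet.NavigationResultRow r in ds.NavigationResult)
        //     context.AddToNavigationResult(DataRowConverter.ToEntity<NavigationResult>(r));

    This removes only the hand-written per-type converters, not the connection-string requirement for instantiating MyEntities.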

    Read the article

  • Can Spring.Net function as PostSharp?

    - by Alex K
    A few months back I discovered PostSharp, and for a time, it was good. But then legal came back with an answer saying they don't like the licence of the old versions. Then the department told me that 2.0's price was unacceptably high (for the number of seats we need)... I was extremely disappointed, but not disheartened. This can't be the only such framework, I thought. I kept looking for a replacement, but most of what I found was either dead, ill-maintained (especially in the documentation department), for academic use only, or all of the above (I'm looking at you, Aspect.Net).

    Then I discovered Spring.Net, and for a time, it was good. I read the documentation, and it painted what seemed to be a superior picture of an AOP nirvana. No longer was I locked to attributes to mark where I wanted code interception to take place; it could be configured in XML, and changes to it didn't require a recompile. Great. Then I looked at the samples and saw the following, in every single usage scenario:

        // Create AOP proxy using Spring.NET IoC container.
        IApplicationContext ctx = ContextRegistry.GetContext();
        ICommand command = (ICommand)ctx["myServiceCommand"];
        command.Execute();
        if (command.IsUndoCapable)
        {
            command.UnExecute();
        }

    Why must the first two lines of code exist? They ruin everything. They mean I cannot simply hand a user a set of aspects and attributes or XML configs to use by sticking the appropriate attributes on the appropriate methods/classes/etc., or by editing the match pattern in XML. They have to modify their program logic to make this work!

    Is there a way to make Spring.Net behave like PostSharp in this respect (i.e. the user only needs to add attributes/XML config, not edit the content of any methods)? Alternatively, is there a worthy and functioning alternative to PostSharp? I've seen a few questions titled like this on SO, but none of them were actually looking to replace PostSharp; they only wanted to supplement its functionality. I need a full replacement.

    Read the article

  • Hive MR map progress inconsistent and regularly restarts from 0%

    - by user92471
    I have a Yarn MR job (with two EC2 instances doing the mapreduce) over a dataset of approximately a thousand Avro records, and the map phase is behaving erratically; see the progress below. Of course I checked the logs on the resourcemanager and the nodemanagers and saw nothing suspicious, but these logs are too verbose. What is going on there?

        hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;
        Total MapReduce jobs = 1
        Launching Job 1 out of 1
        Number of reduce tasks is set to 0 since there's no reduce operator
        Starting Job = job_1352281315350_0020, Tracking URL = http://blabla.ec2.internal:8088/proxy/application_1352281315350_0020/
        Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=blabla.com:8032 -kill job_1352281315350_0020
        Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
        2012-11-07 11:14:40,976 Stage-1 map = 0%, reduce = 0%
        2012-11-07 11:15:06,136 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 10.38 sec
        2012-11-07 11:15:07,253 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:08,371 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:09,491 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
        2012-11-07 11:15:10,643 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 15.42 sec
        (...)
        2012-11-07 11:15:35,441 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec
        2012-11-07 11:15:36,486 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec

    here restart at 16% ?

        2012-11-07 11:15:37,692 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:38,815 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:39,865 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
        2012-11-07 11:15:41,064 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:42,181 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
        2012-11-07 11:15:43,299 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec

    here restart at 0% ?

        2012-11-07 11:15:44,418 Stage-1 map = 0%, reduce = 0%
        2012-11-07 11:16:02,076 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:03,193 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
        2012-11-07 11:16:04,259 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 8.45 sec
        (...)
        2012-11-07 11:16:31,291 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 35.34 sec
        2012-11-07 11:16:32,414 Stage-1 map = 26%, reduce = 0%, Cumulative CPU 37.93 sec

    here restart at 11% ?

        2012-11-07 11:16:33,459 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:34,507 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
        2012-11-07 11:16:35,731 Stage-1 map = 13%, reduce = 0%, Cumulative CPU 21.47 sec
        (...)
        2012-11-07 11:16:46,839 Stage-1 map = 17%, reduce = 0%, Cumulative CPU 24.14 sec

    here restart at 0% ?

        2012-11-07 11:16:47,939 Stage-1 map = 0%, reduce = 0%
        2012-11-07 11:16:56,653 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
        2012-11-07 11:16:57,814 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
        (...)

    Needless to say, the job crashes after some time with:

        Error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: -56
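    Progress that falls back like this usually means individual map task attempts are dying and being rescheduled; the job-level percentage shrinks each time an attempt's work is discarded. The stack trace behind the ArrayIndexOutOfBoundsException will live in the attempt-level logs rather than the daemon logs. A couple of starting points, assuming log aggregation is enabled on the cluster (the application and job ids are the ones printed above):

        yarn logs -applicationId application_1352281315350_0020 | grep -B 5 -A 30 ArrayIndexOutOfBounds
        mapred job -status job_1352281315350_0020   # completion state, counters, failure info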

    Read the article

  • Metro UsernameToken Policy

    - by Rodney
    I created a web services client prototype using APIs available in WebLogic 10.3. I've been told I need to use Metro 2.0 instead (it's already being used for other projects). The problem I have encountered is that the WSDL does not include any security policy information, but a UsernameToken is required for each method call. In WebLogic I was able to write my own policy XML file and instantiate my service with it (see below); however, I cannot seem to figure out how to do the same using Metro.

    Policy.xml:

        <?xml version="1.0"?>
        <wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200512">
          <sp:SupportingTokens>
            <wsp:Policy>
              <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200512/IncludeToken/AlwaysToRecipient">
                <wsp:Policy>
                  <sp:WssUsernameToken10/>
                  <sp:HashPassword/>
                </wsp:Policy>
              </sp:UsernameToken>
            </wsp:Policy>
          </sp:SupportingTokens>
        </wsp:Policy>

    Client.java (WebLogic):

        ClientPolicyFeature cpf = new ClientPolicyFeature();
        InputStream asStream = WebServiceSoapClient.class.getResourceAsStream("Policy.xml");
        cpf.setEffectivePolicy(new InputStreamPolicySource(asStream));
        try {
            webService = new WebService(new URL("http://192.168.1.10/WebService/WebService.asmx?wsdl"),
                    new QName("http://testme.com", "WebService"));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        WebServiceSoap client = webService.getWebServiceSoap(new WebServiceFeature[] {cpf});

        List<CredentialProvider> credProviders = new ArrayList<CredentialProvider>();
        String username = "user";
        String password = "pass";
        CredentialProvider cp = new ClientUNTCredentialProvider(username.getBytes(), password.getBytes());
        credProviders.add(cp);

        Map<String, Object> rc = ((BindingProvider) client).getRequestContext();
        rc.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credProviders);
        ...

    I am able to generate my proxy classes using Metro; however, I cannot figure out how to configure it to send the UsernameToken. I have attempted several different examples from the web, and none have worked. Any help would be appreciated.
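    Because the WSDL advertises no security policy, Metro's policy-driven (WSIT) security never engages on its own, so the token has to be supplied some other way. One portable fallback is a plain JAX-WS SOAPHandler that writes the WS-Security header onto each outgoing message. A sketch with hard-coded placeholder credentials; note it sends a plaintext PasswordText token, not the digest that sp:HashPassword requested (matching that would additionally need the password Type attribute plus Nonce and Created elements):

        import java.util.Set;
        import javax.xml.namespace.QName;
        import javax.xml.soap.SOAPElement;
        import javax.xml.soap.SOAPEnvelope;
        import javax.xml.soap.SOAPException;
        import javax.xml.soap.SOAPHeader;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        public class UsernameTokenHandler implements SOAPHandler<SOAPMessageContext> {
            private static final String WSSE =
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

            public boolean handleMessage(SOAPMessageContext ctx) {
                // only touch outbound messages
                if (Boolean.TRUE.equals(ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY))) {
                    try {
                        SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                        SOAPHeader header = env.getHeader() != null ? env.getHeader() : env.addHeader();
                        SOAPElement sec = header.addChildElement("Security", "wsse", WSSE);
                        SOAPElement tok = sec.addChildElement("UsernameToken", "wsse");
                        tok.addChildElement("Username", "wsse").addTextNode("user");
                        tok.addChildElement("Password", "wsse").addTextNode("pass");
                    } catch (SOAPException e) {
                        throw new RuntimeException(e);
                    }
                }
                return true;
            }

            public boolean handleFault(SOAPMessageContext ctx) { return true; }
            public void close(MessageContext ctx) { }
            public Set<QName> getHeaders() { return null; }
        }

    To attach it, take the proxy's handler chain via ((BindingProvider) client).getBinding().getHandlerChain(), add the handler, and set the chain back with setHandlerChain.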

    Read the article

  • Django TemplateSyntaxError only on live server (templates exist)

    - by Tom
    I'm getting a strange error that only occurs on the live server. My Django templates directory is set up like so:

        base.html
        two-column-base.html
        portfolio/
            index.html
        extranet/
            base.html
            index.html

    The portfolio pages work correctly locally on multiple machines. They inherit from either the root base.html or two-column-base.html. However, now that I've posted them to the live box (local machines are Windows, live is Linux), I get a TemplateSyntaxError, "Caught TemplateDoesNotExist while rendering: base.html", when I try to load any portfolio page. It seems to be a case where the extends tag won't work in that root directory (???). Even if I do a direct_to_template on two-column-base.html (which extends base.html), I get that error. The extranet pages all work perfectly, but those templates all live inside the /extranet folder and inherit from /extranet/base.html. Possible issues I've checked:

        - file permissions on the server are fine
        - the template directory is correct on the live box (I'm using os.path.dirname(os.path.realpath(__file__)) to make things work across machines)
        - the files exist, and the /templates directories exactly match my local copy
        - removing the {% extends %} block from the top of any broken template causes the template to render without a problem
        - manually starting a shell session and calling get_template on any of the files works, but trying to render blows up with the same exception on any of the extended templates; doing the same with base.html, it renders perfectly (base.html also renders via direct_to_template)

    Django 1.2, Python 2.6 on Webfaction. Apologies in advance, because this is my 3rd or 4th "I'm doing something stupid" question in a row. The only x-factor I can think of is that this is my first time using Mercurial instead of svn, and I'm not sure how I could have messed things up via that.

    EDIT: One possible source of problems: the local machine is Python 2.5, live is 2.6. Here's a traceback of me trying to render 'two-column-base.html', which extends 'base.html'. Both files are in the same directory, so if it can find the first, it can find the second. c is just an empty Context object.
        >>> render_to_string('two-column-base.html', c)
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 186, in render_to_string
            return t.render(context_instance)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 173, in render
            return self._render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 167, in _render
            return self.nodelist.render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/__init__.py", line 796, in render
            bits.append(self.render_node(node, context))
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/debug.py", line 72, in render_node
            result = node.render(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader_tags.py", line 103, in render
            compiled_parent = self.get_parent(context)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader_tags.py", line 100, in get_parent
            return get_template(parent)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 157, in get_template
            template, origin = find_template(template_name)
          File "/home/lightfin/webapps/django/lib/python2.6/django/template/loader.py", line 138, in find_template
            raise TemplateDoesNotExist(name)
        TemplateSyntaxError: Caught TemplateDoesNotExist while rendering: base.html

    I'm wondering if this is somehow related to the template caching that was just added to Django.

    EDIT 2 (per lazerscience): template-related settings:

        import os
        PROJECT_ROOT = os.path.dirname(os.path.realpath(__file__))
        TEMPLATE_DIRS = (
            os.path.join(PROJECT_ROOT, 'templates'),
        )

    Sample view:

        def project_list(request, jobs, extra_context={}):
            context = {
                'jobs': jobs,
            }
            print context
            context.update(extra_context)
            return render_to_response('portfolio/index.html', context,
                                      context_instance=RequestContext(request))

    The templates, in reverse order, are:

        http://thosecleverkids.com/junk/index.html
        http://thosecleverkids.com/junk/portfolio-base.html
        http://thosecleverkids.com/junk/two-column-base.html
        http://thosecleverkids.com/junk/base.html

    though in the real project the first two live in a directory called "portfolio".
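    Since the live box is the first Linux host in the mix, filename case is worth ruling out: Linux filesystems distinguish base.html from Base.html, while the Windows checkouts would not have. A quick check from the live shell (paths per the settings above):

        python manage.py shell
        >>> from django.conf import settings
        >>> import os
        >>> print settings.TEMPLATE_DIRS
        >>> print os.listdir(settings.TEMPLATE_DIRS[0])  # is base.html there, with exactly that case?

    A Mercurial repository preserves whatever case was first committed, and a case-insensitive Windows filesystem can hide a mismatch, so one mis-cased commit would be enough to produce exactly this works-locally-fails-live pattern.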

    Read the article

  • WCF web services and FaultContract - client receives SoapException instead of FaultException<TDetails>

    - by Alessandro Di Lello
    Hi all, I'm developing a WCF web service and consuming it within an MVC2 application. My problem is that I'm using FaultContracts on my methods with a custom fault detail, and I'm throwing the FaultException manually, but when the client receives the exception, it receives a normal SoapException instead of the FaultException that I threw from the service side. Here is some code.

    Custom fault detail class:

        [DataContract]
        public class MyFaultDetails
        {
            [DataMember]
            public string Message { get; set; }
        }

    Operation on the service contract:

        [OperationContract]
        [FaultContract(typeof(MyFaultDetails))]
        void ThrowException();

    Implementation:

        public void ThrowException()
        {
            var details = new MyFaultDetails { Message = "Exception Test" };
            throw new FaultException<MyFaultDetails>(details,
                new FaultReason(details.Message), new FaultCode("MyFault"));
        }

    Client side:

        try
        {
            // Obviously proxy init etc..
            service.ThrowException();
        }
        catch (FaultException<MyFaultDetails> ex)
        {
            // stuff
        }
        catch (Exception ex)
        {
            // stuff
        }

    What I expect is to catch the FaultException; instead, that catch is skipped and the next one is taken, with an exception of type SoapException. Am I missing something? I read a lot of threads about using FaultContracts within WCF, and what I did seems to be right. I had a look at the generated WSDL and XSD and they look fine. Here's a snippet regarding this method:

        <wsdl:operation name="ThrowException">
          <wsdl:input wsaw:Action="http://tempuri.org/IAnyJobService/ThrowException" message="tns:IAnyJobService_ThrowException_InputMessage" />
          <wsdl:output wsaw:Action="http://tempuri.org/IAnyJobService/ThrowExceptionResponse" message="tns:IAnyJobService_ThrowException_OutputMessage" />
          <wsdl:fault wsaw:Action="http://tempuri.org/IAnyJobService/ThrowExceptionAnyJobServiceFaultExceptionFault" name="AnyJobServiceFaultExceptionFault" message="tns:IAnyJobService_ThrowException_AnyJobServiceFaultExceptionFault_FaultMessage" />
        </wsdl:operation>

        <wsdl:operation name="ThrowException">
          <soap:operation soapAction="http://tempuri.org/IAnyJobService/ThrowException" style="document" />
          <wsdl:input>
            <soap:body use="literal" />
          </wsdl:input>
          <wsdl:output>
            <soap:body use="literal" />
          </wsdl:output>
          <wsdl:fault name="AnyJobServiceFaultExceptionFault">
            <soap:fault use="literal" name="AnyJobServiceFaultExceptionFault" namespace="" />
          </wsdl:fault>
        </wsdl:operation>

    Any help? Thanks in advance. Regards, Alessandro
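    One detail the symptom itself gives away: SoapException is defined in System.Web.Services.Protocols, which is the legacy ASMX client stack; a WCF-generated proxy surfaces faults as System.ServiceModel.FaultException instead. Seeing SoapException on the client points at the proxy having been created with "Add Web Reference" (ASMX) rather than "Add Service Reference" (WCF). Regenerating it as a WCF proxy, sketched here with a placeholder service address:

        svcutil.exe http://yourhost/AnyJobService.svc?wsdl /out:AnyJobServiceProxy.cs

    should yield operations decorated with [FaultContract(typeof(MyFaultDetails))], after which the typed catch block becomes reachable.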

    Read the article

  • Something wrong with my XML?

    - by Prateek Raj
    Hi everyone, I'm parsing an XML file with my ExtJS, but it returns only the first of the five components.

        Ext.regModel('Card', {
            fields: ['investor']
        });

        var store = new Ext.data.Store({
            model: 'Card',
            proxy: {
                type: 'ajax',
                url: 'xmlformat.xml',
                reader: {
                    type: 'xml',
                    record: 'investors'
                }
            },
            listeners: {
                single: true,
                datachanged: function(){
                    Ext.getBody().unmask();
                    var items = [];
                    store.each(function(rec){
                        alert(rec.get('investor'));
                    });

    And my XML file is:

        <?xml version="1.0" encoding="UTF-8"?>
        <root>
            <investors>
                <investor>Active</investor>
                <investor>Aggressive</investor>
                <investor>Conservative</investor>
                <investor>Day Trader</investor>
                <investor>Very Active</investor>
            </investors>
            <events>
                <event>3 Month Expiry</event>
                <event>LEAPS</event>
                <event>Monthlies</event>
                <event>Monthly Expiries</event>
                <event>Weeklies</event>
            </events>
            <prices>
                <price>$0.5</price>
                <price>$0.05</price>
                <price>$1</price>
                <price>$22</price>
                <price>$100.34</price>
            </prices>
        </root>

    When I run the code, only "Active" comes out. I know that I'm doing something wrong, but I'm not sure what. Please help.
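    For what it's worth, the XML reader's record config names the element that delimits each record; pointed at the <investors> wrapper, it yields exactly one record, which matches the single "Active" result. The usual shape is to target the repeated element instead. A sketch, untested against this exact Ext version:

        reader: {
            type: 'xml',
            record: 'investor'  // one record per <investor> element
        }

    Depending on the Ext version, the 'investor' field may also need an explicit mapping to read the element's own text content.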

    Read the article
