Search Results

Search found 11465 results on 459 pages for 'css transforms'.


  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab and, under Authentication and Access Control, using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!


  • nginx proxypath https redirects to http

    - by Thermionix
    I'm trying to set up nginx to forward requests to several backend services using proxy_pass; however, several pages load with 404s. The links on the pages have https:// in front, but result in an http request - which ends in a 404 - and I only want these services to be available through https. I've tried varied trailing forward slashes appended to the proxy path and location in proxy.conf, and I've also tried commenting out www.conf (just in case its location blocks could have caused any conflicts), to no effect.

    So if a link is to https://example.com/sickbeard/errorlogs in a browser: when loaded, https://example.com/sickbeard/errorlogs gives a 404 in a browser, while https://example.com/sickbeard/errorlogs/ loads.

    nginx error log:

        2011/11/23 14:21:58 [error] 28882#0: *6 "/var/www/sickbeard/errorlogs/recent.html" is not found (2: No such file or directory), client: 192.168.1.99, server: example.com, request: "GET /sickbeard/errorlogs/ HTTP/1.1", host: "example.com"

    Config files:

    proxy.conf

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
            location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
                expires max;
            }
            location ~ \.php$ {
                fastcgi_index index.php;
                include fastcgi_params;
                if (-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9000;
                }
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
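
    A hedged sketch (my addition, not from the original question): with proxy_redirect off, nginx passes the backend's http:// Location headers through unchanged, and the backend never learns the client arrived over TLS. Something along these lines is a common fix, assuming the sickbeard backend honors X-Forwarded-Proto:

        location /sickbeard {
            proxy_pass http://localhost:8081/sickbeard;
            include proxy.inc;
            # rewrite backend redirects back onto the public https URL
            proxy_redirect http://localhost:8081/ https://$host/;
            # tell the backend the original request was https
            proxy_set_header X-Forwarded-Proto $scheme;
        }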


  • Why isn't this rewrite rule (nginx) applied? (trying to setup Wordpress multisite)

    - by Brian Park
    Hi, I'm trying to set up Wordpress multisite (subfolder structure) with nginx, but having a problem with this rewrite rule. Below is the Apache .htaccess, which I have to translate into nginx configuration.

        RewriteEngine On
        RewriteBase /blogs/
        RewriteRule ^index\.php$ - [L]

        # uploaded files
        RewriteRule ^([_0-9a-zA-Z-]+/)?files/(.+) wp-includes/ms-files.php?file=$2 [L]

        # add a trailing slash to /wp-admin
        RewriteRule ^([_0-9a-zA-Z-]+/)?wp-admin$ $1wp-admin/ [R=301,L]

        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(wp-(content|admin|includes).*) $2 [L]
        RewriteRule ^([_0-9a-zA-Z-]+/)?(.*\.php)$ $2 [L]
        RewriteRule . index.php [L]

    Below is what I came up with:

        server {
            listen 80;
            server_name example.com;
            server_name_in_redirect off;
            expires 1d;

            access_log /srv/www/example.com/logs/access.log;
            error_log /srv/www/example.com/logs/error.log;

            root /srv/www/example.com/public;
            index index.html;
            try_files $uri $uri/ /index.html;

            # rewriting uploaded files
            rewrite ^/blogs/(.+/)?files/(.+) /blogs/wp-includes/ms-files.php?file=$2 last;

            # add a trailing slash to /wp-admin
            rewrite ^/blogs/(.+/)?wp-admin$ /blogs/$1wp-admin/ permanent;

            if (!-e $request_filename) {
                rewrite ^/blogs/(.+/)?(wp-(content|admin|includes).*) /blogs/$2 last;
                rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last;
            }

            location /blogs/ {
                index index.php;
                #try_files $uri $uri/ /blogs/index.php?q=$uri&$args;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/www/example.com/public$fastcgi_script_name;
            }

            # static assets
            location ~* ^.+\.(manifest)$ {
                access_log /srv/www/example.com/logs/static.log;
            }

            location ~* ^.+\.(ico|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                # only set expires max IFF the file is a static file and exists
                if (-f $request_filename) {
                    expires max;
                    access_log /srv/www/example.com/logs/static.log;
                }
            }
        }

    In the above code, I believe

        rewrite ^/blogs/(.+/)?(.*\.php)$ /blogs/$2 last;

    has no effect, because when I look at the error log, I see the following line:

        2010/09/15 01:14:55 [error] 10166#0: *8 "/srv/www/example.com/public/blogs/test/index.php" is not found (2: No such file or directory), request: "GET /blogs/test/ HTTP/1.1"

    (Here, 'test' is the second blog created using the multisite feature.) What I'm expecting is that /blogs/test/index.php gets rewritten to /blogs/index.php, but it doesn't seem to do that... Am I overlooking something obvious? Thanks!
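
    A hedged note (mine, not from the original question): if-based rewrites at server level interact awkwardly with the internal redirect the index directive issues for directory requests like /blogs/test/. A commonly suggested shape for subfolder multisite is to drop the if block and let try_files route misses to the front controller - essentially the line already commented out in the config above:

        location /blogs/ {
            index index.php;
            # fall back to the main front controller for anything that
            # does not exist on disk (paths taken from the question)
            try_files $uri $uri/ /blogs/index.php?q=$uri&$args;
        }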


  • Good maintained privacy Add-On/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons that I discovered:

    - HTTPS Everywhere: has only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content; but the author sometimes seems to make "untrustworthy" decisions.
    - RequestPolicy: seems dead (last activity 6 months ago, has some annoying bugs, official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests. It blocks by default and lets you add sites to a whitelist; once this is done it works interaction-free in the background.
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (can never be fully sure, need to trust others).
    - Disconnect: like AdBlock Edge, just looking different; has no interaction possibilities (can never be fully sure, need to trust others, cannot interact even if I wanted to).
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.

    All these add-ons together basically block everything insecure, but there is a lot of redundancy: NoScript has a mixed-content blocker, but Firefox has had its own for a while now. The cookie blocker in NoScript is redundant with my Firefox cookie settings. NoScript also has an XSS blocker, which is redundant with RequestPolicy. Disconnect and AdBlock are extremely redundant with each other, but not fully. And there are some bugs (especially in RequestPolicy). And RequestPolicy seems to be dead.

    All in all, this list is great but has these heavy drawbacks. My favourite set would be "NoScript Light" (only script blocking, without all the additional redundant-to-other-add-ons hick-hack it does) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfill my wishes?


  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to set up Wordpress Multisite, using subdirectories, with nginx, php5-fpm, APC, and Batcache. Like many other people, I am getting stuck on the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get:

    http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/
    http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules

    It is partially working: http://blog.ssis.edu.vn works, and so does http://blog.ssis.edu.vn/umasse/. But other permalinks, like these two to a post or to a static page, don't work:

    http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/
    http://blog.ssis.edu.vn/umasse/sample-page/

    They either take you to a 404 error, or to some other blog! Here is my configuration:

        server {
            listen 80 default_server;
            server_name blog.ssis.edu.vn;
            root /var/www;

            access_log /var/log/nginx/blog-access.log;
            error_log /var/log/nginx/blog-error.log;

            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # Add trailing slash to */username requests
            rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent;

            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }

            # this prevents hidden files (beginning with a period) from being served
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }

            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }

            location ~ \.php$ {
                # Forbid PHP on upload dirs
                if ($uri ~ "uploads") {
                    return 403;
                }
                client_max_body_size 25M;
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
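
    A hedged debugging aid (my addition, not from the original question): nginx can log every rewrite decision, which makes it much easier to see which of the multisite rules a given permalink actually hits. The directives below are standard nginx; the log path is the one from the question:

        # inside the server block: trace rewrite processing at notice level
        error_log /var/log/nginx/blog-error.log notice;
        rewrite_log on;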


  • Nginx giving a lot of 502 errors

    - by Loki
    A while ago I installed nginx and everything seemed to be working fine; recently I found out that about 20% of the time users are getting 502 errors. This is also noticeable when Google tries to crawl my site in Webmaster Tools (from 10000 posts, approx. 2000 502 errors). At first I was thinking of disabling nginx, but I'd really like to keep using it. I'm running it on a server with 2GB RAM and 4 reserved CPU cores, with WHM/cPanel installed and Mod_Ruid2 enabled + DSO as a PHP handler with APC caching installed. Is there anything I can change in the config that can fix this?

    I have installed Nginx Admin in WHM and here is what's in the configuration editor screen:

        user nobody;
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;

        events {
            worker_connections 10240; # increase for busier servers
            use epoll; # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    I hope someone can help me out. Thanks in advance!
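
    A hedged sketch (my addition): with cPanel's Nginx Admin, nginx proxies to Apache, and intermittent 502s usually mean the Apache/PHP backend is dropping or timing out connections. Assuming the vhost includes under /etc/nginx/vhosts/ use proxy_pass, settings along these lines give the backend more headroom; the values are illustrative, not tuned:

        # retry on another attempt if the backend errors out or times out
        proxy_next_upstream error timeout http_502;
        proxy_connect_timeout 30;
        proxy_read_timeout 300;
        proxy_buffer_size 128k;
        proxy_buffers 8 128k;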


  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I try to access the dashboard of a site I just created (localhost/wptest/site/wp-admin) I get "This webpage has a redirect loop", and when I try to access the actual website (localhost/wptest/site) the page loads, but without assets such as CSS. When I access the network dashboard, or the primary site dashboard on localhost/wptest, everything is just fine. Also, when I edit the permalink of the second site in the network dashboard to be like this: localhost/site, it also runs fine. How do I make it work with the default permalink structure, localhost/wptest/site?

    The Wordpress files are in /usr/share/html/wptest. The wp-config.php is as follows:

        define('WP_ALLOW_MULTISITE', true);
        define('MULTISITE', true);
        define('SUBDOMAIN_INSTALL', false);
        define('DOMAIN_CURRENT_SITE', 'localhost');
        define('PATH_CURRENT_SITE', '/wptest/');
        define('SITE_ID_CURRENT_SITE', 1);
        define('BLOG_ID_CURRENT_SITE', 1);

    And the server block / virtual host is like this:

        server {
            ##DM - uncomment following line for domain mapping
            listen 80 default_server;
            #server_name example.com *.example.com ;
            ##DM - uncomment following line for domain mapping
            #server_name_in_redirect off;

            access_log /var/log/nginx/example.com.access.log;
            error_log /var/log/nginx/example.com.error.log;

            root /usr/share/nginx/html/wptest;
            index index.html index.htm index.php;

            if (!-e $request_filename) {
                rewrite /wp-admin$ $scheme://$host$uri/ permanent;
                rewrite ^(/[^/]+)?(/wp-.*) $2 last;
                rewrite ^(/[^/]+)?(/.*\.php) $2 last;
            }

            location / {
                try_files $uri $uri/ /index.php?$args ;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }

            location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log off;
                log_not_found off;
                expires max;
            }

            location = /robots.txt {
                access_log off;
                log_not_found off;
            }

            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }
        }

    And finally here's an error log entry:

        2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"
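
    A hedged sketch (mine, not from the original question): the multisite rewrites above assume Wordpress lives at /, but this install lives under /wptest/, so stripping the first path segment also strips the base path. One way to adapt them, keeping the /wptest prefix and removing only the per-site segment (untested; names taken from the question):

        if (!-e $request_filename) {
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            # strip only the site segment, keep the /wptest base path
            rewrite ^/wptest(/[^/]+)?(/wp-.*) /wptest$2 last;
            rewrite ^/wptest(/[^/]+)?(/.*\.php) /wptest$2 last;
        }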


  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

    Setup: PHP 5.4.6 (PHP54w), CentOS 6.2, Joomla 2.5.6, PHP54w-fpm.i386 (FastCGI process manager); php -m shows the mysql and mysqli modules loaded.

    Nginx seems to have installed fine via yum, and it can process a PHP info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead, and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference.

    I have a pretty standard nginx default.conf:

        server {
            listen 80;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;

            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;

            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;

            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }

            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API                              FPM/FastCGI
        Virtual Directory Support               disabled
        Configuration File (php.ini) Path       /etc
        Loaded Configuration File               /etc/php.ini
        Scan this dir for additional .ini files /etc/php.d
        Additional .ini files parsed            /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API                              Apache 2.0 Handler
        Virtual Directory Support               disabled
        Configuration File (php.ini) Path       /etc
        Loaded Configuration File               /etc/php.ini
        Scan this dir for additional .ini files /etc/php.d
        Additional .ini files parsed            /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP is loading substantially more additional .ini files than under nginx, including ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini). Any ideas how I get nginx to also pick up these additional .ini files? Thanks in advance, Steve


  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) set up on my development machine, a T60 modified with an SSD. The server base is in /var/www. Apache has the following config:

        DocumentRoot /var/www
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>

    I'm currently developing a SilverStripe CMS featured site. The folder for the server is /var/www/sfk/. The framework and all CMS-relevant features are in their respective folders. The only folder that needs to be modified is /var/www/sfk/mysite. Because of that I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done:

        ln -s ~/sfk/mysite/ /var/www/sfk/
        sudo chgrp www-data /var/www/sfk/mysite -R

    ls tells me the following (excerpt of /var/www/sfk):

        drwxr-xr-x  3 marvin www-data 4096 Nov 16 16:53 assets
        drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms
        drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework
        -rw-r--r--  1 marvin www-data 2410 Nov 16 16:53 index.php
        lrwxrwxrwx  1 marvin www-data   24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/
        -rw-rw-r--  1 marvin www-data  514 Nov 16 16:55 _ss_environment.php
        drwxr-xr-x  4 marvin www-data 4096 Nov 16 16:53 themes

    and ls /var/www/sfk/mysite/:

        drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config
        -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images
        drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript
        drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates

    This is literally the same setup I have on my desktop machine. The problem is that the mysite/ folder is just not recognized. I'm thankful for every piece of advice I get. I'm frustrated because I've been stuck on this issue for hours.


  • NGiNX performance degrades over time.

    - by Rylea Stark
    So here's the situation: I run a small cluster - a dedicated box for MySQL and a dedicated PHP-FPM/nginx box. Nginx talks to php-fpm via socket. As far as I can tell the problem does not lie in php-fpm; it lies somewhere in my configuration. What happens is that the site loads instantly for a few moments after starting, then slowly degrades to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load. PHP is configured to close a child after 175 requests, spawn 20 at start, and have a max of 60. Not really sure where the bottleneck is; most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, and I really don't want to do that. nginx.conf configuration below.

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 512;
            multi_accept on;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            resolver_timeout 5s;
            satisfy all;

            ## Size Limits
            limit_zone brainbug $binary_remote_addr 5m;
            client_body_buffer_size 8k;
            client_header_buffer_size 75M;
            client_max_body_size 1k;
            large_client_header_buffers 2 1k;

            ## Timeouts
            client_body_timeout 60;
            client_header_timeout 60;
            keepalive_timeout 60;
            send_timeout 60;

            ## General Options
            ignore_invalid_headers on;
            recursive_error_pages on;
            sendfile on;
            server_name_in_redirect off;
            server_tokens off;

            ## TCP options
            tcp_nodelay on;
            #tcp_nopush on;
            output_buffers 128 512k;

            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 7;
            gzip_proxied any;
            gzip_min_length 0;
            gzip_buffers 32 32k;
            gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif;
            ## Disable GZIP for MSIE 1-6
            gzip_disable "MSIE [1-6].(?!.*SV1)";
            ## Set a vary header so downstream proxies don't send cached gzipped content to IE6
            gzip_vary on;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }
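
    A hedged observation (my note, not part of the original question): two of the size limits above look inverted - a 75M header buffer with a 1k body limit would reject almost any POST and let every connection pin a huge buffer. Values of this shape are more conventional (illustrative numbers, not a verified fix for the slowdown):

        client_header_buffer_size 1k;
        large_client_header_buffers 4 8k;
        client_max_body_size 75M;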


  • nginx + php fpm -> 404 php pages - file not found

    - by Mahesh
    I'm getting this in the error log:

        *2037 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream

    Here is the server block:

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6

            server_name .site.com;
            root /var/www/site;
            error_page 404 /404.php;
            access_log /var/log/nginx/site.access.log;
            index index.html index.php;

            if ($http_host != "www.site.com") {
                rewrite ^ http://www.site.com$request_uri permanent;
            }

            location ~* \.php$ {
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 4k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
                include /etc/nginx/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            }

            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            location ~ /(libraries|setup/frames|setup/libs) {
                deny all;
                return 404;
            }

            location ~ ^/uploads/(\d+)/(\d+)/(\d+)/(\d+)/(.*)$ {
                alias /var/www/site/images/missing.gif;
                # i need to modify this to show only missing files.
                # right now it is showing missing for all the files.
            }

            location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
                access_log off;
                expires 20d;
            }

            location /user_uploads/ {
                location ~ .*\.(php)?$ {
                    deny all;
                }
            }

            location ~ /\.ht {
                deny all;
            }
        }

    The php-fpm config is default and is not touched. The problem is a little strange for me: error pages show "File not found" only if they are .php files. Other missing files clearly call the 404.php file: site.com/test calls 404.php, but site.com/test.php gives "File not found". I keep searching and making changes, but it hasn't solved the problem.
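
    A hedged sketch (my addition): "Primary script unknown" is php-fpm saying the file named in SCRIPT_FILENAME does not exist. For a request like /test.php, the regex location passes the nonexistent script straight to php-fpm, which answers "File not found" before nginx's error_page can fire. Two standard directives cover both ends of this (paths taken from the question):

        location ~* \.php$ {
            # serve nginx's own 404 when the script is missing on disk
            try_files $uri =404;
            # also honor error_page for error statuses FastCGI returns
            fastcgi_intercept_errors on;
            fastcgi_pass 127.0.0.1:9000;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }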


  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxying for a Rails 3 app running on another machine. The setup works, except that URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The "Reverse Proxy Scenario" is exactly what I'm trying to do, so I've followed the tutorial as completely as I know how. I've applied or tried answers supplied here on stackoverflow, to no effect.

    According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file, and when I examine the output from apachectl -t -D DUMP_MODULES, all the expected modules show up in the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the Rails app server is Mac OS X 10.6).

    What's working right now is access to the webapp via the reverse proxy as:

        http://www.ourdomain.org/apphost/railsappname/controllername/action

    But none of the javascript files, css files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule were configured incorrectly (so of course I've focused on that, and can't seem to get anything added or deleted in the process of passing the html in from the apphost and out through the Apache server). For instance, hovering over an action link in the html returned by the web app you'll get:

        http://www.ourdomain.org/railsappname/controllername/action

    Here's what my Apache directives look like:

        LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so
        LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so

        ProxyHTMLLogVerbose On
        LogLevel Debug

        ProxyPass /apphost/ http://apphost.local/

        <Location /apphost/>
            SetOutputFilter INFLATE;proxy-html;DEFLATE
            ProxyPassReverse /
            ProxyHTMLExtended On
            ProxyHTMLURLMap railsappname/ apphost/railsappname/
            RequestHeader unset Accept-Encoding
        </Location>

    After every change I make to httpd.conf I religiously check apachectl -t just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine seem not to overrule what I'm doing here. And yet nothing that I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app.

    Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to show what it's working on and doing to the html coming from my web app. That's what I understood the ProxyHTMLLogVerbose On and LogLevel Debug settings to be for, but I'm not seeing anything in the log files.


  • nginx+django serving static files

    - by avalore
    I have followed the instructions for setting up django with nginx from the django wiki (https://code.djangoproject.com/wiki/DjangoAndNginx) and have nginx set up as follows (a few name changes to fit my setup):

        user nginx nginx;
        worker_processes 2;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';

            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;
            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;
            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;
            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /static/ {
                    root /srv/static/;
                }

                location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov) {
                    access_log off;
                    expires 30d;
                }

                location / {
                    # host and port to fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Static files aren't being served (nginx 404). If I look in the access log, it seems nginx is looking in /etc/nginx/html/static... rather than /srv/static/ as specified in the config. I've no clue why it's doing this; any help would be hugely appreciated.
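
    A hedged sketch (my addition): two things conspire here. The regex extension location sets no root, so matching requests fall back to nginx's compiled-in default (hence the /etc/nginx/html/static paths in the access log), and inside a prefix location root is prepended to the full URI, so /srv/static/static/... would be searched even when the prefix wins. Marking the prefix with ^~ (so regex locations are skipped) and mapping it with alias addresses both, assuming the files really live in /srv/static/:

        location ^~ /static/ {
            alias /srv/static/;   # /static/logo.png -> /srv/static/logo.png
        }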


  • Help needed setting up nginx to serve static files.

    - by Catalina
    Hi guys, I'm trying to set up nginx to serve static files. Basically all I need is to have http://mydomain.com/site_media/ point to /var/django/myproject/site_media. I have tried so many configurations, and when I test I always get a 404 error for static files. Can anyone please tell me what I'm doing wrong or how I should be setting this up? This is my current nginx configuration file:

        user www-data;
        worker_processes 1;
        #error_log /usr/local/nginx/logs/error.log;
        #pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            # Enumerate all the Tornado servers here
            upstream frontends {
                server 127.0.0.1:8000;
                server 127.0.0.1:8001;
                server 127.0.0.1:8002;
                server 127.0.0.1:8003;
            }

            include mime.types;
            default_type application/octet-stream;
            #access_log /usr/local/nginx/logs/access.log;

            keepalive_timeout 65;
            proxy_read_timeout 200;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain text/html text/css text/xml application/x-javascript application/xml application/atom+xml text/javascript;

            proxy_next_upstream error;

            server {
                listen 80;

                # Allow file uploads
                client_max_body_size 50M;

                location ^~ /site_media/ {
                    root /var/django/myproject/site_media;
                    if ($query_string) {
                        expires max;
                    }
                }
                location = /favicon.ico {
                    rewrite (.*) /site_media/favicon.ico;
                }
                location = /robots.txt {
                    rewrite (.*) /site_media/robots.txt;
                }

                location / {
                    proxy_pass_header Server;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Scheme $scheme;
                    proxy_pass http://frontends;
                }
            }

            #include /usr/local/nginx/sites-enabled/*;
        }

    Thanks, Cata
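
    A hedged sketch (my addition): with root, nginx appends the full request URI to the path, so /site_media/logo.png is looked up at /var/django/myproject/site_media/site_media/logo.png. alias maps the matched prefix onto the directory instead (assuming the files live in /var/django/myproject/site_media/):

        location ^~ /site_media/ {
            alias /var/django/myproject/site_media/;
        }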


  • nginx rewrite rule to convert URL segments to query string parameters

    - by Nick
    I'm setting up an nginx server for the first time, and having some trouble getting the rewrite rules right for nginx. The Apache rules we used were:

        # See if it's a real file or directory; if so, serve it,
        # then send all requests for / to Director.php
        DirectoryIndex Director.php

        # If the URL has one segment, pass it as rt
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]

        # If the URL has two segments, pass it as rt and action
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    My nginx config file looks like:

        server {
            ...
            location / {
                try_files $uri $uri/ /index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    How do I get the URL segments into query string parameters like in the Apache rules above?

    UPDATE 1

    Trying Pothi's approach:

        # serve static files directly
        location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ {
            access_log off;
            expires 30d;
        }

        location / {
            try_files $uri $uri/ /Director.php;
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1" last;
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1&action=$2" last;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    This produces the output "No input file specified." on every request. I'm not clear on whether the .php location gets triggered (and subsequently passed to PHP) when a rewrite in any block points at a .php file.

    UPDATE 2

    I'm still confused on how to set up these location blocks and pass the parameters.

        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME ${document_root}Director.php?rt=$1{$args};
            include fastcgi_params;
        }

    UPDATE 3

    It looks like the root directive was missing, which caused the "No input file specified." message. Now that this is fixed, I get the index file as if the URL were / on every request, regardless of the number of URL segments. It appears that my location regular expression is being ignored. My current config is:

        # This location is ignored:
        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            set $args $query_string&rt=$1;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location / {
            try_files $uri $uri/ /Director.php;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
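
    A hedged sketch (my addition, not from the thread): prefix locations don't interpret regexes, which is why the /([a-zA-Z0-9\-\_]+)/ block is ignored - a regex location would need the ~ modifier. A shape that stays close to the Apache behavior is to serve real files first and hand misses to a named location where the rewrites run; when a rewrite replacement contains its own arguments, nginx appends the original query string after them, which reproduces [QSA]:

        location / {
            try_files $uri $uri/ @director;
        }

        location @director {
            # two segments -> rt and action; one segment -> rt
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" /Director.php?rt=$1&action=$2 last;
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" /Director.php?rt=$1 last;
            rewrite ^ /Director.php last;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }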


  • How to configure apache's mod_proxy_html to work as an ajax proxy?

    - by dcerecedo
    I'm trying to build a web site that lets you view and manipulate data from any page in any other website. To do that, I have to bypass 'Allow Origin' problems: I'm loading the other domain's content in an iframe and I have to manipulate its content with javascript downloaded from my domain.

    My first attempt was to write a simple proxy myself, requesting the other domain's page through a server proxy coded in Java that not only serves the content but rebuilds links (src's and href's) in the content, so that content referenced by these links also gets downloaded through my handmade proxy. The result is not bad, but it has problems with URLs in CSS and scripts. It's then that I realized that mod_proxy_html is supposed to do exactly this job. The problem is that I cannot figure out how to make it work as expected.

    Let's suppose my server runs at my-domain.com; to proxy and transform content from another domain I'd make a request like this:

        my-domain.com/proxy?url=http://another-domain.com/some/content

    I'd want mod_proxy_html to serve the content and rewrite the following URLs in http://another-domain.com/some/content in the following ways:

    - Absolute URLs not from another-domain.com: no rewriting
    - Root-relative URLs: /other/content -> /proxy?url=http://another-domain.com/other/content
    - Relative URLs: other/content -> /proxy?url=http://another-domain.com/some/content/other/content
    - Relative-to-parent URLs: ../other/content -> /proxy?url=http://another-domain.com/some/other/content

    The URL should be specified at runtime, not configuration time. Can this be achieved with mod_proxy_html? Could anyone provide a simple working configuration to start with?

    EDIT 1 - First approach

    The following site config will work fine with sites that use absolute URLs everywhere, like http://www.huffingtonpost.es/. You could try this config on localhost: http://localhost/asset/http://www.huffingtonpost.es/

        <VirtualHost *:80>
            ServerName localhost
            LogLevel debug
            ProxyRequests off
            RewriteEngine On
            RewriteRule ^/asset/(.*) $1 [P]
            ProxyHTMLURLMap $1 /asset/

            <Location /asset/>
                ProxyPassReverse /
                ProxyHTMLURLMap / /asset/
            </Location>
        </VirtualHost>

    But as explained in the documentation, if I hit a site using relative URLs, I'd like to have these rewritten in the html via mod_proxy_html. So I should change the Location block as follows:

        <Location /asset/>
            ProxyPassReverse /
            # Depending on your system use one line or the other
            # Ubuntu:
            #SetOutputFilter proxy-html
            # any other system:
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /asset/
        </Location>

    ...which doesn't seem to work. Comments, hints and ideas welcome!


  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in urls

    - by Tamerax
    Hey! I have two questions for nginx users.

    1) I'm trying to set up my Joomla server on my new Linode running nginx, and after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using an Apache system on the old server, which used mod_rewrite, and life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works, BUT it constantly leaves index.php in the URLs, e.g. http://mysite.com/index.php/forum. I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla, as far as I'm aware. I know in Wordpress it IS possible, but I have to use a plugin. Here is my config file:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;

            access_log /home/public_html/mysite.com/log/access.log;
            error_log /home/public_html/mysite.com/log/error.log;

            root /home/public_html/mysite.com/public/;
            large_client_header_buffers 4 8k; # prevent some 400 errors

            index index.php index.html;
            fastcgi_index index.php;

            location / {
                expires 30d;
                error_page 404 = @joomla;
                log_not_found off;
            }

            # Rewrite
            location @joomla {
                rewrite ^(.*)$ /index.php?q=last;
            }

            # Static Files
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
            }

            # PHP
            location ~ \.php {
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
            }
        }

    2) The second question should be easy, but I can't get it to work. I want either mysite.com or www.mysite.com to forward to mysite.com/portal. Basically, when you hit the front page with or without the www, it all gets forwarded to a subdirectory on the server I called portal. I have tried several variations of:

        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;

    but it usually ends with Firefox telling me there is some crazy loop happening, with the address bar saying something like mysite.com/portalportalportalportalportal... on and on. So, any help on either of these issues would be awesome!! Thanks!!
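
    A hedged sketch (my addition, not from the original post): for both questions the usual nginx idioms are try_files for the SEF fallback (so Joomla's router sees /forum instead of /index.php/forum - on the Joomla side, the "Use URL Rewriting" option in Global Configuration would also need to be on, which is an assumption about this setup) and an exact-match location for the front page (so the redirect cannot re-match /portal and loop):

        # 1) SEF URLs without index.php: serve real files, else hand the
        #    request to Joomla's front controller
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # 2) redirect only the bare front page to the subdirectory
        location = / {
            return 301 /portal/;
        }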


  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven't used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at: http://NuGet.org

    Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box. Click the Install button and the latest version of the Ajax Control Toolkit will be installed.

    Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder.

    Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page:

        <configuration>
            <system.web>
                <compilation debug="true" targetFramework="4.0" />
                <pages>
                    <controls>
                        <add tagPrefix="ajaxToolkit"
                             assembly="AjaxControlToolkit"
                             namespace="AjaxControlToolkit" />
                    </controls>
                </pages>
            </system.web>
        </configuration>

    You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (you won't get Intellisense for the Ajax Control Toolkit controls until you do a build).

    After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls).

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs"
            Inherits="WebApplication1.WebForm1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Rounded Corners</title>
            <style type="text/css">
                #pnl1 {
                    background-color: gray;
                    width: 200px;
                    color: White;
                    font: 14pt Verdana;
                }
                #pnl1_contents {
                    padding: 10px;
                }
            </style>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Panel ID="pnl1" runat="server">
                    <div id="pnl1_contents">
                        I have rounded corners!
                    </div>
                </asp:Panel>
                <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" />
                <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" />
            </div>
            </form>
        </body>
        </html>

    The page contains the following three controls:

    - Panel: the Panel control named pnl1 contains the content which appears with rounded corners.
    - ToolkitScriptManager: every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit.
    - RoundedCornersExtender: this Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the "roundiness" of the corners by modifying the Radius property.

    Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear.

    When you open the page in a browser, the contents of the Panel appear with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible. It works great with Internet Explorer, Opera, Firefox, Chrome, and Safari, even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6.

    Getting the Latest Version of the Ajax Control Toolkit

    The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work at fixing bugs and adding new features to the project. We plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.


  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit has just one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel.

Functional units don't know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
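The same wiring-up of mutually oblivious parts can be sketched in code. Here is a small illustration of my own (not from the original text) - two classes that do not reference each other, connected from the outside; the contract between them is described next:

using System;

// My illustration of the PoMO: neither class knows the other.
// "Control" only comes into existence through external wiring.
class NerveCell
{
    // Output port - where the transmitter goes is unknown to this cell.
    public event Action<string> TransmitterReleased;

    public void Fire()
    {
        if (TransmitterReleased != null)
            TransmitterReleased("ACh");
    }
}

class MuscleCell
{
    // Input port - where the transmitter comes from is unknown.
    public void Receive(string transmitter)
    {
        if (transmitter == "ACh")
            Console.WriteLine("contracting...");
    }
}

class Wiring
{
    static void Main()
    {
        var nerve = new NerveCell();
        var muscle = new MuscleCell();
        nerve.TransmitterReleased += muscle.Receive; // the "synapse"
        nerve.Fire();
    }
}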
Both cells just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Neither cell cares where the ACh is going or where it's coming from.

Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment.

My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”?

So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware.

Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job takes just 5 “software cells” (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”.

In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord) {
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen so as not to assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter {
    public void filter_stop_words(
        string word) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle “average words” (words not longer than a line). Here's the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation to check whether this works does not yet do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. I, at least, need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it's in the same ballpark? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design.

But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
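As a starting point for such a comparison, here is roughly how the increment-1 flow could read in C#. The original implementation was shown only as screenshots, so the following is a sketch of mine - function bodies and helper names are assumptions, not the original code:

using System;
using System.Collections.Generic;

// A sketch of the increment-1 process flow (average words only):
// (text) -> Extract words -> (words*) -> Assemble lines -> (lines*) -> Reformat -> (text)
static class WordWrapFlow
{
    public static string WordWrap(string text, int maxLineLength)
    {
        var words = Extract_words(text);
        var lines = Assemble_lines(words, maxLineLength);
        return Reformat(lines);
    }

    static string[] Extract_words(string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    static IEnumerable<string> Assemble_lines(string[] words, int maxLineLength)
    {
        var line = "";
        foreach (var word in words)
        {
            if (line == "")
                line = word;                       // first word on a line
            else if (line.Length + 1 + word.Length <= maxLineLength)
                line += " " + word;                // word still fits
            else
            {
                yield return line;                 // line is full
                line = word;
            }
        }
        if (line != "") yield return line;
    }

    static string Reformat(IEnumerable<string> lines)
    {
        return string.Join("\n", lines);
    }
}

Note how the increment-2 step (Split long words) could later be inserted between Extract_words and Assemble_lines without touching either.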
What's the easier, more straightforward way to clean code? And keep in mind: You ain't seen it all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook “The Incremental Architect's Napkin”. It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • Enabling Http caching and compression in IIS 7 for asp.net websites

    - by anil.kasalanati
    Caching – There are two ways to set HTTP caching:

    1. Use the max-age property
    2. Use the Expires header

    Doing the changes via the IIS console:

    1. Select the website for which you want to enable caching and then select HTTP Response Headers in the features tab.
    2. Select "Expire Web content" and, by changing the "After" setting, you generate the max-age property for the cache control.

    Then you can use a tool like Fiddler and see the 304 (Not Modified) response coming from the server when the browser revalidates cached items.

    Doing it the web.config way – we can add a staticContent section inside the system.webServer section:

    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
      </staticContent>
    </system.webServer>

    Compression – By default static compression is enabled on IIS 7.0, but the only script/style content that falls under that category is CSS, which is not enough for most websites using lots of JavaScript. If you thought that simply enabling dynamic compression would fix this, you are wrong, so please follow these steps.

    On some machines dynamic compression is not installed at all; these are the steps to install it:

    1. Open Server Manager
    2. Roles > Web Server (IIS)
    3. Role Services (scroll down) > Add Role Services
    4. Add the desired role (Web Server > Performance > Dynamic Content Compression)
    5. Next, Install, Wait… Done!

    Then enable it:

    1. Open Server Manager
    2. Roles > Web Server (IIS) > Internet Information Services (IIS) Manager
    3. Next pane: Sites > Default Web Site > Your Web Site
    4. Main pane: IIS > Compression

    Then comes the custom configuration for compressing JavaScript resources. The problem is that compression in IIS 7 works entirely on MIME types, and by default there is a mismatch in the MIME types. Go to the following location:

    C:\Windows\System32\inetsrv\config

    Open applicationHost.config. The mime map is as follows:

    <mimeMap fileExtension=".js" mimeType="application/javascript" />

    So the staticTypes section (under httpCompression) should be changed to include the matching MIME type:

    <add mimeType="application/javascript" enabled="true" />

    Doing it the web.config way – we can add the following section inside system.webServer (with the MIME-type fix in place, .js files are handled by static compression, so dynamic compression can stay off):

    <system.webServer>
      <urlCompression doDynamicCompression="false" doStaticCompression="true" />
    </system.webServer>

    More Information/References –
    - http://weblogs.asp.net/owscott/archive/2009/02/22/iis-7-compression-good-bad-how-much.aspx
    - http://www.west-wind.com/weblog/posts/98538.aspx
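    To verify both settings from the client side without a proxy tool, a small console check works too. This is my own sketch, not part of the original post; the URL is a placeholder:

    using System;
    using System.Net;

    // A quick client-side check (a sketch; the URL is hypothetical).
    // Prints the status code plus the compression and caching headers.
    class HeaderCheck
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://localhost/scripts/app.js");
            request.Headers[HttpRequestHeader.AcceptEncoding] = "gzip, deflate";

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status:           {0}", (int)response.StatusCode);
                Console.WriteLine("Content-Encoding: {0}", response.Headers[HttpResponseHeader.ContentEncoding]);
                Console.WriteLine("Cache-Control:    {0}", response.Headers[HttpResponseHeader.CacheControl]);
                Console.WriteLine("Expires:          {0}", response.Headers[HttpResponseHeader.Expires]);
            }
        }
    }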

    Read the article

  • Handy ASP.NET MVC 2 Extension Methods – Where am I?

    - by Bobby Diaz
    Have you ever needed to detect what part of the application is currently being viewed?  This might be a bigger issue if you write a lot of shared/partial views or custom display or editor templates.  Another scenario, which is the one I encountered when I first started down this path, is when you have some type of menu and you’d like to be able to determine which item represents the current page so you can highlight it in some way.  A simple example is the menu that is created as part of the default ASP.NET MVC 2 Application template.   <div id="menucontainer">       <ul id="menu">         <li><%= Html.ActionLink("Home", "Index", "Home") %></li>         <li><%= Html.ActionLink("About", "About", "Home") %></li>     </ul>   </div>   The part that got me at first, however, was the following entry in the default style sheet (Site.css):   ul#menu li.selected a {     background-color: #fff;     color: #000; }   I assumed that the .selected class would automatically get applied to the active menu item.  After trying a few different things, including the MvcContrib MenuBuilder, I decided to write my own extension methods so I would have more control over the output.  First, I needed a way to determine what view the user has navigated to based on the requested URL and route configuration.  Now, I am sure there are many ways to do this, but this is what I came up with:   public static class RequestExtensions {     public static bool IsCurrentRoute(this RequestContext context, String areaName,         String controllerName, params String[] actionNames)     {         var routeData = context.RouteData;         var routeArea = routeData.DataTokens["area"] as String;         var current = false;           if ( ((String.IsNullOrEmpty(routeArea) && String.IsNullOrEmpty(areaName)) ||               (routeArea == areaName)) &&              ((String.IsNullOrEmpty(controllerName)) ||               (routeData.GetRequiredString("controller") == controllerName)) &&              ((actionNames == null) ||                actionNames.Contains(routeData.GetRequiredString("action"))) )         {             current = true;         }           return current;     }       // additional overloads omitted... }   With that in place, I was able to write several UrlHelper methods that check if the supplied values map to the current view.   public static class UrlExtensions {     public static bool IsCurrent(this UrlHelper urlHelper, String areaName,         String controllerName, params String[] actionNames)     {         return urlHelper.RequestContext.IsCurrentRoute(areaName, controllerName, actionNames);     }       public static string Selected(this UrlHelper urlHelper, String areaName,         String controllerName, params String[] actionNames)     {         return urlHelper.IsCurrent(areaName, controllerName, actionNames)             ? "selected" : String.Empty;     }       // additional overloads omitted... }   Now I can re-work the original menu to utilize these new methods.  Note: be sure to import the proper namespace so the extension methods become available inside your views!   <div id="menucontainer">       <ul id="menu">         <li class="<%= Url.Selected(null, "Home", "Index") %>">             <%= Html.ActionLink("Home", "Index", "Home")%></li>           <li class="<%= Url.Selected(null, "Home", "About") %>">             <%= Html.ActionLink("About", "About", "Home")%></li>     </ul>   </div>   If we take it one step further, we can clean up the markup even more.  
Check out the Html.ActionMenuItem() extension method and the refined menu:   public static class HtmlExtensions {     public static MvcHtmlString ActionMenuItem(this HtmlHelper htmlHelper, String linkText,         String actionName, String controllerName)     {         var html = new StringBuilder("<li");           if ( htmlHelper.ViewContext.RequestContext                 .IsCurrentRoute(null, controllerName, actionName) )         {             html.Append(" class=\"selected\"");         }           html.Append(">")             .Append(htmlHelper.ActionLink(linkText, actionName, controllerName))             .Append("</li>");           return MvcHtmlString.Create(html.ToString());     }       // additional overloads omitted... }   <div id="menucontainer">       <ul id="menu">         <%= Html.ActionMenuItem("Home", "Index", "Home") %>         <%= Html.ActionMenuItem("About", "About", "Home") %>     </ul>   </div>   Which generates the following HTML:   <div id="menucontainer">       <ul id="menu">         <li class="selected"><a href="/">Home</a></li>         <li><a href="/Home/About">About</a></li>     </ul>   </div>     I have created a codepaste of these extension methods if you are interested in using them in your own projects.  Enjoy!
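    Since IsCurrentRoute() only depends on the RouteData, it is also easy to unit test without a full MVC pipeline. Here is a sketch of what such a test might look like - my example, assuming MSTest, that the extensions namespace is imported, and that the project targets .NET 4 (where RequestContext gained a parameterless constructor and settable properties for testability):

    using System.Web.Routing;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A test sketch for the routing check (assumptions noted above).
    [TestClass]
    public class RequestExtensionsTests
    {
        [TestMethod]
        public void IsCurrentRoute_MatchesControllerAndAction()
        {
            // Arrange: fake the route values MVC would have filled in.
            var routeData = new RouteData();
            routeData.Values["controller"] = "Home";
            routeData.Values["action"] = "Index";

            var context = new RequestContext { RouteData = routeData };

            // Act + Assert: only the matching controller/action pair is "current".
            Assert.IsTrue(context.IsCurrentRoute(null, "Home", "Index"));
            Assert.IsFalse(context.IsCurrentRoute(null, "Home", "About"));
        }
    }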

    Read the article

  • Winners of the Oracle Excellence Award—Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Did you get a chance to attend Oracle OpenWorld in San Francisco? With 60,000 attendees and hundreds of sessions to choose from—there was a lot going on. One of my favorite sessions was the Eco-Enterprise Awards and Sustainability Executive Panel Discussion. During this session, Jeff Henley, Oracle Chairman of the Board, announced the winners of the 2013 Oracle Excellence Award—Eco-Enterprise Innovation. It was an enlightening session as we heard several of the winning customers discuss the importance of sustainability to their company and how they’re using various Oracle products to help with their sustainability initiatives. The winning customers include: Centennial Coal, Indaver nv, Korea Enterprise Data, National Guard Health Affairs, Schneider National, SThree, Telstra International Group, Trex Company, University of Salzburg, Walmart, and Yeoncheon County Office. Stay tuned for additional blogs where you’ll learn more about these winning companies’ environmental best practices and why they won this award. Several partners were also recognized for helping these customers with their sustainability initiatives. Those partners include: CSS International, Daesang Information Technology, i4BI, Infosys, Knowledge Global, Solutions for Retails Brands Limited, and SysGen. During this same session, Jeff Henley also awarded Robert Kaplan, Director of Sustainability at Walmart, with Oracle’s Chief Sustainability Officer of the Year award. Robert was honored for helping improve Walmart’s supply chain efficiency with their Sustainability Hub. The Sustainability Hub, powered by Oracle Service Cloud, is a central location for Walmart suppliers, associates and business partners to learn, connect, inspire and drive sustainability through collaboration. While at Oracle OpenWorld, I also got a chance to hear Robert Kaplan discuss their Sustainability Hub during an Oracle OpenWorld Live taping.

    Read the article

  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website: I blinked and a feature was dropped: jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized…in retrospect, it was a serendipitous decision. This time however, I threw all caution to the wind and took a close look at JsRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow. Caveat: here is a message from the site: Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period. Fair enough, we've been warned. The first thing we need is some data to render. Below is some JSON-formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard-coded a variable:     var Golfers = [         { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },         { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },         { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }         ]; We also need some templates; I created two. Note: The script blocks have the id property set. The ids are needed so JsRender can locate the templates.     <script id="GolferTemplate1" type="text/html">         {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />     </script>       <script id="GolferTemplate2" type="text/html">         <tr>             <td>{{=ID}}</td>             <td><b>{{=Name}}</b></td>             <td><i>{{=Birthday}}</i> </td>         </tr>     </script> Including the correct JavaScript files is trivial:     <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>     <script src="Scripts/jsrender.js" type="text/javascript"></script> Of course we need some place to render the output:     <div id="GolferDiv"></div><br />     <table id="GolferTable"></table> The code is also trivial:     function Test()     {         $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));         $("#GolferTable").html($("#GolferTemplate2").render(Golfers));           // you can inspect the rendered html if there are problems.         // var html = $("#GolferTemplate2").render(Golfers);     } And here's what it looks like with some random CSS formatting that I had lying around. Not bad. I hope JsRender lasts longer than jQuery Templates. One final warning: a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks. I hope someone finds this useful. Steve Wellens CodeProject

    Read the article

  • Improve the Quality of ePub eBooks with Sigil

    - by Matthew Guay
    Would you like to correct errors in your ePub formatted eBooks, or even split them into chapters and create a Table of Contents?  Here’s how you can with the free program Sigil. eBooks are increasingly popular with the rise of eBook readers and reading apps on mobile devices.  We recently showed you how to convert a PDF eBook to ePub format, but as you may have noticed, sometimes the converted file had some glitches or odd formatting.  Additionally, many of the free ePub books available online from sources like Project Gutenberg do not include a table of contents.  Sigil is a free application for Windows, OS X, and Linux that lets you edit ePub files, so let’s look at how you can use it to improve your eBooks. Note: Sigil took several moments to open files in our tests, and froze momentarily when we maximized the window.  Sigil is currently pre-release software in active development, so we would expect the bugs to be worked out in future versions.  As usual, only install if you’re comfortable testing pre-release software.

    Getting Started

    Download Sigil (link below), making sure to select the correct version for your computer.  Run the installer, and select your preferred setup language when prompted. After a moment the installer will appear; set it up as normal. Launch Sigil when it’s finished installing.  It opens with a default blank ePub file, so you could actually start writing an eBook from scratch right here.

    Edit Your ePub eBooks

    Now you’re ready to edit your ePub books.  Click Open and browse to the file you want to edit. Now you can double-click any of the HTML or XHTML files on the left sidebar and edit them just like you would in Word. Or you can choose to view it in Code View and edit the actual HTML directly. The sidebar also gives you access to the other parts of the ePub file, such as Images and CSS styles. If your ePub file has a Table of Contents, you can edit it with Sigil as well.  Click Tools in the menu bar, and then select TOC Editor.  Strangely there is no way to create a new table of contents, but you can remove entries from an existing one.

    Convert TXT Files to ePub

    Many free eBooks online, especially older, out-of-copyright titles, are available in plain text format.  One problem with these files is that they usually use hard returns at the end of lines, so they don’t reflow to fill your screen efficiently. Sigil can easily convert these files to the more useful ePub format.  Open the text file in Sigil, and it will automatically reflow the text and convert it to ePub.  As you can see in the screenshot below, the text in the eBook does not have hard line breaks at the end of each line, and will be much more readable on mobile devices. Note that Sigil may take several moments opening the book, and may even become unresponsive while analyzing it. Now you can edit your eBook, split it into chapters, or just save it as is.  Either way, make sure to select Save As to save your book in ePub format.

    Conclusion

    As mentioned before, Sigil seems to run slow at times, especially when editing large eBooks.  But it’s still a nice solution to edit and extend your ePub eBooks, and even convert plain text eBooks to the nicer ePub format.  Now you can make your eBooks work just like you want, and read them on your favorite device! If you feel comfortable editing HTML files, check out our article on how to edit ePub eBooks with your favorite HTML editor.
    Links:
    - Download Sigil from Google Code
    - Download free ePub eBooks from Project Gutenberg

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
    For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing with major version releases coming out, what now every other month? Seriously that's retarded especially given the limited number of new features these releases bring, and the upgrade pain for plug-ins that the major version release causes. Anyway, after the FireFox updater bugged me long enough I finally gave in last week and updated to FireFox 6. Immediately after install I noticed terrible performance. Everything was running at a snail's pace with Web pages loading slowly and most content actually slowly 'painting' the page. A typical sign of content downloading slowly. However these are pages that should be mostly cached on my system and even repeated accesses ran just as slowly. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick.

    Why so slow Boss?

    While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking around more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN urls pop up repeatedly raised a flag with me. That stuff should be caching and it looked like each and every hit was reloading these scripts and various images over and over again. I fired up FireBug and sure enough I saw something like this on a repeated hit to my blog: Those two highlights are jQuery and the main CSS file for the site and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of the full HTTP requests returning 200 result codes. In short it looked like FireFox was not caching ANY content at all and constantly reloading all page resources. No wonder things were running dog slow. Once I realized what the problem was I took a look in the about:config settings and lo and behold a bunch of the cache settings were set to not cache: In my case ALL the main cache flags were set to false for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to their defaults reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache. I try not to muck with the about:config settings much (other than turning off the IPV6 option) but when there are problems access to these features can be really nice. However, I treat this as a last resort so it took me quite some time before I started looking through ALL the settings. This takes a while when you don't know exactly what you're looking for. If Web load performance is slow it's a good idea to check the cache settings.
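    For reference, the main flags to check are the likes of these (the original screenshot is gone; this is the usual set of cache prefs from FireFox installs of that era, with their normal defaults shown - the exact set may vary by version):

    browser.cache.disk.enable = true
    browser.cache.memory.enable = true
    browser.cache.offline.enable = true
    network.http.use-cache = true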
    I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config and while in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in FireFox

    Read the article

< Previous Page | 431 432 433 434 435 436 437 438 439 440 441 442  | Next Page >