Search Results

Search found 20586 results on 824 pages for 'virtual methods'.


  • Postfix Submission port issue

    - by RevSpot
    I have setup postfix+mailman on my debian server and i have an issue with postfix submission port. My ISP blocks SMTP on port 25 to prevent *spams and i must to use submission port (587). I have uncomment the following line from master.cf (/etc/postfix/) but nothing happens. submission inet n - - - - smtpd This is my mail logs file when i try to invite a user to mailman list Nov 6 00:35:34 myhostname postfix/qmgr[1763]: C90BF1060D: from=<[email protected]>, size=1743, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: DF54B10608: from=<[email protected]>, size=488, nrcpt=1 (queue active) Nov 6 00:35:34 myhostname postfix/qmgr[1763]: 80F0D10609: from=<[email protected]>, size=483, nrcpt=1 (queue active) Nov 6 00:35:55 myhostname postfix/smtp[2269]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2270]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:35:55 myhostname postfix/smtp[2271]: connect to gmail-smtp-in.l.google.com[173.194.70.27]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2269]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2270]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:16 myhostname postfix/smtp[2271]: connect to alt1.gmail-smtp-in.l.google.com[74.125.143.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2269]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname postfix/smtp[2270]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:37 myhostname4 postfix/smtp[2271]: connect to alt2.gmail-smtp-in.l.google.com[74.125.141.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2269]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2270]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:36:58 myhostname postfix/smtp[2271]: connect to alt3.gmail-smtp-in.l.google.com[173.194.64.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2269]: C90BF1060D: to=<[email protected]>, relay=none, delay=23711, delays=23606/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out Nov 6 00:37:19 myhostname postfix/smtp[2270]: DF54B10608: to=<[email protected]>, relay=none, delay=23882, delays=23777/0.03/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) Nov 6 00:37:19 myhostname postfix/smtp[2271]: 80F0D10609: to=<[email protected]>, relay=none, delay=23875, delays=23770/0.04/105/0, dsn=4.4.1, status=deferred (connect to alt4.gmail-smtp-in.l.google.com[74.125.142.26]:25: Connection timed out) main.cf smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no append_dot_mydomain = no readme_directory = no 
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mail.mydomain.com, localhost.mydomain.com,localhost relayhost = relay_domains = $mydestination, mail.mydomain.com relay_recipient_maps = hash:/var/lib/mailman/data/virtual-mailman transport_maps = hash:/etc/postfix/transport mailman_destination_recipient_limit = 1 mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all local_recipient_maps = master.cf smtp inet n - - - - smtpd submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache # # ==================================================================== # # maildrop. See the Postfix MAILDROP_README file for details. # Also specify in main.cf: maildrop_destination_recipient_limit=1 # maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # ==================================================================== # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
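
    Note that the submission entry in master.cf only changes how clients hand mail to this server; the timeouts in the log above are outbound deliveries from Postfix to Gmail on port 25, which the ISP is still blocking. The usual workaround is to relay outgoing mail through a smarthost that accepts authenticated submission on 587. A minimal main.cf sketch, assuming the ISP (or another provider) offers such a relay at the hypothetical host smtp.isp.example with SASL credentials:

        relayhost = [smtp.isp.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may

    /etc/postfix/sasl_passwd would contain a line such as "[smtp.isp.example]:587 username:password"; run postmap /etc/postfix/sasl_passwd and postfix reload afterwards.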

    Read the article

  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it, and in the hope that someone else stumbles across this problem too, I'm hoping someone can help me find the solution here. I'm getting a "configure: error: newly created file is older than distributed files!" error when I try to run the Ruby Enterprise installer, as shown below, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out the cause of this problem? [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer Welcome to the Ruby Enterprise Edition installer This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10. Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up. You can expect this from the installation process: 1. Ruby Enterprise Edition will be compiled and optimized for speed for this system. 2. Ruby on Rails will be installed for Ruby Enterprise Edition. 3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby. Press Enter to continue, or Ctrl-C to abort. Checking for required software... * C compiler... found at /usr/bin/gcc * C++ compiler... found at /usr/bin/g++ * The 'make' tool... found at /usr/bin/make * Zlib development headers... found * OpenSSL development headers... found * GNU Readline development headers... found -------------------------------------------- Target directory Where would you like to install Ruby Enterprise Edition to? (All Ruby Enterprise Edition files will be put inside that directory.) [/opt/ruby-enterprise] : -------------------------------------------- Compiling and optimizing the memory allocator for Ruby Enterprise Edition In the mean time, feel free to grab a cup of coffee. ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... configure: error: newly created file is older than distributed files! Check your system clock This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical, and up to date. I've also tried running this after updating the time with an NTP client, to no avail. I tried this after reading this post from someone having a similar problem. [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date Tue Apr 27 08:09:05 BST 2010 The other approach I've tried is to touch the top-level files in the build folder, as suggested here, but this hasn't worked either (and to be honest, I'm not sure why it would have worked either) [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/* I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error: "newly created file is older than distributed files!", at line 2214: { echo "$as_me:$LINENO: checking whether build environment is sane" >&5 echo $ECHO_N "checking whether build environment is sane... 
$ECHO_C" >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t $srcdir/configure conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5 echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;} { (exit 1); exit 1; }; } fi ### PROBLEM LINE #### # this line is the problem line - this is returned true, sometimes it isn't and I can't # see a pattern that that determines when this will test will pass or not. test "$2" = conftest.file ) then # Ok. : else { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5 echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;} { (exit 1); exit 1; }; } fi the thing that makes this really frustrating is that this script works sometimes, when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks
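
    The failing check compares a file that configure has just created against the timestamps of the unpacked sources, so it trips whenever the guest clock is behind those timestamps. That fits a VirtualBox guest right after boot, before any time synchronisation has run, and would also explain why the build succeeds once the VM has been up for a while. A rough sketch of the usual remedies on CentOS 5, assuming the ntp package is available:

        # one-shot correction, then keep the clock disciplined across reboots
        sudo ntpdate pool.ntp.org
        sudo chkconfig ntpd on
        sudo service ntpd start

        # re-stamp the whole source tree; touching only the top level, as above,
        # misses the nested distributed files that configure actually compares against
        find ruby-enterprise-1.8.7-2009.10 -print0 | xargs -0 sudo touch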

    Read the article

  • 500 Internal Server Error with PHP application

    - by James
    I have written a PHP application using Windows and XAMPP. I've been trying to run it on Ubuntu 10.10 with Lighttpd 1.4.26. Parts of the application work fine, but whenever I try to log in, I get a 500 - Internal Server Error page. The only thing that shows up in /var/log/lighttpd/error.log is 2011-02-25 13:43:13: (mod_fastcgi.c.2582) unexpected end-of-file (perhaps the fastcgi process died): pid: 1169 socket: unix:/tmp/php.socket-0 2011-02-25 13:43:13: (mod_fastcgi.c.3367) response not received, request sent: 1596 on socket: unix:/tmp/php.socket-0 for /~denton/customer-facing-portal/index.php?, closing connection If I had any output whatsoever from PHP, this would be a lot easier to debug. Any ideas on how to get some? Here is my /etc/lighttpd/lighttpd.conf file: # Debian lighttpd configuration file # ############ Options you really have to take care of #################### ## modules to load server.modules = ( "mod_alias", "mod_compress", # "mod_rewrite", # "mod_redirect", # "mod_usertrack", # "mod_expire", # "mod_flv_streaming", # "mod_evasive", "mod_setenv" ) ## a static document-root, for virtual-hosting take look at the ## server.virtual-* options server.document-root = "/var/www/" ## where to upload files to, purged daily. server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" ## files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" ) ## Use the "Content-Type" extended attribute to obtain mime type if possible # mimetype.use-xattr = "enable" ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## Use ipv6 only if available. 
(disabled for while, check #560837) #include_shell "/usr/share/lighttpd/use-ipv6.pl" ## bind to port (default: 80) # server.port = 81 ## bind to localhost only (default: all interfaces) ## server.bind = "localhost" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts server.pid-file = "/var/run/lighttpd.pid" ## ## Format: <errorfile-prefix><status>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/var/www/" ## virtual directory listings dir-listing.encoding = "utf-8" server.dir-listing = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't change) server.username = "www-data" ## change gid to <gid> (default: don't change) server.groupname = "www-data" #### compress module compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ("text/plain", "text/html", "application/x-javascript", "text/css") #### url handling modules (rewrite, redirect, access) # url.rewrite = ( "^/$" => "/server-status" ) # url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) #### expire module # expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "access plus 1 seconds 2 minutes") #### external configuration files ## mimetype mapping include_shell "/usr/share/lighttpd/create-mime.assign.pl" ## load enabled configuration files, ## read /etc/lighttpd/conf-available/README first include_shell "/usr/share/lighttpd/include-conf-enabled.pl" ## Set environment variables setenv.add-environment = ( "DB_URL__DEMO" => "192.168.1.231", "DB_NAME_DEMO" => "demo", "DB_USER_DEMO" => "user", "DB_PASS_DEMO" => "password", "DB_AGENCY_DEMO" => "demo" ) Here is my /etc/php5/cgi/php.ini file (sans 1641 lines of comments): [PHP] register_long_arrays = Off short_open_tag = Off engine = On short_open_tag = Off asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED & ~E_STRICT display_errors = On display_startup_errors = On log_errors = On log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = On html_errors = On variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off cgi.fix_pathinfo=1 file_uploads = On upload_max_filesize = 2M max_file_uploads = 20 allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] date.timezone = "America/Chicago" [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Pdo_mysql] pdo_mysql.cache_size = 2000 pdo_mysql.default_socket= [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On 
odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [Interbase] ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1 ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S" [MySQL] mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000 mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [mysqlnd] mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off [OCI8] [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5 [sysvshm] [ldap] ldap.max_links = -1 [mcrypt] [dba] Update: here is /etc/lighttpd/conf-enabled/15-fastcgi-php.conf As far as I know, it's just the default config file the Ubuntu package installed. ## FastCGI programs have the same functionality as CGI programs, ## but are considerably faster through lower interpreter startup ## time and socketed communication ## ## Documentation: /usr/share/doc/lighttpd-doc/fastcgi.txt.gz ## http://redmine.lighttpd.net/projects/lighttpd/wiki/Docs:ConfigurationOptions#mod_fastcgi-fastcgi ## Start an FastCGI server for php (needs the php5-cgi package) fastcgi.server += ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "idle-timeout" => 20, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) )
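
    The mod_fastcgi messages only say that the php-cgi worker died mid-request, so the PHP side has to be made to talk. With display_errors already on, a blank 500 often means the process is crashing (segfault, memory limit) rather than raising a catchable error, so /var/log/syslog and dmesg are worth checking for segfault or out-of-memory lines as well. Beyond that, giving PHP its own log file usually surfaces whatever it manages to report before dying; a sketch of the lines to add in /etc/php5/cgi/php.ini (the path is only an example):

        ; log_errors is already On above; give it somewhere to write
        error_log = /var/log/php5-cgi.log

    The file must exist and be writable by www-data (for example, touch it and chown it to www-data), and lighttpd should be restarted afterwards so the FastCGI children pick up the change.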

    Read the article

  • SSH not working over IPSec tunnel (Strongswan)

    - by PattPatel
    I configured a small network on a cloud virtual machine. This virtual machine has a static IP address assigned to eth0 interface that I'll call $EXTIP. mydomain.com points to $EXTIP. Inside, I have some linux containers, that get their ip through DHCP in the Subnet 10.0.0.0/24 (i called the virtual interface nat ). They run some services that can be reached through DNAT. Then I wanted to connect to these containers through an IPSec tunnel, so I configured StrongSwan. ipsec.conf: conn %default dpdaction=none rekey=no conn remote keyexchange=ikev2 ike=######## left=[$EXTIP] leftsubnet=10.0.1.0/24,10.0.0.0/24 leftauth=pubkey lefthostaccess=yes leftcert=########.pem leftfirewall=yes leftid="#########" right=%any rightsourceip=10.0.1.0/24 rightauth=######## rightid=%any rightsendcert=never eap_identity=%any auto=add type=tunnel Everything works fine, IPSec clients get IPs of the 10.0.1.0/24 subnet and can reach the containers subnet. My problem is that I'm not able to get SSH connections over the tunnel. It simply does not work, ssh client does not produce any output. Sniffing with tcpdump gives: tcpdump: 09:50:29.648206 ARP, Request who-has 10.0.0.1 tell mydomain.com, length 28 09:50:29.648246 ARP, Reply 10.0.0.1 is-at 00:ff:aa:00:00:01 (oui Unknown), length 28 09:50:29.648253 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [S], seq 4007849772, win 29200, options [mss 1460,sackOK,TS val 1151153 ecr 0,nop,wscale 7], length 0 09:50:29.648296 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [S.], seq 2809522632, ack 4007849773, win 14480, options [mss 1460,sackOK,TS val 11482992 ecr 1151153,nop,wscale 6], length 0 09:50:29.677225 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 2809522633, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 0 09:50:29.679370 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 0:23, ack 1, win 229, options [nop,nop,TS val 1151162 ecr 11482992], length 23 09:50:29.679403 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483002 ecr 1151162], length 0 09:50:29.684337 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1:32, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 31 09:50:29.685471 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1448 09:50:29.685519 IP mydomain.com > 10.0.0.1: ICMP mydomain.com unreachable - need to frag (mtu 1422), length 556 09:50:29.685567 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 32:1402, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 1370 09:50:29.685572 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], seq 1402:1480, ack 24, win 227, options [nop,nop,TS val 11483003 ecr 1151162], length 78 09:50:29.714601 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 32, win 229, options [nop,nop,TS val 1151173 ecr 11483003], length 0 09:50:29.714642 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483012 ecr 1151173], length 120 09:50:29.723649 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [P.], seq 1393:1959, ack 32, win 229, options [nop,nop,TS val 1151174 ecr 11483003], length 566 09:50:29.723677 IP 10.0.0.1.ssh > 10.0.1.2.54869: Flags [.], ack 24, win 227, options [nop,nop,TS val 11483015 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 0 09:50:29.725688 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1480, win 251, options [nop,nop,TS val 1151177 ecr 11483003], length 0 09:50:29.952394 IP 10.0.0.1.ssh > 
10.0.1.2.54869: Flags [P.], seq 1480:1600, ack 24, win 227, options [nop,nop,TS val 11483084 ecr 1151173,nop,nop,sack 1 {1394:1960}], length 120 09:50:29.981056 IP mydomain.com.54869 > 10.0.0.1.ssh: Flags [.], ack 1600, win 251, options [nop,nop,TS val 1151253 ecr 11483084,nop,nop,sack 1 {1480:1600}], length 0 If you need it this is my iptables configuration file: iptables: *filter :INPUT ACCEPT [144:9669] :FORWARD DROP [0:0] :OUTPUT ACCEPT [97:15649] :interfacce-trusted - [0:0] :porte-trusted - [0:0] -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -j interfacce-trusted -A FORWARD -j porte-trusted -A FORWARD -j REJECT --reject-with icmp-host-unreachable -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A FORWARD -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT -A interfacce-trusted -i nat -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 80 -j ACCEPT -A porte-trusted -d 10.0.0.1/32 -p tcp -m tcp --dport 443 -j ACCEPT -A porte-trusted -d 10.0.0.3/32 -p tcp -m tcp --dport 1234 -j ACCEPT COMMIT *nat :PREROUTING ACCEPT [10:600] :INPUT ACCEPT [10:600] :OUTPUT ACCEPT [4:268] :POSTROUTING ACCEPT [18:1108] -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443 -A PREROUTING -d [$EXTIP] -p tcp -m tcp --dport 8069 -j DNAT --to-destination 10.0.0.3:1234 -A POSTROUTING -s 10.0.0.0/24 -o eth0 -m policy --dir out --pol ipsec -j ACCEPT -A POSTROUTING -s 10.0.1.0/24 -o nat -j MASQUERADE -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE COMMIT Probably I'm missing something stupid... Thanks in advance for helping :))
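
    One detail in the capture stands out: the gateway sends "ICMP ... unreachable - need to frag (mtu 1422)" right after the first full-size segment of the SSH key exchange, which points at a path-MTU problem across the IPsec tunnel (ESP overhead shrinks the usable MTU, so the small interactive packets get through while full-size segments stall). A common fix is to clamp the TCP MSS on forwarded traffic; a sketch that could be tried alongside the rules above:

        # rewrite the MSS of forwarded SYNs so TCP never builds segments
        # larger than the tunnel's path MTU
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
                 -j TCPMSS --clamp-mss-to-pmtu

    If path-MTU discovery is unreliable end to end, a fixed value such as --set-mss 1360 can be used in place of --clamp-mss-to-pmtu.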

    Read the article

  • Citrix Xen VM's lose networking

    - by Ash
    My client has a XenServer 6.0.2 installation with 2 Window Server 2008 R2 virtual machines. Whenever the virtual machines are rebooted they lose their IP settings (IP address, subnet, gateway). Each time after a reboot I need to login to each VM via XenCenter and re-apply the required static IP settings. This causes issues with connected iSCSI drives within each VM - drives need to be reconnected after each reboot. For example, a network adapter has the following settings pre-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 IPv4 Address. . . . . . . . . . . : 10.101.0.101(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.101.0.10 DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Post-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 Autoconfiguration IPv4 Address. . : 169.254.153.174(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Under XenCenter -- Virtual Network Interfaces, each adapter is set to a static MAC address (i.e. "Use this MAC address"). I have tried the following commands within one VM but this had no effect: netsh winsock reset catalog netsh int ip reset Can someone please help?
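
    Losing static settings on every reboot usually means Windows is treating the PV adapter as a brand-new NIC each time (typically a XenServer Tools or MAC-address issue), so the first things to verify are that the tools were reinstalled after the move to 6.0.2 and that the MAC really stays fixed across reboots. As a stopgap while the root cause is chased down, the settings can be reapplied automatically with a small netsh script run as a startup task; a sketch using the addresses from the output above (adjust the adapter name to match the VM):

        rem reapply-ip.cmd - schedule at startup while troubleshooting
        netsh interface ipv4 set address name="Local Area Connection" static 10.101.0.101 255.255.255.0 10.101.0.10
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 10.101.0.100 primary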

    Read the article

  • non-static variable cannot be referenced from a static context (java)

    - by Greg
    I ask that you ignore all logic... I was taught poorly at first, so I still don't understand everything about static crap and it's killing me. My error is with every single variable that I declare and then try to use later inside my methods... I get the "non-static variable cannot be referenced from a static context" error. I can simply put all the rough coding of my methods inside my cases, and it works, but then I cannot use recursion... What I really need is someone to help with the syntax and point me in the right direction on how to have my methods recognize my variables at the top... like compareCount, etc. Thanks
    public class MyProgram7 { public static void main (String[]args) throws IOException{ Scanner scan = new Scanner(System.in); int compareCount = 0; int low = 0; int high = 0; int mid = 0; int key = 0; Scanner temp; int[]list; String menu, outputString; int option = 1; boolean found = false; // Prompt the user to select an option menu = "\n\t1 Reads file" + "\n\t2 Finds a specific number in the list" + "\n\t3 Prints how many comparisons were needed" + "\n\t0 Quit\n\n\n"; System.out.println(menu); System.out.print("\tEnter your selection: "); option = scan.nextInt(); // Keep reading data until the user enters 0 while (option != 0) { switch (option) { case 1: readFile(); break; case 2: findKey(list,low,high,key); break; case 3: printCount(); break; default: outputString = "\nInvalid Selection\n"; System.out.println(outputString); break; }//end of switch System.out.println(menu); System.out.print("\tEnter your selection: "); option = scan.nextInt(); }//end of while }//end of main public static void readFile() { int i = 0; temp = new Scanner(new File("CS1302_Data7_2010.txt")); while(temp.hasNext()) { list[i]= temp.nextInt(); i++; }//end of while temp.close(); System.out.println("File Found..."); }//end of readFile() public static void findKey() { while(found!=true) { while(key < 100 || key > 999) { System.out.println("Please enter the number you would like to search for? ie 350: "); key = scan.nextInt(); }//end of inside while //base if (low <= high) { mid = ((low+high)/2); if (key == list[mid]) { found = true; compareCount++; }//end of inside if }//end of outside if else if (key < list[mid]) { compareCount++; high = mid-1; findKey(list,low,high,key); }//end of else if else { compareCount++; low = mid+1; findKey(list,low,high,key); }//end of else }//end of outside while }//end of findKey() public static void printCount() { System.out.println("Total number of comparisons is: " + compareCount); }//end of printCount }//end of class
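
    Because main is static, it can only see static members; the variables declared inside main are local to it and invisible to readFile, findKey and printCount (and the call findKey(list,low,high,key) does not even match the parameterless declaration). The usual fix for this style of program is to promote the shared state to static fields of the class. A minimal sketch of that reshuffle, keeping the names above (the array capacity is an assumption):

        public class MyProgram7 {
            // shared state lives on the class, so every static method can see it
            static Scanner scan = new Scanner(System.in);
            static int compareCount = 0, low = 0, high = 0, mid = 0, key = 0;
            static int[] list = new int[1000];   // capacity is a guess; size it to the data file
            static boolean found = false;

            public static void main(String[] args) throws IOException {
                // ... menu loop exactly as before, but the case 2 branch can now
                // simply call findKey(); no arguments are needed
            }

            public static void readFile() throws IOException {
                Scanner temp = new Scanner(new File("CS1302_Data7_2010.txt"));
                int i = 0;
                while (temp.hasNext()) { list[i++] = temp.nextInt(); }
                temp.close();
                high = i - 1;   // the search range covers what was actually read
            }

            // findKey() and printCount() keep their bodies and now compile, because
            // compareCount, list, low, high, key and found resolve to the static fields
        }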

    Read the article

  • How do you automap List<float> or float[] with Fluent NHibernate?

    - by Tom Bushell
    Having successfully gotten a sample program working, I'm now starting to do Real Work with Fluent NHibernate - trying to use Automapping on my project's class heirarchy. It's a scientific instrumentation application, and the classes I'm mapping have several properties that are arrays of floats e.g. private float[] _rawY; public virtual float[] RawY { get { return _rawY; } set { _rawY = value; } } These arrays can contain a maximum of 500 values. I didn't expect Automapping to work on arrays, but tried it anyway, with some success at first. Each array was auto mapped to a BLOB (using SQLite), which seemed like a viable solution. The first problem came when I tried to call SaveOrUpdate on the objects containing the arrays - I got "No persister for float[]" exceptions. So my next thought was to convert all my arrays into ILists e.g. public virtual IList<float> RawY { get; set; } But now I get: NHibernate.MappingException: Association references unmapped class: System.Single Since Automapping can deal with lists of complex objects, it never occured to me it would not be able to map lists of basic types. But after doing some Googling for a solution, this seems to be the case. Some people seem to have solved the problem, but the sample code I saw requires more knowledge of NHibernate than I have right now - I didn't understand it. Questions: 1. How can I make this work with Automapping? 2. Also, is it better to use arrays or lists for this application? I can modify my app to use either if necessary (though I prefer lists). Edit: I've studied the code in Mapping Collection of Strings, and I see there is test code in the source that sets up an IList of strings, e.g. public virtual IList<string> ListOfSimpleChildren { get; set; } [Test] public void CanSetAsElement() { new MappingTester<OneToManyTarget>() .ForMapping(m => m.HasMany(x => x.ListOfSimpleChildren).Element("columnName")) .Element("class/bag/element").Exists(); } so this must be possible using pure Automapping, but I've had zero luck getting anything to work, probably because I don't have the requisite knowlege of manually mapping with NHibernate. Starting to think I'm going to have to hack this (by encoding the array of floats as a single string, or creating a class that contains a single float which I then aggregate into my lists), unless someone can tell me how to do it properly. End Edit Here's my CreateSessionFactory method, if that helps formulate a reply... private static ISessionFactory CreateSessionFactory() { ISessionFactory sessionFactory = null; const string autoMapExportDir = "AutoMapExport"; if( !Directory.Exists(autoMapExportDir) ) Directory.CreateDirectory(autoMapExportDir); try { var autoPersistenceModel = AutoMap.AssemblyOf<DlsAppOverlordExportRunData>() .Where(t => t.Namespace == "DlsAppAutomapped") .Conventions.Add( DefaultCascade.All() ) ; sessionFactory = Fluently.Configure() .Database(SQLiteConfiguration.Standard .UsingFile(DbFile) .ShowSql() ) .Mappings(m => m.AutoMappings.Add(autoPersistenceModel) .ExportTo(autoMapExportDir) ) .ExposeConfiguration(BuildSchema) .BuildSessionFactory() ; } catch (Exception e) { Debug.WriteLine(e); } return sessionFactory; }
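
    Building on the HasMany(...).Element(...) test quoted above, one approach that keeps automapping for everything else is to add an override for just the float collections when the AutoPersistenceModel is built, so each IList<float> is mapped as a bag of simple elements rather than an association to System.Single. A sketch under the assumption that the class holding RawY is called Spectrum (substitute the real type), with RawY left as IList<float>, against the Fluent NHibernate 1.x API:

        var model = AutoMap.AssemblyOf<DlsAppOverlordExportRunData>()
            .Where(t => t.Namespace == "DlsAppAutomapped");

        model.Conventions.Add(DefaultCascade.All());

        // RawY is stored in a collection table of float values, one row per element
        model.Override<Spectrum>(map =>
            map.HasMany(x => x.RawY).Element("RawY"));

    The model is then passed to .Mappings(m => m.AutoMappings.Add(model)) exactly as in the CreateSessionFactory method above.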

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM byte-code format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses for a specific C++ class. So the C++ testing engine would only see C++ objects while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call which could then be linked into the testing engine. Thus the linking stage would go in two directions, calls from the scripting language into the testing engine objects to retrieve pricing information and test state information and calls from the testing engine of methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary: 1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process? 2) Can I link in code using that the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?

    Read the article

  • Problem creating a session for Facebook login

    - by khoyendra
    import com.facebook.api.FacebookRestClient; import org.apache.commons.httpclient.HttpClient; import org.apache.commons.httpclient.HttpState; import org.apache.commons.httpclient.NameValuePair; import org.apache.commons.httpclient.methods.GetMethod; import org.apache.commons.httpclient.methods.PostMethod; import org.apache.commons.httpclient.params.HttpClientParams; public class FaceLogin { public FaceLogin(){ getUserID("xxxxxx", "xxxxxx"); } private static void getUserID(String email, String password){ String session = null; try { HttpClient http = new HttpClient(); http.setParams(new HttpClientParams()); http.setState(new HttpState()); String api_key = "API KEY"; String secret = "SECRETS"; FacebookRestClient client = new FacebookRestClient(api_key, secret); client.setIsDesktop(true); String token = client.auth_createToken(); final String loginId = "http://www.facebook.com/login.php"; GetMethod get = new GetMethod(loginId + "?api_key=" + api_key + "&v=1.0&auth_token=" +token); System.out.println("Get="+get); http.executeMethod(get); PostMethod post = new PostMethod(loginId); post.addParameter(new NameValuePair("api_key", api_key)); post.addParameter(new NameValuePair("v", "1.0")); post.addParameter(new NameValuePair("auth_token", token)); post.addParameter(new NameValuePair("fbconnect","true")); post.addParameter(new NameValuePair("return_session","true")); post.addParameter(new NameValuePair("session_key_only","true")); post.addParameter(new NameValuePair("req_perms","read_stream,publish_stream")); post.addParameter(new NameValuePair("lsd","8HYdi")); post.addParameter(new NameValuePair("locale","en_US")); post.addParameter(new NameValuePair("persistent","1")); post.addParameter(new NameValuePair("email", email)); post.addParameter(new NameValuePair("pass", password)); System.out.println("Token ="+token); int postStatus = http.executeMethod(post); System.out.println("Response : " + postStatus); session = client.auth_getSession(token); // Here I am getting error System.out.println("Session string: " + session); long userid = client.users_getLoggedInUser(); System.out.println("User Id is : " + userid); } catch (Exception e) { e.printStackTrace(); } } public static void main(String k[]) { FacebookLogin facebookLoginObj=new FacebookLogin(); } } here i have to find some error when i create session error is run: Get=org.apache.commons.httpclient.methods.GetMethod@17ec9f7 Token =0c578e0692ae04327cd29a4beede48e3 Jun 8, 2010 7:04:48 PM org.apache.commons.httpclient.HttpMethodBase processResponseHeaders WARNING: Cookie rejected: "$Version=0; $Domain=deleted; $Path=/; $Domain=.facebook.com". Cookie name may not start with $ Response : 200 Jun 8, 2010 7:04:48 PM org.apache.commons.httpclient.HttpMethodBase processResponseHeaders WARNING: Cookie rejected: "$Version=0; $Path=deleted; $Path=/; $Domain=.facebook.com". 
Cookie name may not start with $ Facebook returns error code 100 - v -> 1.0 - auth_token -> 0c578e0692ae04327cd29a4beede48e3 - method -> facebook.auth.getSession com.facebook.api.FacebookException: Invalid parameter - call_id -> 1276004088734 - api_key -> f7cb1e48c383ef599da9021fc4dec322 - sig -> 8b7f0a5394b25551ab3cf1487ac0da00 at com.facebook.api.FacebookRestClient.callMethod(FacebookRestClient.java:828) at com.facebook.api.FacebookRestClient.callMethod(FacebookRestClient.java:606) at com.facebook.api.FacebookRestClient.callMethod(FacebookRestClient.java:606) at com.facebook.api.FacebookRestClient.auth_getSession(FacebookRestClient.java:1891) at facebookcrawler.FacebookLogin.getUserID(FacebookLogin.java:81) at facebookcrawler.FacebookLogin.<init>(FacebookLogin.java:24) at facebookcrawler.FaceLogin.main(FaceLogin.java:80) BUILD SUCCESSFUL (total time: 7 seconds)

    Read the article

  • JAXB marshals XML differently to OutputStream vs. StringWriter

    - by Andy
    I apologize if this has been answered, but the search terms I have been using (i.e. JAXB @XmlAttribute condensed or JAXB XML marshal to String different results) aren't coming up with anything. I am using JAXB to un/marshal objects annotated with @XmlElement and @XmlAttribute annotations. I have a formatter class which provides two methods -- one wraps the marshal method and accepts the object to marshal and an OutputStream, the other just accepts the object and returns the XML output as a String. Unfortunately, these methods do not provide the same output for the same objects. When marshaling to a file, simple object fields internally marked with @XmlAttribute are printed as: <element value="VALUE"></element> while when marshaling to a String, they are: <element value="VALUE"/> I would prefer the second format for both cases, but I am curious as to how to control the difference, and would settle for them being the same regardless. I even created one static marshaller that both methods use to eliminate different instance values. The formatting code follows: /** Marker interface for classes which are listed in jaxb.index */ public interface Marshalable {} /** Local exception class */ public class XMLMarshalException extends BaseException {} /** Class which un/marshals objects to XML */ public class XmlFormatter { private static Marshaller marshaller = null; private static Unmarshaller unmarshaller = null; static { try { JAXBContext context = JAXBContext.newInstance("path.to.package"); marshaller = context.createMarshaller(); marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true); marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8"); unmarshaller = context.createUnmarshaller(); } catch (JAXBException e) { throw new RuntimeException("There was a problem creating a JAXBContext object for formatting the object to XML."); } } public void marshal(Marshalable obj, OutputStream os) throws XMLMarshalException { try { marshaller.marshal(obj, os); } catch (JAXBException jaxbe) { throw new XMLMarshalException(jaxbe); } } public String marshalToString(Marshalable obj) throws XMLMarshalException { try { StringWriter sw = new StringWriter(); marshaller.marshal(obj, sw); } catch (JAXBException jaxbe) { throw new XMLMarshalException(jaxbe); } } } /** Example data */ @XmlType @XmlAccessorType(XmlAccessType.FIELD) public class Data { @XmlAttribute(name = value) private String internalString; } /** Example POJO */ @XmlType @XmlRootElement(namespace = "project/schema") @XmlAccessorType(XmlAccessType.FIELD) public class Container implements Marshalable { @XmlElement(required = false, nillable = true) private int number; @XmlElement(required = false, nillable = true) private String word; @XmlElement(required = false, nillable = true) private Data data; } The result of calling marshal(container, new FileOutputStream("output.xml")) and marshalToString(container) are as follows: Output to file <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <ns2:container xmlns:ns2="project/schema"> <number>1</number> <word>stackoverflow</word> <data value="This is internal"></data> </ns2:container> and Output to String <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <ns2:container xmlns:ns2="project/schema"> <number>1</number> <word>stackoverflow</word> <data value="This is internal"/> </ns2:container>
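
    Two observations that may help. First, as quoted, marshalToString builds a StringWriter but never returns its contents, so it needs a return sw.toString(); to be usable at all. Second, an easy way to make both call sites produce identical XML is to funnel the String case through the same OutputStream path the file case uses. A sketch of marshalToString rewritten that way (ByteArrayOutputStream comes from java.io, Charset from java.nio.charset):

        public String marshalToString(Marshalable obj) throws XMLMarshalException {
            try {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                marshaller.marshal(obj, baos);   // identical code path to the OutputStream/file case
                return new String(baos.toByteArray(), Charset.forName("UTF-8"));  // matches JAXB_ENCODING
            } catch (JAXBException jaxbe) {
                throw new XMLMarshalException(jaxbe);
            }
        }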

    Read the article

  • C#: Parallel forms, multithreading and "applications in application"

    - by Harry
    First, what I need is - n WebBrowser-s, each in its own window doing its own job. The user should be able to see them all, or just one of them (or none), and to execute commands on each one. There is a main form, without a browser, this one contains control panel for my application. The key feautre is, each browser logs on to secured web page and it needs to stay logged in as long as possible. Well, I've done it, but I'm afraid something is wrong with my approach. The question is: Is code below valid, or rather a nasty hack which can cause problems: internal class SessionList : List<Session> { public SessionList(Server main) { MyRecords.ForEach(record => { var st = new System.Threading.Thread((data) => { var s = new Session(main, data as MyRecord); this.Add(s); Application.Run(s); Application.ExitThread(); }); st.SetApartmentState(System.Threading.ApartmentState.STA); st.Start(record); }); } // some other uninteresting methods here... } What's going on here? Session inherits from Form, so it creates a form, puts WebBrowser into it, and has methods to operate on websites. WebBrowser requires to be run in STA thread, so we provide one for each browser. The most interesting part of it is Application.Run(s). It makes the newly created forms alive and interactive. The next Application.ExitThread() is called after browser window is closed and its controls disposed. Main application stays alive to perform the rest of the cleanup job. When user select "Exit" or "Shutdown" option - first the browser threads are ended, so Application.ExitThread() is called. It all works, but everywhere I can read about "main GUI thread" - and here - I've created many GUI threads. I handle communication between main form and my new forms (sessions) with thread-safe methods using Invoke(). It all works, so is it right or is it wrong? Is everything right with using Application.Run() more than once in one application? :) An ugly hack or a normal practice? This code dies if I start a WebBrowser from the session form thread. It beats me why. It works however if I start WebBrowser (by changing its Url property) from any other thread. I'd like to know more what is really happening in such application. But most of all - I'd like to know if my idea of "applications in application" is OK. I'm not sure what exactly does Application.Run() do. Without it forms created in new threads were dead unresponsive. How is it possible I can call Application.Run() many times? It seems to do exactly what it should, but it seems a little undocumented feature to me. I'm almost sure, that the crashes are caused by WebBrowser component itself (since it's not completely "managed" and "native"). But maybe it's something else.
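
    One message loop per UI thread is in fact a supported pattern: each Application.Run call pumps messages for its own thread and returns when that thread's form closes, so there is no single "main GUI thread" beyond the rule that every control may only be touched from the thread that created it. Cross-thread access (including setting a WebBrowser's Url) has to be marshalled with Invoke or BeginInvoke. A trimmed sketch of the per-session thread with such a marshalled call; Session, Server and MyRecord are the question's types, and Browser is an assumed property exposing the form's WebBrowser:

        // Creates a Session form on its own STA thread (WebBrowser requires STA)
        // and returns it once it has been constructed on that thread.
        static Session StartSession(Server main, MyRecord record)
        {
            Session session = null;
            var ready = new ManualResetEvent(false);
            var thread = new Thread(() =>
            {
                session = new Session(main, record);  // created on the thread that will own it
                ready.Set();
                Application.Run(session);             // per-thread message loop; returns when the form closes
            });
            thread.SetApartmentState(ApartmentState.STA);
            thread.IsBackground = true;
            thread.Start();
            ready.WaitOne();
            return session;
        }

        // From the control panel (or any other thread), always marshal onto the form's thread:
        //   session.BeginInvoke((Action)(() => session.Browser.Navigate("https://example.com/login")));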

    Read the article

  • Object validator - is this good design?

    - by neo2862
    I'm working on a project where the API methods I write have to return different "views" of domain objects, like this: namespace View.Product { public class SearchResult : View { public string Name { get; set; } public decimal Price { get; set; } } public class Profile : View { public string Name { get; set; } public decimal Price { get; set; } [UseValidationRuleset("FreeText")] public string Description { get; set; } [SuppressValidation] public string Comment { get; set; } } } These are also the arguments of setter methods in the API which have to be validated before storing them in the DB. I wrote an object validator that lets the user define validation rulesets in an XML file and checks if an object conforms to those rules: [Validatable] public class View { [SuppressValidation] public ValidationError[] ValidationErrors { get { return Validator.Validate(this); } } } public static class Validator { private static Dictionary<string, Ruleset> Rulesets; static Validator() { // read rulesets from xml } public static ValidationError[] Validate(object obj) { // check if obj is decorated with ValidatableAttribute // if not, return an empty array (successful validation) // iterate over the properties of obj // - if the property is decorated with SuppressValidationAttribute, // continue // - if it is decorated with UseValidationRulesetAttribute, // use the ruleset specified to call // Validate(object value, string rulesetName, string FieldName) // - otherwise, get the name of the property using reflection and // use that as the ruleset name } private static List<ValidationError> Validate(object obj, string fieldName, string rulesetName) { // check if the ruleset exists, if not, throw exception // call the ruleset's Validate method and return the results } } public class Ruleset { public Type Type { get; set; } public Rule[] Rules { get; set; } public List<ValidationError> Validate(object property, string propertyName) { // check if property is of type Type // if not, throw exception // iterate over the Rules and call their Validate methods // return a list of their return values } } public abstract class Rule { public Type Type { get; protected set; } public abstract ValidationError Validate(object value, string propertyName); } public class StringRegexRule : Rule { public string Regex { get; set; } public StringRegexRule() { Type = typeof(string); } public override ValidationError Validate(object value, string propertyName) { // see if Regex matches value and return // null or a ValidationError } } Phew... Thanks for reading all of this. I've already implemented it and it works nicely, and I'm planning to extend it to validate the contents of IEnumerable fields and other fields that are Validatable. What I'm particularly concerned about is that if no ruleset is specified, the validator tries to use the name of the property as the ruleset name. (If you don't want that behavior, you can use [SuppressValidation].) This makes the code much less cluttered (no need to use [UseValidationRuleset("something")] on every single property) but it somehow doesn't feel right. I can't decide if it's awful or awesome. What do you think? Any suggestions on the other parts of this design are welcome too. I'm not very experienced and I'm grateful for any help. Also, is "Validatable" a good name? To me, it sounds pretty weird but I'm not a native English speaker.

    Read the article

  • Make interchangeable class types via pointer casting only, without having to allocate any new objects?

    - by HostileFork
    UPDATE: I do appreciate "don't want that, want this instead" suggestions. They are useful, especially when provided in context of the motivating scenario. Still...regardless of goodness/badness, I've become curious to find a hard-and-fast "yes that can be done legally in C++11" vs "no it is not possible to do something like that". I want to "alias" an object pointer as another type, for the sole purpose of adding some helper methods. The alias cannot add data members to the underlying class (in fact, the more I can prevent that from happening the better!) All aliases are equally applicable to any object of this type...it's just helpful if the type system can hint which alias is likely the most appropriate. There should be no information about any specific alias that is ever encoded in the underlying object. Hence, I feel like you should be able to "cheat" the type system and just let it be an annotation...checked at compile time, but ultimately irrelevant to the runtime casting. Something along these lines: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Under the hood, the factory method is actually making a NodeBase class, and then using a similar reinterpret_cast to return it as a Node<AccessorFoo>*. The easy way to avoid this is to make these lightweight classes that wrap nodes and are passed around by value. Thus you don't need casting, just Accessor classes that take the node handle to wrap in their constructor: AccessorFoo foo (NodeBase::createViaFactory()); AccessorBar bar (foo.getNode()); But if I don't have to pay for all that, I don't want to. That would involve--for instance--making a special accessor type for each sort of wrapped pointer (AccessorFooShared, AccessorFooUnique, AccessorFooWeak, etc.) Having these typed pointers being aliased for one single pointer-based object identity is preferable, and provides a nice orthogonality. So back to that original question: Node<AccessorFoo>* fooPtr = Node<AccessorFoo>::createViaFactory(); Node<AccessorBar>* barPtr = reinterpret_cast< Node<AccessorBar>* >(fooPtr); Seems like there would be some way to do this that might be ugly but not "break the rules". According to ISO14882:2011(e) 5.2.10-7: An object pointer can be explicitly converted to an object pointer of a different type.70 When a prvalue v of type "pointer to T1" is converted to the type "pointer to cv T2", the result is static_cast(static_cast(v)) if both T1 and T2 are standard-layout types (3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. Converting a prvalue of type "pointer to T1" to the type "pointer to T2" (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value. The result of any other such pointer conversion is unspecified. 
Drilling into the definition of a "standard-layout class", we find: has no non-static data members of type non-standard-layout-class (or array of such types) or reference, and has no virtual functions (10.3) and no virtual base classes (10.1), and has the same access control (clause 11) for all non-static data members, and has no non-standard-layout base classes, and either has no non-static data member in the most-derived class and at most one base class with non-static data members, or has no base classes with non-static data members, and has no base classes of the same type as the first non-static data member. Sounds like working with something like this would tie my hands a bit with no virtual methods in the accessors or the node. Yet C++11 apparently has std::is_standard_layout to keep things checked. Can this be done safely? Appears to work in gcc-4.7, but I'd like to be sure I'm not invoking undefined behavior.
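
    Whatever the verdict on the aliasing itself, the standard-layout preconditions quoted above can at least be pinned down at compile time, so that a later edit (say, someone adding a virtual function to an accessor, or a data member to Node) becomes a build error instead of silent undefined behaviour. A small C++11 sketch using the question's hypothetical types:

        #include <type_traits>

        // Guards for the reinterpret_cast-based aliasing scheme.
        static_assert(std::is_standard_layout<Node<AccessorFoo>>::value,
                      "Node<AccessorFoo> must stay standard-layout for pointer aliasing");
        static_assert(std::is_standard_layout<Node<AccessorBar>>::value,
                      "Node<AccessorBar> must stay standard-layout for pointer aliasing");
        static_assert(sizeof(Node<AccessorFoo>) == sizeof(Node<AccessorBar>)
                      && alignof(Node<AccessorFoo>) == alignof(Node<AccessorBar>),
                      "aliased Node instantiations must have identical size and alignment");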

    Read the article

  • Linked Lists in Java - Help with assignment

    - by doron2010
    I have been trying to solve this assignment all day, please help me. I'm completely lost. Representation of a string in linked lists In every intersection in the list there will be 3 fields : The letter itself. The number of times it appears consecutively. A pointer to the next intersection in the list. The following class CharNode represents a intersection in the list : public class CharNode { private char _data; private int _value; private charNode _next; public CharNode (char c, int val, charNode n) { _data = c; _value = val; _next = n; } public charNode getNext() { return _next; } public void setNext (charNode node) { _next = node; } public int getValue() { return _value; } public void setValue (int v) { value = v; } public char getData() { return _data; } public void setData (char c) { _data = c; } } The class StringList represents the whole list : public class StringList { private charNode _head; public StringList() { _head = null; } public StringList (CharNode node) { _head = node; } } Add methods to the class StringList according to the details : (Pay attention, these are methods from the class String and we want to fulfill them by the representation of a string by a list as explained above) public char charAt (int i) - returns the char in the place i in the string. Assume that the value of i is in the right range. public StringList concat (String str) - returns a string that consists of the string that it is operated on and in its end the string "str" is concatenated. public int indexOf (int ch) - returns the index in the string it is operated on of the first appeareance of the char "ch". If the char "ch" doesn't appear in the string, returns -1. If the value of fromIndex isn't in the range, returns -1. public int indexOf (int ch, int fromIndex) - returns the index in the string it is operated on of the first appeareance of the char "ch", as the search begins in the index "fromIndex". If the char "ch" doesn't appear in the string, returns -1. public boolean equals (String str) - returns true if the string that it is operated on is equal to the string str. Otherwise returns false. This method must be written in recursion, without using loops at all. public int compareTo (String str) - compares between the string that the method is operated on to the string "str" that is in the parameter. The method returns 0 if the strings are equal. If the string in the object is smaller lexicographic from the string "str" in the paramater, a negative number will be returned. And if the string in the object is bigger lexicographic from the string "str", a positive number will be returned. public StringList substring (int i) - returns the list of the substring that starts in the place i in the string on which it operates. Meaning, the sub-string from the place i until the end of the string. Assume the value of i is in the right range. public StringList substring (int i, int j) - returns the list of the substring that begins in the place i and ends in the place j (not included) in the string it operates on. Assume the values of i, j are in the right range. public int length() - will return the length of the string on which it operates. Pay attention to all the possible error cases. Write what is the time complexity and space complexity of every method that you wrote. Make sure the methods you wrote are effective. It is NOT allowed to use ready classes of Java. It is NOT allowed to move to string and use string operations.
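
    The run-length representation means every one of these methods walks the node chain while tracking how many characters have been accounted for so far. charAt is the simplest illustration of that pattern; a sketch written as a method of StringList, using the CharNode accessors defined above (the trailing throw is only defensive, since the assignment promises i is in range):

        // Returns the character at index i (0-based). O(n) in the number of nodes, O(1) space.
        public char charAt(int i) {
            CharNode node = _head;
            int position = 0;                         // index of the first character stored in node
            while (node != null) {
                if (i < position + node.getValue())   // the run [position, position + value) covers i
                    return node.getData();
                position += node.getValue();
                node = node.getNext();
            }
            throw new IndexOutOfBoundsException("index " + i);
        }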

    Read the article

  • Async networking + threading problem

    - by randallmeadows
    I kick off a network request, assuming no login credentials are required to talk to the destination server. If they are required, then I get an authentication challenge, at which point I display a view requesting said credentials from the user. When they are supplied, I restart the network request, using those credentials. That's all fine and dandy, as long as I only do one request at a time. But I'm not, typically. When both requests are kicked off, I get the first challenge, and present the prompt (using -presentModalViewController:). Then the 2nd challenge comes in. And I crash when it tries to display the 2nd prompt. I have the bulk of this wrapped in an @synchronized() block, but this has no effect because these delegate methods are all being called on the same (main) thread. The docs say the delegate methods are called on the same thread in which the connection was started. OK, no problem; I'll just write a method that I run on a background thread using -performSelectorInBackground: NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request delegate:self startImmediately:NO]; [connections addObject:connection]; [self performSelectorInBackground:@selector(startConnection:) withObject:connection]; [connection release]; - (void)startConnection:(NSURLConnection *)connection { NSAutoreleasePool *pool = [NSAutoreleasePool new]; [connection scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [connection start]; [pool drain]; } which should put every network request, and its callbacks, on its own thread, and then my @synchronized() blocks will take effect. The docs for -initWithRequest:... state "Messages to the delegate will be sent on the thread that calls this method. By default, for the connection to work correctly the calling thread’s run loop must be operating in the default run loop mode." Ok, I'm doing that. They also state "If you pass NO [for startImmediately], you must schedule the connection in a run loop before starting it." OK, I'm doing that, too. Furthermore, the docs for NSRunLoop state "Each NSThread object, including the application’s main thread, has an NSRunLoop object automatically created for it as needed. If you need to access the current thread’s run loop, you do so with the class method currentRunLoop." I'm assuming this applies to the background thread created by the call -performSelectorInBackground... (which does appear to be the case, when I execute 'po [NSClassFromString(@"NSRunLoop") currentRunLoop]' in the -startConnection: method). The -startConnection: method is indeed being called. But after kicking off the connection, I now never get any callbacks on it. None of the -connectionDid… delegate methods. (I even tried explicitly starting the thread's run loop, but that made no difference; I've used threads like this before, and I've never had to start the run loop manually before--but I'm now grasping at straws...) I think I've come up with a workaround such that I only handle one request at a time, but it's kludgy and I'd like to do this the Right Way. But, what am I missing here? Thanks! randy

    Read the article

  • Problems with passing an anonymous temporary function-object to a templatized constructor.

    - by Akanksh
    I am trying to attach a function-object to be called on destruction of a templatized class. However, I cannot seem to pass the function-object as a temporary. The warning I get (if I comment out the line xi2.data = 15;) is: warning C4930: 'X<T> xi2(writer (__cdecl *)(void))': prototyped function not called (was a variable definition intended?) with [ T=int ] and if I try to use the constructed object, I get a compilation error saying: error C2228: left of '.data' must have class/struct/union I apologize for the lengthy piece of code, but I think all the components need to be visible to assess the situation. template<typename T> struct Base { virtual void run( T& ){} virtual ~Base(){} }; template<typename T, typename D> struct Derived : public Base<T> { virtual void run( T& t ) { D d; d(t); } }; template<typename T> struct X { template<typename R> X(const R& r) { std::cout << "X(R)" << std::endl; ptr = new Derived<T,R>(); } X():ptr(0) { std::cout << "X()" << std::endl; } ~X() { if(ptr) { ptr->run(data); delete ptr; } else { std::cout << "no ptr" << std::endl; } } Base<T>* ptr; T data; }; struct writer { template<typename T> void operator()( const T& i ) { std::cout << "T : " << i << std::endl; } }; int main() { { writer w; X<int> xi2(w); //X<int> xi2(writer()); //This does not work! xi2.data = 15; } return 0; }; The reason I am trying this out is so that I can "somehow" attach function-object types to the objects without keeping an instance of the function-object itself within the class. Thus when I create an object of class X, I do not have to keep an object of class writer within it, but only a pointer to Base<T> (I'm not sure if I need the <T> here, but for now it's there). The problem is that I seem to have to create an object of writer and then pass it to the constructor of X rather than call it like X<int> xi(writer());. I might be missing something completely stupid and obvious here, any suggestions?

    Read the article

  • How to debug KVO

    - by user8472
    In my program I use KVO manually to observe changes to values of object properties. I receive an EXC_BAD_ACCESS signal at the following line of code inside a custom setter: [self willChangeValueForKey:@"mykey"]; The weird thing is that this happens when a factory method calls the custom setter and there should not be any observers around. I do not know how to debug this situation. Update: The way to list all registered observers is observationInfo. It turned out that there was indeed an object listed that points to an invalid address. However, I have no idea at all how it got there. Update 2: Apparently, the same object and method callback can be registered several times for a given object - resulting in identical entries in the observed object's observationInfo. When removing the registration, only one of these entries is removed. This behavior is a little counter-intuitive (and it certainly is a bug in my program to add multiple entries at all), but this does not explain how spurious observers can mysteriously show up in freshly allocated objects (unless there is some caching/reuse going on that I am unaware of). Modified question: How can I figure out WHERE and WHEN an object got registered as an observer? Update 3: Specific sample code. ContentObj is a class that has a dictionary as a property named mykey. It overrides: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"mykey"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } A couple of properties have getters and setters as follows: - (CGFloat)value { return [[[self mykey] objectForKey:@"value"] floatValue]; } - (void)setValue:(CGFloat)aValue { [self willChangeValueForKey:@"mykey"]; [[self mykey] setObject:[NSNumber numberWithFloat:aValue] forKey:@"value"]; [self didChangeValueForKey:@"mykey"]; } The container class has a property contents of class NSMutableArray which holds instances of class ContentObj. It has a couple of methods that manually handle registrations: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"contents"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } - (void)observeContent:(ContentObj *)cObj { [cObj addObserver:self forKeyPath:@"mykey" options:0 context:NULL]; } - (void)removeObserveContent:(ContentObj *)cObj { [cObj removeObserver:self forKeyPath:@"mykey"]; } - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (([keyPath isEqualToString:@"mykey"]) && ([object isKindOfClass:[ContentObj class]])) { [self willChangeValueForKey:@"contents"]; [self didChangeValueForKey:@"contents"]; } } There are several methods in the container class that modify contents. They look as follows: - (void)addContent:(ContentObj *)cObj { [self willChangeValueForKey:@"contents"]; [self observeContent:cObj]; [[self contents] addObject:cObj]; [self didChangeValueForKey:@"contents"]; } And a couple of others that provide similar functionality to the array. They all work by adding/removing themselves as observers. Obviously, anything that results in multiple registrations is a bug and could sit somewhere hidden in these methods. My question is about strategies for debugging this kind of situation.
Alternatively, please feel free to suggest a different strategy for implementing this kind of notification/observer pattern.

    Read the article

  • How to reserve public API to internal usage in .NET?

    - by mark
    Dear ladies and sirs. Let me first present the case, which will explain my question. This is going to be a bit long, so I apologize in advance :-). I have objects and collections which should support the Merge API (it is my custom API, the signature of which is immaterial for this question). This API must be internal, meaning only my framework should be allowed to invoke it. However, derived types should be able to override the basic implementation. The natural way to implement this pattern, as I see it, is this: The Merge API is declared as part of some internal interface, let us say IMergeable. Because the interface is internal, derived types would not be able to implement it directly. Rather, they must inherit it from a common base type. So a common base type is introduced, which implements the IMergeable interface explicitly, with the interface methods delegating to respective protected virtual methods that provide the default implementation. This way the API is only callable by my framework, but derived types may override the default implementation. The following code snippet demonstrates the concept: internal interface IMergeable { void Merge(object obj); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } void IMergeable.Merge(object obj) { Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } All is fine, provided a single common base type suffices, which is usually true for non-collection types. The thing is that collections must be mergeable as well, and collections do not play nicely with the presented concept, because developers do not develop collections from scratch. There are predefined implementations - observable, filtered, compound, read-only, remove-only, ordered, god-knows-what, ... They may be developed from scratch in-house, but once finished, they serve a wide range of products and should never be tailored to some specific product. Which means that either: they do not implement the IMergeable interface at all, because it is internal to some product; or the scope of the IMergeable interface is raised to public and the API becomes open and callable by all. Let us refer to these collections as standard collections. Anyway, the first option screws my framework, because now each possible standard collection type has to be paired with a respective framework version augmenting the standard one with the IMergeable interface implementation - this is so bad, I am not even considering it. The second option breaks the framework as well, because the IMergeable interface should be internal for a reason (whatever it is) and now this interface has to be open to all. So what to do? My solution is this: make IMergeable a public API, but add an extra parameter to the Merge method; I call it a security token. The interface implementation may check that the token references some internal object which is never exposed to the outside. If this is the case, then the method was called from within the framework; otherwise some outside API consumer attempted to invoke it, and the implementation can blow up with a SecurityException.
Here is the modified code snippet demonstrating this concept: internal static class InternalApi { internal static readonly object Token = new object(); } public interface IMergeable { void Merge(object obj, object token); } public class BaseFrameworkObject : IMergeable { protected virtual void Merge(object obj) { // The default implementation. } public void Merge(object obj, object token) { if (!object.ReferenceEquals(token, InternalApi.Token)) { throw new SecurityException("bla bla bla"); } Merge(obj); } } public class SomeThirdPartyObject : BaseFrameworkObject { protected override void Merge(object obj) { // A derived type implementation. } } Of course, this is less explicit than having an internally scoped interface, and the check is moved from compile time to run time, yet this is the best I could come up with. Now, I have a gut feeling that there is a better way to solve the problem I have presented. I do not know, maybe using some standard Code Access Security features? I have only a vague understanding of it, but could the LinkDemand attribute somehow be related to it? Anyway, I would like to hear other opinions. Thanks.
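    To make the intended call pattern concrete, here is a small usage sketch of my own (not part of the original post), assuming the IMergeable, InternalApi and SomeThirdPartyObject types from the snippet above are in scope. Framework code that can see InternalApi.Token merges successfully, while an outside caller supplying any other token is rejected at run time:

    using System;
    using System.Security;

    // Usage sketch only: relies on the IMergeable/InternalApi/SomeThirdPartyObject
    // types defined in the post above.
    internal static class MergeDemo
    {
        static void Main()
        {
            IMergeable target = new SomeThirdPartyObject();

            // Framework-side call: the real token is visible here, so the call
            // passes the ReferenceEquals check and dispatches to the derived
            // type's protected override.
            target.Merge(new object(), InternalApi.Token);

            try
            {
                // Outside-the-framework call: any other token object fails the
                // check and the implementation throws the SecurityException.
                target.Merge(new object(), new object());
            }
            catch (SecurityException ex)
            {
                Console.WriteLine("Rejected: " + ex.Message);
            }
        }
    }

    The guard is only as strong as the secrecy of the token object, which is exactly the trade-off the post acknowledges by moving the check from compile time to run time.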

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk an object hierarchy. In the past I’ve had a number of ReflectionUtils that used string-based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The old Way - Reflection Here’s a really contrived example, but assume for a second that you’d want to dynamically retrieve Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right, I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax.
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types: in other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax, but is completely dynamic.
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp.
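    As a quick self-contained illustration of the same idea (my own sketch, not from the article; the Customer and Address types here are made up), the snippet below starts from a plain object reference and walks a property chain with dynamic instead of Reflection, and also shows that the same syntax covers members that only exist at run time via ExpandoObject:

    using System;
    using System.Dynamic;

    class Address  { public string City;     public Address(string city)      { City = city; } }
    class Customer { public Address Address; public Customer(Address address) { Address = address; } }

    class DynamicWalkDemo
    {
        static void Main()
        {
            // Pretend this reference arrived as an untyped object from some API.
            object untyped = new Customer(new Address("Hood River"));

            dynamic cust = untyped;           // no cast, no GetProperty() plumbing
            string city = cust.Address.City;  // members resolved at run time
            Console.WriteLine(city);          // prints: Hood River

            // The same syntax works for members created at run time.
            dynamic bag = new ExpandoObject();
            bag.Title = "Dynamic instead of Reflection";
            Console.WriteLine(bag.Title);
        }
    }

    If a member is missing, the binder throws a RuntimeBinderException at the call site, which is the run-time price of losing compile-time checking.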

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream, which, if a filter is present, is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Response.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encodes the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss – Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low-level HTTP clients on other platforms don’t support GZip out of the box either. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable.
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustomString() HttpApplication method in your global.asax file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that uses Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> to use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that looks like this: Lovely, isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output stripped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller.
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7.
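    One small thing that helps when putting this in place (my own addition, not from the article; the URL below is a placeholder): a tiny console client that requests a compressed page and lets HttpWebRequest decompress it, which is a quick way to confirm that the Content-Encoding header and the filter stay in sync:

    using System;
    using System.IO;
    using System.Net;

    class GzipSmokeTest
    {
        static void Main()
        {
            // Placeholder URL - point this at a page that calls WebUtils.GZipEncodePage().
            var request = (HttpWebRequest)WebRequest.Create("http://localhost/myapp/default.aspx");

            // Sends Accept-Encoding and transparently decompresses the response,
            // mimicking a GZip-capable browser.
            request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string body = reader.ReadToEnd();
                Console.WriteLine("Status: " + (int)response.StatusCode);
                Console.WriteLine("Decoded body length: " + body.Length);
                // If the body comes back as binary garbage instead of HTML, the
                // Content-Encoding header and the compressed body are out of sync -
                // the mismatch described above.
            }
        }
    }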

    Read the article

  • EMERGENCY - Major Problems After Perl Module Installed via WHM

    - by Russell C.
    I attempted to install the perl module Net::Twitter::Role::API::Lists using WHM and after doing so my whole site came down. It seems that something that was updated with the install isn't functioning correctly and since our website is written in Perl, none of our site scripts will run. In almost 8 years of working with Perl I've never had any issues arise after installing a perl module so I have no idea how to even start troubleshooting. The error I see when trying to compile any of our Perl scripts is below. I'd appreciate any advice on what might be wrong and steps on how I can go about resolving it. Thanks in advance for your help! Attribute (+type_constraint) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0x9ae35b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6
MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Counter has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4d7718)', 'Moose::Meta::Attribute=HASH(0xa4df4b4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4dfb38)', 'Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa3dbdec)', 'Moose::Meta::Class=HASH(0xa4d7718)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'undef', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4d7718)', 'MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Counter') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 10 require MooseX/AttributeHelpers/Counter.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 23 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Counter.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4ea48c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+default) of class MooseX::AttributeHelpers::Number has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4f778c)', 'Moose::Meta::Attribute=HASH(0xa4f7804)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa4f8014)', 'Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa38b764)', 'Moose::Meta::Class=HASH(0xa4f778c)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'undef', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4f778c)', 'MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::Number') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 9 require MooseX/AttributeHelpers/Number.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 24 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3/Client/Bucket.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3.pm line 111 Net::Amazon::S3::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require Net/Amazon/S3.pm called at /home/atrails/www/cgi-bin/main.pm line 1633 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 require main.pm called at /home/atrails/cron/meetup.pl line 20 main::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/Number.pm line 0 Attribute (+type_constraint) of class MooseX::AttributeHelpers::String has no associated methods (did you mean to provide an "is" argument?) 
at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Attribute.pm line 551 Moose::Meta::Attribute::_check_associated_methods('Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Class.pm line 303 Moose::Meta::Class::add_attribute('Moose::Meta::Class=HASH(0xa4fd5c4)', 'Moose::Meta::Attribute=HASH(0xa4fdae0)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 142 Moose::Meta::Role::Application::ToClass::apply_attributes('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application.pm line 72 Moose::Meta::Role::Application::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role/Application/ToClass.pm line 31 Moose::Meta::Role::Application::ToClass::apply('Moose::Meta::Role::Application::ToClass=HASH(0xa5002d8)', 'Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Meta/Role.pm line 419 Moose::Meta::Role::apply('Moose::Meta::Role=HASH(0xa42a690)', 'Moose::Meta::Class=HASH(0xa4fd5c4)') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 132 Moose::Util::_apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'undef', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Util.pm line 86 Moose::Util::apply_all_roles('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose.pm line 57 Moose::with('Moose::Meta::Class=HASH(0xa4fd5c4)', 'MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi/Moose/Exporter.pm line 293 Moose::with('MooseX::AttributeHelpers::Trait::String') called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 10 require MooseX/AttributeHelpers/String.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers.pm line 25 MooseX::AttributeHelpers::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/AttributeHelpers.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute/Role/Meta/Class.pm line 6 MooseX::ClassAttribute::Role::Meta::Class::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute/Role/Meta/Class.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/ClassAttribute.pm line 11 MooseX::ClassAttribute::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/ClassAttribute.pm called at /usr/lib/perl5/site_perl/5.8.8/Olson/Abbreviations.pm line 6 Olson::Abbreviations::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at 
/usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require Olson/Abbreviations.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTime/ButMaintained.pm line 10 MooseX::Types::DateTime::ButMaintained::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTime/ButMaintained.pm called at /usr/lib/perl5/site_perl/5.8.8/MooseX/Types/DateTimeX.pm line 9 MooseX::Types::DateTimeX::BEGIN() called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 eval {...} called at /usr/lib/perl5/site_perl/5.8.8/MooseX/AttributeHelpers/String.pm line 0 require MooseX/Types/DateTimeX.pm called at /usr/lib/perl5/site_perl/5.8.8/Net/Amazon/S3/Client/Bucket.pm line 5 Net::Amazon::S3::Client::Bucket::BEGIN() called at /usr/lib/perl5/site_per

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
Here’s what the manifest file looks like for the JavaScript Reference application:

    CACHE MANIFEST
    # v30
    Default.aspx

    # Standard Script Libraries
    Scripts/jquery-1.4.4.min.js
    Scripts/jquery-ui-1.8.7.custom.min.js
    Scripts/jquery.tmpl.min.js
    Scripts/json2.js

    # App Scripts
    App_Scripts/combine.js
    App_Scripts/combine.debug.js

    # Content (CSS & images)
    Content/default.css
    Content/logo.png
    Content/ui-lightness/jquery-ui-1.8.7.custom.css
    Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
    Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
    Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
    Content/ui-lightness/images/ui-icons_222222_256x240.png
    Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
    Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
    Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
    Content/ui-lightness/images/ui-icons_ffffff_256x240.png
    Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
    Content/browsers/c8.png
    Content/browsers/es3.png
    Content/browsers/es5.png
    Content/browsers/ff3_6.png
    Content/browsers/ie8.png
    Content/browsers/ie9.png
    Content/browsers/sf5.png

    NETWORK:
    Services/EntryService.svc
    http://superexpert.com/resources/JavaScriptReference/

A cache manifest always starts with the line of text CACHE MANIFEST. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, the jQuery library, the jQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section in the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference.

There are two important things to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest, then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file), then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file, then you need to modify the comment to be # v31 in order for the new file to be downloaded.

When Are Updated Files Downloaded?

When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background.
This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache, and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application (a small client-side sketch of these calls follows this excerpt). My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience.

Offline Web Applications versus Local Storage

Be careful not to confuse the HTML5 Offline Web Application feature with the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache.

Creating a Manifest File in an ASP.NET Application

A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

Manifest.txt – This text file contains the actual manifest.
Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

Here’s the code for the generic handler:

    using System.Web;

    namespace JavaScriptReference {
        public class Manifest : IHttpHandler {
            public void ProcessRequest(HttpContext context) {
                context.Response.ContentType = "text/cache-manifest";
                context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
            }

            public bool IsReusable {
                get { return false; }
            }
        }
    }

The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

    <html manifest="Manifest.ashx">

Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested.

Seeing the Offline Web Application in Action

The experience of using an HTML5 Offline Web Application is different in different browsers. When you first open the JavaScript Reference application with Firefox, you get a warning that asks whether or not you want to use the application offline. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice: Chrome and Safari create an offline cache automatically. If you click the Allow button, Firefox downloads all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar. You can view the actual items being cached by clicking the List Cache Entries link. The Offline Web Application experience is different in the case of Google Chrome.
You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache. From there you can also view the status of the Application Cache. In the screenshot in the original post, the status is UNCACHED, which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are defined in the HTML5 Offline Web Application standard:

UNCACHED – The Application Cache has not been initialized.
IDLE – The Application Cache is not currently being updated.
CHECKING – The Application Cache is being fetched and checked for updates.
DOWNLOADING – The files in the Application Cache are being updated.
UPDATEREADY – There is a new version of the Application.
OBSOLETE – The contents of the Application Cache are obsolete.

Summary

In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.
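
    Editor’s note: the excerpt above discusses ApplicationCache.update() and ApplicationCache.swapCache() without showing client-side code. As a minimal sketch (not from the original article), here is how those calls might be wired up in the browser. It is written in TypeScript against the now-deprecated window.applicationCache API, using an any cast because recent DOM typings may no longer declare it:

    // Sketch: reacting to HTML5 Application Cache events (assumes a browser
    // that still implements the deprecated window.applicationCache API).
    const appCache = (window as any).applicationCache;

    if (appCache) {
      appCache.addEventListener('updateready', () => {
        // The browser downloaded a new set of files because the manifest changed
        // (for example, the "# v30" comment was bumped to "# v31").
        if (appCache.status === appCache.UPDATEREADY) {
          appCache.swapCache();        // switch to the freshly downloaded version
          window.location.reload();    // reload so the page actually runs it
        }
      });

      appCache.addEventListener('error', () => {
        // Fired when the manifest cannot be fetched, e.g. while offline.
        console.log('Application cache update failed (possibly offline).');
      });

      // Ask the browser to re-check the manifest without waiting for a reload.
      appCache.update();
    }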
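
    Editor’s note: the six status names listed above correspond to the numeric applicationCache.status codes defined by the HTML5 specification. A small hypothetical helper (again TypeScript, not part of the article) makes the mapping explicit:

    // Sketch: translating applicationCache.status codes into the names used above.
    const APPCACHE_STATUS: { [code: number]: string } = {
      0: 'UNCACHED',
      1: 'IDLE',
      2: 'CHECKING',
      3: 'DOWNLOADING',
      4: 'UPDATEREADY',
      5: 'OBSOLETE',
    };

    function describeAppCacheStatus(): string {
      const appCache = (window as any).applicationCache;
      return appCache ? (APPCACHE_STATUS[appCache.status] ?? 'UNKNOWN') : 'NOT SUPPORTED';
    }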

    Read the article

  • CodePlex Daily Summary for Monday, April 12, 2010

    CodePlex Daily Summary for Monday, April 12, 2010New Projects3 Hour Game Design Contest: The 3 Hour Game Design Contest is a programming contest for making simple games in 3 hours. 3 hours may not seem like enough time to make a game, b...BI Monkey SSIS ETL Framework: The BI Monkey SSIS ETL Framework is an ETL Execution, Control and Logging system for ETL projects using SSIS. It is supported by a SQL Server metad...Blend Sample Data Helpers: Helper behavior classes to generate sample images and data from Internet sources such as Flickr images. Bold TCP for Delphi 7: Open Sourcing the Bold TCP for Delphi 7.cfThreadingTools: This library project contains classes and extensions which will allow easy handling of multi-threaded UI-accesses.CuBiX_SDL: CuBiX_SDL : CuBiX est un projet personnel.Draglets: Draglets makes it easier for editors and CMS-developers to move and reorder content at their web sites. It's developed in ASP.NET, C# with WCF and ...DSQLT - Dynamic SQL Templates: DSQLT - Dynamic SQL Templates Use Stored Procedures as templates for dynamic SQL statements. Substitute parameters @0-@9 with values like objectna...Edtter: Edtter is a sample web application built on ASP.NET MVC 2 Framework. (Japanese Version Only)Forms Based Authentication Management - SharePoint2007FBA: This is my own update to Stacy Draper's FBABasic project for Forms Based Authentication in MOSS 2007. In additon to managing your fba user's roles,...Height Map to 3D World at XNA: Height Map to 3D World is a XNA project that developed firstly by Eric Grossinger and secondly improved by Karadeniz Technical University Computer ...HouseFly: A simple contact and note taking applicationITM 495 - iPhone Web App: School ProjectKaufleute: This will be finished laterLR: this project is about connecting toPowerShell Integration Services: A set of tools aimed at Extract Transform and Load tasks. Focused on getting the most common ETL tasks done without SSIS. Salient: A collection of, hopefully, useful libraries.Samurai.Validation: Extensible and flexible .Net object validation frameworkSamurai.Workflow: Samurai Workflow is a slim, easy-to-use workflow framework for WPF applications.SharePoint User Management WebPart: SharePoint User Management WebPartUrl shorte(ne)r: It's simple Url Shortener (like: http://tinyurl.com) Currently only Polish language is supported. In future will be provided multi language suppor...Yasbg: Yasbg (pronounced yas-bug) is Yet Another Static Blog Generator. It is made in C# using MarkdownSharp for markdown. Currently in alpha. New Releases.NET Extensions - Extension Methods Library: Release 2010.06: Added an universal approach for grouping extension methods like conversions. Conversion are now available on any data type (it's actually extension...3 Hour Game Design Contest: 3H-GDC mVII: This is the collection of game files for the 7th 3H-GDCB&W Port Scanner: Black`n`White Port Scanner 3.0: B&W Port Scanner 3 includes FTP Server detection tool, Better stability, Optimized memory management, Saving & Opening Result sets ... and more new...BI Monkey SSIS ETL Framework: Framework v1 Alpha: This Alpha release is not fully tested and some functionality is not operating as intended.Bluetooth Radar: Version 1.7: UI Changes Device UserControl Randomly placed devices.BugTracker.NET: BugTracker.NET 3.4.1: For the tasks/time tracking feature, added a way of viewing all the tasks at once, not just the tasks for one bug. 
Also added a way of exporting a...cfThreadingTools: cfThreadingTools 0.1.1.8: This is the first public available release. Following items are included: BaseTools-class which allows thread-safe setting of properties and callin...DeepZoom Pivot Constructor: DeepZoom Pivot Constructor v0.1: This is a test release of the library platform - Targets .NET 3.5 No samples yet, etc., but it works well :-)DSQLT - Dynamic SQL Templates: Initial release with License Included: nothing changed but license print procedure included the zip file contains database backup SQL script readmeForms Based Authentication Management - SharePoint2007FBA: SharePoint2007FBA 1.0.0.0: Downloads for the Project solution and the WSP package. Please read the Setup Guide. If you are unfamiliar with setting up Forms Based Authenticati...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET version 0.3: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-widget-version-03.aspx For installation instruc...Framework Detector: FrameworkDetect Support .NET 4 v2: FrameworkDetect Support .NET 4Happy Turtle Plugins for BVI :: Repository Based Versioning for Visual Studio: Happy Turtle 1.0.46860: This is the second beta release of the SVN based version incrementor. Please feel free to create a thread in the discussion tabs and provide feedb...Height Map to 3D World at XNA: 3DWorld: Just open .rar file and extract it any folder and run Proje2Dto3D.exe file.HTML Ruby: 6.20.2: Removed rubyLineSpace option Improved options panel Fixed ruby text font-size rendering issue with complex ruby annotation Removed more waste...HTML Ruby: 6.20.3: Removed unused code Temporary partial fix for Firefox 3.7a4pre nightly buildHTML Ruby: 6.21.0: Added support for current HTML5 ruby annotation format. All ruby annotations are converted to XHTML 1.1 complex ruby annotation.Kooboo HTML form: Kooboo HTML Form Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Menu: Kooboo CMS Menu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Meta: Kooboo Meta Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo PageMenu: Kooboo CMS PageMenu for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Kooboo Search: Kooboo CMS Search module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Numina Application/Security Framework: Numina.Framework Core 50212: Added bulk import user page Added General settings page for updating Company Name, Theme, and API Key Add/Edit application calls Full URL to h...Rawr: Rawr 2.3.14: - Rawr3: Tons of fixes for Rawr3 compatability and UI. - Significant performance improvements all around. - More fixes and improvements to Wowhea...Rich Ajax empowered Web/Cloud Applications: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with enhanced perfor...SharePoint User Management WebPart: User Management Web part 1.0: Most of the organization have one SharePoint Site which is configured with windows authenticated which is for internal employees having AD authenti...SkeinLibManaged: SkeinLibManaged 1.1.0.0 (Beta): This is the compiled DLL with XML documentation, so there should be plenty of context sensitive help and Intellisense. 
This is the Release version,...VCC: Latest build, v2.1.30411.0: Automatic drop of latest buildVFPX: Code References 1.1 Beta: Visit the Code References Info Page for complete information about this release.VisioAutomation: VisioAutomation 2.5.0: VisioAutomation 2.5.0- General cleanup/bugfixes - Many low-level changes the the VisioAutomation extension methods - these are far fewer now - This...Visual Studio DSite: English To Spanish Translator (Visual C++ 2008): A simple english to spanish translator made in visual c 2008, using the Google Translate API.WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.10.00: Whats NewFile Browser: Inherits Folder Permissions from DotNetNuke Updated the Editor to Version 3.2.1 revision 5372 Added CkEditor jQuery Adap...Web/Cloud Applications Development Framework | Visual WebGui: 6.4 beta 2: The first fully featured version of Visual webGui offering web/cloud development tool that puts all ASP.NET Ajax limits behind with unique develope...WPF Data Virtualization: 1.0.0.0: First ReleaseYasbg: Yasbg Alpha: ReadmeYet Another Static Blog Generator is a command line utility that generates static html files for blogs. Currently, it is NOT feed enabled. I...異世界の新着動画: Ver. 10-04-12: ニコ生の仕様変更に対応 アンケート時間の設定追加Most Popular ProjectsWBFS ManagerRawrASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsRawrnopCommerce. Open Source online shop e-commerce solution.AutoPocopatterns & practices – Enterprise LibraryShweet: SharePoint 2010 Team Messaging built with PexFarseer Physics EngineNB_Store - Free DotNetNuke Ecommerce Catalog ModuleIonics Isapi Rewrite FilterBlogEngine.NETBeanProxy

    Read the article

  • CodePlex Daily Summary for Tuesday, November 15, 2011

    CodePlex Daily Summary for Tuesday, November 15, 2011Popular ReleasesTHE NVL Maker: The NVL Maker Ver 3.10: 3.10 ??? ???: ·????????? ·????????? ·???“TJS”?“??”“EXP”?????“???”,???????? ·???“????”???,???????@if~@elsif~@else~@endif????? ·TJS????????? ·???????????else?endif??? ??: ·???FantasyDR?????????Wizard.exe(?????:http://code.google.com/p/nvlmaker-wizard/) ·KAGConfigEx2.exe??(?????:http://kcddp.keyfc.net/bbs/viewthread.php?tid=1374&extra=page%3D1) ·??????????skin??? ????: ·mapbutton????EXP??(??macro_map.ks) ·??????????AnimPlayer.ks?system????(??????AnimPlayer.ks???macro.ks) ·??????????????,?????...CreateHandouts: Latest Version: Latest VersionSQL Monitor - tracking sql server activities: SQLMon 4.1 alpha2: 1. improved object search, escape special characters, support search histories, and remember search option. 2. allow user to set connection time out. 3. allow user to drag & drop sql text or file to editors.SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.4: INSTALLATION Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsWeapsy: 0.4.1 Alpha: Edit Text bug fixedDesktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) 
Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbFluent Validation for .NET: 3.2: Changes since 3.1: Fixed issue #7084 (NotEmptyValidator does not work with EntityCollection<T>) Fixed issue #7087 (AbstractValidator.Custom ignores RuleSets and always runs) Removed support for WP7 for now as it doesn't support co/contravariance without crashing.RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATEAdded support for windows 2003 servers and removed some null reference errors when the registry key was not present List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.New ProjectsAFNC: testArithmetics: arithmetics for silverlight use note pattern by time streamAzon.Library: A collection of extensions, static helpers, AOP attributes. More will added as the project will go on.Chat TextBlock Control: A windows phone 7.1 control Resemble those chat balloon textblocks in the SMS appDiamond Framework: Diamond Framework an Common framework for Diamond Group.DNN Social Helpers: DNN Social HelpersDragon: DragonEasy Video Cropper: A simple application to make cropping videos easy for anyone. - Automatically detects black lines - Uses FFMPEGFluent Resource Mapper: This project aims to develop a framework to assist the internationalization of software using the paradigm Convetion over Configuration.Fully Observable: This project is to create an improved set of observable collections. It provides notifications for when items inside the collection change as well as when the collection itself changes.grpcmnq: no summary at allMathTool: Math tool for silverlight we plan will heve three point .matrix .differential equation .equation of locusnopCommerce Buckaroo payment provider plugin: This is a payment provider plugin for the dutch payment provider BUCKAROO. 
This plugin is developed and tested for nopCommerce version 2+ Phoenix MVVM+C Framework: Phoenix MVVM+C Framework PowerLib: PowerLib extends system .net library.RDRemote: This utility allows to enable the Remote Desktop connections from a remote computer using WMI.Sencha Touch Mini Workflow Framework: A workflow framework for Sencha Touch mobile apps including automatic component management ShWP: helper library for Windows PhoneTimer, Cronômetro e Despertador: Projeto desenvolvido no curso de extensão de C# da UFSCar SorocabaUtilityLibrary.Ajax: AjaxUtilityLibrary.Email: emailUtilityLibrary.FormBase: UtilityLibrary.FormBaseUtilityLibrary.Http: UtilityLibrary for HttpWebRequestUtilityLibrary.Ormapping: ormappingVoiceModel: VoiceModel is a project which make it easier to develop VoiceXML applications using ASP.Net MVC with Razor. It uses the MVVM (Model-View-VoiceModel) design pattern to abstract the voice application to a higher level. It is developed in C# and Razor.WebSite.Request: WebSite.Request launch web request (via XMLHTTP) on website. Use, for example, to make initial request to sharepoint URL and escape "slow first request" problem.Where's my lei, man?: Where's my lei, man?Zombsquare: Aplicación de ejemplo para Windows Phone utilizada en el Windows Phone Roadshow realizado en España en 2011, en esta solución podras encontra ejemplos de: -Diseño en Blend -BingMaps -GeoLocalizacion -Realidad Aumentada -Converters -Mini-trivial -Serialización de objetos ... resistir un apocalipsis Zombie...

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that includes, but not restricted to, how a database table can be mapped to a POJO and vice versa, provides mechanisms how a PersistenceContext can be injected in a @Stateless bean and then be used for performing different operations on the database table and write typesafe queries. There are several well known advantages of RDBMS but the NoSQL movement has gained traction over past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, NoSQL database was designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" is more of a category of databases that is more known for what it is not rather than what it is. The basic principles of NoSQL database are: No need to have a pre-defined schema and that makes them a schema-less database. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between them. Scalability and performance is more important than the entire set of functionality typically provided by an RDBMS. This set of databases provide eventual consistency and/or transactions restricted to single items but more focus on CRUD. Not be restricted to SQL to access the information stored in the backing database. Designed to scale-out (horizontal) instead of scale-up (vertical). This is important knowing that databases, and everything else as well, is moving into the cloud. RBDMS can scale-out using sharding but requires complex management and not for the faint of heart. Unlike RBDMS which require a separate caching tier, most of the NoSQL databases comes with integrated caching. Designed for less management and simpler data models lead to lower administration as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak) Document databases (MongoDB or CouchDB) Graph databases (Neo4J) You may think NoSQL is panacea but as I mentioned above they are not meant to replace the mainstream databases and here is why: RDBMS have been around for many years, very stable, and functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge, brings excitement to developers, but enterprises are cautious about them. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data but that was not the original intention of NoSQL databases and is lacking in that area. Generating any meaningful information other than CRUDing require extensive programming. 
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases, and writing even simple queries can be involved. Enough talking, let's take a look at some code. This blog has published multiple entries on how to access an RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show how you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Let's get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as:

    arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$ ./mongod
    ./mongod --help for help and startup options
    Sun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210 port=27017 dbpath=/data/db/ 64-bit
    Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version 4.5
    Sun Jun 26 20:41:11 [initandlisten] git version: 433bbaa14aaba6860da15bd4de8edf600f56501b
    Sun Jun 26 20:41:11 [initandlisten] build sys info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
    Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017
    Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017

The default directory for the database is /data/db and needs to be created as:

    sudo mkdir -p /data/db/
    sudo chown `id -u` /data/db

You can specify a different directory using the "--dbpath" option. Refer to the Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add the JavaServer Faces framework. Download the MongoDB Java Driver (2.6.3 as of this writing) and add it to the project library by selecting "Properties", "Libraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like:

    <h1>Add a new movie</h1>
    <h:form>
      Name: <h:inputText value="#{movie.name}" size="20"/><br/>
      Year: <h:inputText value="#{movie.year}" size="6"/><br/>
      Language: <h:inputText value="#{movie.language}" size="20"/><br/>
      <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/>
    </h:form>

This page has a simple HTML form with three text boxes and a submit button. The text boxes take the name, year, and language of a movie, and the submit button invokes the "createMovie" method of "movieSessionBean" and then renders "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like:

    <head>
      <title><h1>List of movies</h1></title>
    </head>
    <body>
      <h:form>
        <h:dataTable value="#{movieSessionBean.movies}" var="m">
          <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column>
          <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column>
          <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column>
        </h:dataTable>
      </h:form>
    </body>

This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by the "movieSessionBean.movies" property.
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Try the interactive online shell  The cookbook provides common ways of using MongoDB In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it because the capabilities are going to be very different ? MongoDB uses "BSonObject" class for JSON representation, add @XmlRootElement on a POJO and how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.
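
    Editor’s note: the entry points readers to the SQL-to-MongoDB mapping and the interactive shell. Purely as an illustration of that query-by-example style, and not part of the original Java EE sample, here is a rough TypeScript sketch using the official Node.js driver against the same movieDB/movies names used above; the driver choice, connection string, and sample document are assumptions:

    import { MongoClient } from 'mongodb';

    // Sketch: the same insert/list flow as createMovie()/getMovies(), expressed
    // with the Node.js driver to show MongoDB's query-by-example style.
    async function demo(): Promise<void> {
      const client = new MongoClient('mongodb://localhost:27017'); // assumed local mongod
      await client.connect();
      try {
        const movies = client.db('movieDB').collection('movies');

        // Counterpart of the Java sample's movieColl.insert(movie.toDBObject())
        await movies.insertOne({ name: 'Inception', year: 2010, language: 'English' });

        // find() with an empty filter is roughly: SELECT * FROM movies
        const all = await movies.find({}).toArray();

        // A filter document is roughly: SELECT * FROM movies WHERE year = 2010
        const from2010 = await movies.find({ year: 2010 }).toArray();

        console.log(all.length, from2010.length);
      } finally {
        await client.close();
      }
    }

    demo().catch(console.error);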

    Read the article
